messages (listlengths: 2 to 2)
[ { "content": "Classify the node 'Coordinating Human and Computer Agents . In many application areas individuals are responsible for an agenda of tasks and face choices about the best way to locally handle each task, in what order to do tasks, and when to do them. Such decisions are often hard to make because of coordination problems: individual tasks are related to the tasks of others in complex ways, and there are many sources of uncertainty (no one has a complete view of the task structure at arbitrary levels of detail, the situation may be changing dynamically, and no one is entirely sure of the outcomes of all of their actions). The focus of this paper is the development of support tools for distributed, cooperative work by groups (collaborative teams) of human and computational agents. We will discuss the design of a set of distributed autonomous computer programs (\"agents\") that assist people in coordinating their activities by helping them to manage their agendas. We describe several ongoing implementations of these ideas including 1) simulated agen...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nEnvironment Centered Analysis and Design of Coordination Mechanisms Environment Centered Analysis and Design of Coordination Mechanisms May 1995 KEITH S. DECKER B.S., Carnegie Mellon University M.S., Rensselaer Polytechnic Institute Ph.D., University of Massachusetts Amherst Directed by: Professor Victor R. Lesser Committee: Professor Paul R. Cohen Professor John A. Stankovic Professor Douglas L. Anderton Coordination, as the act of managing interdependencies between activities, is one of the central research issues in Distributed Artificial Intelligence. Many researchers have shown that there is no single best organization or coordination mechanism for all environments. Problems in coordinating the activities of distributed intelligent agents appear in many domains: the control of distributed sensor networks; multi-agent scheduling of people and/or machines; distributed diagnosis of errors in local-area or telephone networks; concurrent engineering; `software agents' for information gathering. The design of coordination mechanisms for groups of compu...\n\n2-Hop Neighbour:\nAn Agent Infrastructure to Build and Evaluate Multi-Agent Systems: The Java Agent Framework and Multi-Agent System Simulator . In this paper, we describe our agent framework and address\n\n2-Hop Neighbour:\nConstraints and Agents in MADEsmart As part of the DARPA Rapid Design Exploration and Optimization (RaDEO) program, Boeing, Philadelphia, is involved in an on-going concurrent design engineering research project called MADEsmart which seeks to partially automate the Integrated Product Team (IPT) concept used by Boeing for organizing the design engineering process, with the aid of intelligent agent technology. Although currently only in an early stage of development, the project is expected to crucially employ a constraint-centered System Design Management Agent developed by the University of Toronto's IE Department in conjunction with Boeing. 
The SDMA will use the constraint-based Toronto Ontologies for a Virtual Enterprise (TOVE) ontologies, and its domain theories for design engineering and dependent underlying theories, phrased as KIF/Ontolingua assertions in an axiomatic system running in the constraint logic system ECLiPSe, as its primary knowledge resource to monitor an ongoing design project, offering resource-all...\n\n2-Hop Neighbour:\nCoordination Assistance for Mixed Human and Computational Agent Systems In many application areas (such as concurrent engineering, software development, hospital scheduling, manufacturing scheduling, and military planning), individuals are responsible for an agenda of tasks and face choices about the best way to locally handle each task, in what order to do tasks, and when to do them. Such decisions are often hard to make because of coordination problems: individual tasks are related to the tasks of others in complex ways, and there are many sources of uncertainty (no one has a complete view of the task structure at arbitrary levels of detail, the situation may be changing dynamically, and no one is entirely sure of the outcomes of all of their actions). The focus of this paper is the development of support tools for distributed, cooperative work by groups (collaborative teams) of human and computational agents. We will discuss the design of a set of distributed autonomous computer programs (\"agents\") that assist people in coordinating their activities by ...\n\n2-Hop Neighbour:\nRisk Management in Concurrent Engineering in Presence of Intelligent Agents Contents 1 Introduction 2 1.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.2 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.3 Proposal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.4 Working principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2 Current Status 5 2.1 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.2 SDMA as an agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 2.3 Assumptions about the environment . . . . . . . . . . . . . . . . . . . . . . . . . . 7 2.4 SDMA-Risk Version 0.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 3 Summary 8 A KQML Specification for SD\n\n2-Hop Neighbour:\nCoordinating Mutually Exclusive Resources using GPGP Hospital Patient Scheduling is an inherently distributed problem because of the way real hospitals are organized. As medical procedures have become more complex, and their associated tests and treatments have become interrelated, the current ad hoc patient scheduling solutions have been observed to break down. We propose a multi-agent solution using the Generalized Partial Global Planning (GPGP) approach that preserves the existing human organization and authority structures, while providing better system-level performance (increased hospital unit throughput and decreased patient stay time). To do this, we extend GPGP with a new coordination mechanism to handle mutually exclusive resource relationships. Like the other GPGP mechanisms, the new mechanism can be applied to any problem with the appropriate resource relationship. 
We evaluate this new mechanism in the context of the hospital patient scheduling problem, and examine the effect of increasing interrelations between tasks performed...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Agent-Oriented Software Engineering ion: The process of defining a simplified model of the system that emphasises some of the details or properties, while suppressing others. . Organisation 1 : The process of identifying and managing interrelationships between various problem solving components. Next, the characteristics of complex systems need to be enumerated [8]: . Complexity frequently takes the form of a hierarchy. That is, a system that is composed of inter-related sub-systems, each of which is in turn hierarchic in structure, until the lowest level of elementary sub-system is reached. The precise nature of these organisational relationships varies between sub-systems, however some generic forms (such as client-server, peer, team, etc.) can be identified. These relationships are not static: they often vary over time. . The choice of which components in the system are primitive is relatively arbitrary and is defined by the observer's aims and objectives. . Hierarchic systems evolve more quickly than non-hiera...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nAbstractions and Infrastructures for the Design and Development of Mobile Agent Organizations Internet applications can take advantage of a paradigm based on autonomous and mobile agents. However, suitable abstractions and infrastructures are required for the effective engineering of such applications. In this paper, we argue that a conceptual framework for context-dependent coordination, supported by an infrastructure based on programmable media, can promote a modular and easy to manage approach to the design and development of mobile agent applications in terms of computational organizations. The MARS coordination infrastructure is presented as an implementation of a coordination infrastructure promoting context-dependent coordination. A case study in the area of workflow management is introduced to clarify the concepts presented. Keywords: Mobile Agents, Agent Organizations, Coordination Infrastructures, Agent-oriented Software Engineering. 1\n\n1-Hop Neighbour:\nAn Application of Agent UML to Supply Chain Management Agent UML is certainly the most well-known graphical modeling language for describing multiagent systems but until now, it is not applied to real-world applications. The aim of our project is to apply Agent UML to the Supply Chain Management. This project has several objectives: (1) it allows to prove that Agent UML can be applied to real-world applications, (2) it allows to discover what is missing in Agent UML and what is wrong and finally, (3) it allows to define a methodology based on Agent UML and several tools. The aim of this paper is to sum up our first results on appying Agent UML to the Supply Chain Management and especially, the paper sketches what diagrams are interesting and what could be done after in our project.\n\n1-Hop Neighbour:\nDetermining When to Use an Agent-Oriented Software Engineering Paradigm . With the emergence of agent-oriented software engineering techniques, software engineers have a new way of conceptualizing complex distributed software requirements. To help determine the most appropriate software engineering methodology, a set of defining criteria is required. 
In this paper, we describe out approach to determining these criteria, as well as a technique to assist software engineers with the selection of a software engineering methodology based on those criteria. 1\n\n2-Hop Neighbour:\nAutomated Derivation of Complex Agent Architectures from Analysis Specifications . Multiagent systems have been touted as a way to meet the need for distributed software systems that must operate in dynamic and complex environments. However, in order for multiagent systems to be effective, they must be reliable and robust. Engineering multiagent systems is a non-trivial task, providing ample opportunity for even experts to make mistakes. Formal transformation systems can provide automated support for synthesizing multiagent systems, which can greatly improve their correctness and reliability. This paper describes a semi-automated transformation system that generates an agent's internal architecture from an analysis specification in the MaSE methodology. 1\n\n2-Hop Neighbour:\nMARS: a Programmable Coordination Architecture for Mobile Agents Mobile agents represent a promising technology for the development of Internet applications. However, mobile computational entities introduce peculiar problems w.r.t. the coordination of the application components. The paper outlines the advantages of Linda-like coordination models, and shows how a programmable coordination model based on reactive tuple spaces can provide further desirable features for Internet applications based on mobile agents. Accordingly, the paper presents the design and the implementation of the MARS coordination architecture for Java-based mobile agents. MARS defines Linda-like tuple spaces, which can be programmed to react with specific actions to the accesses made by mobile agents.\n\n2-Hop Neighbour:\nLIME: Linda Meets Mobility Lime is a system designed to assist in the rapid development of dependable mobile applications over both wired and ad hoc networks. Mobile agents reside on mobile hosts and all communication takes place via transiently shared tuple spaces distributed across the mobile hosts. The decoupled style of computing characterizing the Linda model is extended to the mobile environment. At the application level, both agents and hosts perceive movement as a sudden change of context. The set of tuples accessible by a particular agent residing on a given host is altered transparently in response to changes in the connectivity pattern among the mobile hosts. In this paper we present the key design concepts behind the Lime system. 1 INTRODUCTION Today's users demand ubiquitous network access independent of their physical location. This style of computation, often referred to as mobile computing, is enabled by rapid advances in the wireless communication technology. The networking scenarios enabled ...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'On Case-Based Representability and Learnability of Languages . Within the present paper we investigate case-based representability as well as case-based learnability of indexed families of uniformly recursive languages. Since we are mainly interested in case-based learning with respect to an arbitrary fixed similarity measure, case-based learnability of an indexed family requires its representability, first. We show that every indexed family is case-based representable by positive and negative cases. If only positive cases are allowed the class of representable families is comparatively small. Furthermore, we present results that provide some bounds concerning the necessary size of case bases. We study, in detail, how the choice of a case selection strategy influences the learning capabilities of a case-based learner. We define different case selection strategies and compare their learning power to one another. Furthermore, we elaborate the relations to Gold-style language learning from positive and both positive and negative examples. 1 Introdu...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nCase-Based Learning Algorithms Abstract. Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several realworld databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.\n\n1-Hop Neighbour:\nCase-Based Representation and Learning of Pattern Languages Pattern languages seem to suit case-based reasoning particularly well. Therefore, the problem of inductively learning pattern languages is paraphrased in a case-based manner. A careful investigation requires a formal semantics for case bases together with similarity measures in terms of formal languages. Two basic semantics are introduced and investigated. It turns out that representability problems are major obstacles for case-based learnability. Restricting the attention to so-called proper patterns avoids these representability problems. A couple of learnability results for proper pattern languages are derived both for case-based learning from only positive data and for case-based learning from positive and negative data. 
Under the so-called competing semantics, we show that the learnability result for positive and negative data can be lifted to the general case of arbitrary patterns. Learning under the standard semantics from positive data is closely related to monotonic language l...\n\n2-Hop Neighbour:\nSolving Stabilization Problems in Case-Based Knowledge Acquisition Case-based reasoning is widely deemed an important methodology towards alleviating the bottleneck of knowledge acquisition. The key idea is to collect cases representing a human's or a system's experience directly rather than trying to construct generalizations. Episodic knowledge accumulated this way may be used flexibly for different purposes by determining similarities between formerly solved problems and current situations under investigation. But the flexibility of case-based reasoning brings with it a number of disadvantages. One crucial difficulty is that every new experience might seem worth to be memorized. As a result, a case-based reasoning system may substantially suffer from collecting a huge amount of garbage without being able to separate the chaff from the wheat. This paper presents a case study in case-based learning. Some target concept has to be learned by collecting cases and tuning similarity concepts. It is extremely difficult to avoid collecting a huge amount of ...\n\n2-Hop Neighbour:\nCase-Based and Symbolic Classification Algorithms . Contrary to symbolic learning approaches, that represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB;sim), i.e. by a measure of similarity sim and a set CB of cases. This poses the question if there are any differences concerning the learning power of the two approaches. In this article we will study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the hypothesis of the equivalence of the learning power of symbolic and casebased methods and show the interdependency between the measure used by a case-based algorithm and the target concept. 1 Introduction In this article we want to compare the learning power of two important learning paradigms -- the symbolic and the case-based approach [1, 4]. As a first step ...\n\n2-Hop Neighbour:\nOn Case-Based Learnability of Languages Case-based reasoning is deemed an important technology to alleviate the bottleneck of knowledge acquisition in Artificial Intelligence (AI). In case-based reasoning, knowledge is represented in the form of particular cases with an appropriate similarity measure rather than any form of rules. The case-based reasoning paradigm adopts the view that an AI system is dynamically changing during its life-cycle which immediately leads to learning considerations. Within the present paper, we investigate the problem of case-based learning of indexable classes of formal languages. Prior to learning considerations, we study the problem of case-based representability and show that every indexable class is case-based representable with respect to a fixed similarity measure. Next, we investigate several models of case-based learning and systematically analyze their strengths as well as their limitations. 
Finally, the general approach to case-based learnability of indexable classes of form...\n\n2-Hop Neighbour:\nLogical Case Memory Systems: Foundations And Learning Issues The focus of this paper is on the introduction of a quite general type of case-based reasoning systems called logical case memory systems. The development of the underlying concepts has been driven by investigations in certain problems of case-based learning. Therefore, the present development of the target concepts is accompanied by an in-depth discussion of related learning problems. Logical case memory systems provide some formal framework for the investigation and for the application of structural similarity concepts. Those concepts have some crucial advantage over traditional numerical similarity concepts: The result of determining a new case's similarity to some formerly experienced case can be directly taken as a basis for performing case adaptation. Essentially, every logical case memory system consists of two constituents, some partially ordered case base and some partially ordered set of predicates. Cases are terms, in a logical sense. Given some problem case, every predicat...\n\n2-Hop Neighbour:\nTypes of Incremental Learning This paper is intended to introduce a closer look at incremental learning by developing the two concepts of informationally incremental learning and operationally incremental learning. These concept are applied to the problem of learning containment decision lists for demonstrating its relevance. 1 Introduction The intention of the present paper is to introduce two new notions in incremental learning which allow a classification of phenomena finer than known so far in the area. These concepts are denoted by the phrases informationally incremental learning and operationally incremental learning, respectively. Roughly spoken, informationally incremental algorithms are required to work incrementally as usual, i.e. they have no permission to look back at the whole history of information presented during the learning process. Operationally incremental learning algorithms may have permission to look back, but they are not allowed to use information of the past in some effective way. ...", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'An Introduction to Software Agents ion and delegation: Agents can be made extensible and composable in ways that common iconic interface objects cannot. Because we can \"communicate\" with them, they can share our goals, rather than simply process our commands. They can show us how to do things and tell us what went wrong (Miller and Neches 1987). . Flexibility and opportunism: Because they can be instructed at the level of 16 BRADSHAW goals and strategies, agents can find ways to \"work around\" unforeseen problems and exploit new opportunities as they help solve problems. . Task orientation: Agents can be designed to take the context of the person's tasks and situation into account as they present information and take action. . Adaptivity: Agents can use learning algorithms to continually improve their behavior by noticing recurrent patterns of actions and events. Toward Agent-Enabled System Architectures In the future, assistant agents at the user interface and resource-managing agents behind the scenes will increas...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nSemantics for an Agent Communication Language . We address the issue of semantics for an agent communication language. In particular, the semantics of Knowledge Query Manipulation Language (KQML) is investigated. KQML is a language and protocol to support communication between (intelligent) software agents. Based on ideas from speech act theory, we present a semantic description for KQML that associates \"cognitive\" states of the agent with the use of the language's primitives (performatives). We have used this approach to describe the semantics for the whole set of reserved KQML performatives. Our research offers a method for a speech act theory-based semantic description of a language of communication acts. Languages of communication acts address the issue of communication between software applications at a level of abstraction that could prove particularly useful to the emerging software agents paradigm of software design and development. 1 Introduction This research is concerned with communication between software agents [13]...\n\n1-Hop Neighbour:\nAutomatic Facial Expression Interpretation: Where Human-Computer Interaction, Artificial Intelligence and Cognitive Science Intersect this paper is to attempt to bring together people, results and questions from these three different disciplines -- HCI, AI, and Cognitive Science -- to explore the potential of building computer interfaces which understand and respond to the richness of the information conveyed in the human face. Until recently, information has been conveyed from the computer to the user mainly via the visual channel, whereas inputs from the user to the computer have been made from the keyboard and pointing devices via the user's motor channel. The recent emergence of multimodal interfaces as our everyday tools might restore a better balance between our physiology and sensory/motor skills, and impact (for the better we hope), the richness of activities we will find ourselves involved in. 
Given recent progress in user-interface primitives composed of gesture, speech, context and affect, it seems feasible to design environments which do not impose themselves as computer environments, but have a much more natural feeling associated with them.\n\n1-Hop Neighbour:\nAutomated Discovery of Concise Predictive Rules for Intrusion Detection This paper details an essential component of a multi-agent distributed knowledge network system for intrusion detection. We describe a distributed intrusion detection architecture, complete with a data warehouse and mobile and stationary agents for distributed problem-solving to facilitate building, monitoring, and analyzing global, spatio-temporal views of intrusions on large distributed systems. An agent for the intrusion detection system, which uses a machine learning approach to automated discovery of concise rules from system call traces, is described. We use a feature vector representation to describe the system calls executed by privileged processes. The feature vectors are labeled as good or bad depending on whether or not they were executed during an observed attack. A rule learning algorithm is then used to induce rules that can be used to monitor the system and detect potential intrusions. We study the performance of the rule learning algorithm on this task with an...\n\n2-Hop Neighbour:\nCoordinating Adaptations in Open Service Architectures An Open Service Architecture (OSA) is a software structure that makes an open set of information services available to an open set of users. The World Wide Web constitutes the most outstanding example of an OSA as of today. An important feature of an OSA is personalization, i.e. adapting the user interface, functionality, and information of services to its users. However, designers of such a feature are facing many problems, perhaps the biggest one being coordination. If services fail to coordinate how they adapt to users, chances are that the whole point of performing the adaptation, i.e. helping the user, is lost. In this thesis, I lay out a framework for describing and reasoning about adaptive systems in Open Service Architectures, with a special emphasis on coordination. This framework is mainly meant for analysis and design, but some of the ideas presented are also suitable as metaphors for implementations. An implementation of an adaptive system that was designed using this fram...\n\n2-Hop Neighbour:\nData Mining Approaches for Intrusion Detection In this paper we discuss our research in developing general and systematic methods for intrusion detection. The key ideas are to use data mining techniques to discover consistent and useful patterns of system features that describe program and user behavior, and use the set of relevant system features to compute (inductively learned) classifiers that can recognize anomalies and known intrusions. Using experiments on the sendmail system call data and the network tcpdump data, we demonstrate that we can construct concise and accurate classifiers to detect anomalies. We provide an overview on two general data mining algorithms that we have implemented: the association rules algorithm and the frequent episodes algorithm. These algorithms can be used to compute the intra- and inter- audit record patterns, which are essential in describing program or user behavior. The discovered patterns can guide the audit data gathering process and facilitate feature selection. 
To meet the challenges of both efficient learning (mining) and real-time detection, we propose an agent-based architecture for intrusion detection systems where the learning agents continuously compute and provide the updated (detection) models to the detection agents.\n\n2-Hop Neighbour:\nAn Overview of Active Information Gathering in InfoSleuth InfoSleuth is a system of collaborating software agents that can be configured to perform many different information management activities in a distributed environment. InfoSleuth agents advertise semantic constraints about themselves to InfoSleuth brokers using a global domain ontology. When queried, a broker reasons over these constraints to determine the minimal set of agents that can provide a solution to the query. InfoSleuth's architecture is based on a generic agent shell that provides basic agent communication behaviors over a subset of Knowledge Query Manipulation Language. Individual agents are subclasses of this generic shell that provide specific kinds of functionality. InfoSleuth agents perform a number of complex query activities that require resolving ontology-based queries over dynamically changing, distributed, heterogeneous resources, including distributed query, location-independent single-resource updates, event monitoring by means of subscription/notification servi...\n\n2-Hop Neighbour:\nDetecting Intrusions Using System Calls: Alternative Data Models Intrusion detection systems rely on a wide variety of observable data to distinguish between legitimate and illegitimate activities. In this paper we study one such observable---sequences of system calls into the kernel of an operating system. Using system-call data sets generated by several different programs, each consisting both of normal and intrusive behavior, we compare the ability of different data-modeling methods to represent normal behavior accurately and to recognize intrusions. We compare the following methods: simple enumeration of observed sequences, methods based on relative frequencies of different sequences, a data mining technique, and Hidden Markov Models (HMMs). All of the methods perform adequately, with HMMs giving the best overall results. We discuss the factors affecting the performance of each method, and conclude that for this particular problem, weaker methods than HMMs are likely sufficient. 1 Introduction In 1996, Forrest and others introduced a simple in...\n\n2-Hop Neighbour:\nFeature Subset Selection Using A Genetic Algorithm Practical pattern classification and knowledge discovery problems require selection of a subset of attributes or features (from a much larger set) to represent the patterns to be classified. This paper presents an approach to the multi-criteria optimization problem of feature subset selection using a genetic algorithm. Our experiments demonstrate the feasibility of this approach for feature subset selection in the automated design of neural networks for pattern classification and knowledge discovery. 1 Introduction Many practical pattern classification tasks (e.g., medical diagnosis) require learning of an appropriate classification function that assigns a given input pattern (typically represented using a vector of attribute or feature values) to one of a finite set of classes. 
The choice of features, attributes, or measurements used to represent patterns that are presented to a classifier affect (among other things): ffl The accuracy of the classification function that can be learn...", "role": "user" }, { "content": "<answer>Human-Computer Interaction</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Pedagogical Content Knowledge in a Tutorial Dialogue System to Support Self-Explanation We are engaged in a research project to create a tutorial dialogue system that helps students learn through self-explanation. Our current prototype is able to analyze students' general explanations of their problem-solving steps, stated in their own words, recognize the types of omissions that we often see in these explanations, and provide feedback. Our approach to architectural tradeoffs is to equip the system with a sophisticated NLU component but to keep dialogue management simple. The system has a knowledge-based NLU component, which performed with 81% accuracy in a preliminary evaluation study. The system's approach to dialogue management can be characterised as \"classify and react\". In each dialogue cycle, the system classifies the student input with respect to a hierarchy of explanation categories that represent common ways of stating complete or incomplete explanations of geometry rules. The system then provides feedback based on that classification. We consider what extensions are necessary or desirable in order to make the dialogues more robust.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nToward Computer-Based Support of Meta-Cognitive Skills: a Computational Framework to Coach Self-Explanation this paper, we describe how these solutions have been implemented in a computer tutor that coaches self-explanation within Andes, a tutoring system for Newtonian physics. We also present the results of a formal study to evaluate the usability and effectiveness of the system. Finally, we discuss some hypotheses to explain the obtained results, based on the analysis of the data collected during the study. INTRODUCTION Research on Intelligent Tutoring Systems (ITS) has been increasingly affecting education. While for many years ITS remained confined to research labs, today they have started moving into the classroom, showing their effectiveness for learning and influencing the structure of traditional curricula (Koedinger, Anderson, H., & Mark, 1995). However, existing ITS still target only a limited part of the learning process. They generally focus on teaching problem solving and domain specific cognitive skills. The long-term goal of our research is to explore innovat\n\n2-Hop Neighbour:\nA Tutorial Dialogue System with Knowledge-Based Understanding and Classification of Student Explanations We are engaged in a research project to create a tutorial dialogue system that helps students to explain the reasons behind their problem-solving actions, in order to help them learn with greater understanding. Currently, we are pilottesting a prototype system that is able to analyze student explanations, stated in their own words, recognize the types of omissions that we typically see in these explanations, and provide feedback. The system takes a knowledge-based approach to natural language understanding and uses a statistical text classifier as a backup. The main features are: robust parsing, logic-based representation of semantic content, representation of pedagogical content knowledge in the form of a hierarchy of partial and complete explanations, and reactive dialogue management. 
A preliminary evaluation study indicates that the knowledge-based natural language component correctly classifies 80% of explanations and produces a reasonable classification for all but 6% of explanations. 1", "role": "user" }, { "content": "<answer>Human-Computer Interaction</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Keyword Searching and Browsing in Databases using BANKS With the growth of the Web, there has been a rapid increase in the number of users who need to access online databases without having a detailed knowledge of the schema or of query languages; even relatively simple query languages designed for non-experts are too complicated for them. We describe BANKS, a system which enables keyword-based search on relational databases, together with data and schema browsing. BANKS enables users to extract information in a simple manner without any knowledge of the schema or any need for writing complex queries. A user can get information by typing a few keywords, following hyperlinks, and interacting with controls on the displayed results. BANKS models tuples as nodes in a graph, connected by links induced by foreign key and other relationships. Answers to a query are modeled as rooted trees connecting tuples that match individual keywords in the query. Answers are ranked using a notion of proximity coupled with a notion of prestige of nodes based on inlinks, similar to techniques developed for Web search. We present an efficient heuristic algorithm for finding and ranking query results. 1.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nBBQ: A Visual Interface for Integrated Browsing and Querying of XML In this paper we present BBQ (Blended Browsing and Querying), a graphic user interface for seamlessly browsing and querying XML data sources. BBQ displays the structure of multiple data sources using a paradigm that resembles drilling-down in Windows' directory structures. BBQ allows queries incorporating one or more of the sources. Queries are constructed in a query-by-example (QBE) manner, where DTDs play the role of schema. The queries are arbitrary conjunctive queries with GROUPBY, and their results can be subsequently used and refined. To support query refinement, BBQ introduces virtual result views: standalone virtual data sources that (i) are constructed by user queries, from elements in other data sources, and (ii) can be used in subsequent queries as first-class data sources themselves. Furthermore, BBQ allows users to query data sources with loose or incomplete schema, and can augment such schema with a DTD inference mechanism.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. 
This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nThe Shape of the Web and Its Implications for Searching the Web With the rapid growth of the number of web pages, designing a search engine that can retrieve high quality information in response to a user query is a challenging task. Automated search engines that rely on keyword matching usually return too many low quality matches and they take a long time to run. It is argued in the literature that link-following search methods can substantially increase the search quality, provided that these methods use an accurate assumption about useful patterns in the hyperlink topology of the web. Recent work in the field has focused on detecting identi able patterns in the web graph and exploiting this information to improve the performance of search algorithms. We survey relevant work in this area and comment on the implications of these patterns for other areas such as advertisement and marketing.\n\n2-Hop Neighbour:\nThe Missing Link - A Probabilistic Model of Document Content and Hypertext Connectivity We describe a joint probabilistic model for modeling the contents and inter-connectivity of document collections such as sets of web pages or research paper archives. The model is based on a probabilistic factor decomposition and allows identifying principal topics of the collection as well as authoritative documents within those topics. Furthermore, the relationships between topics is mapped out in order to build a predictive model of link content. Among the many applications of this approach are information retrieval and search, topic identification, query disambiguation, focused web crawling, web authoring, and bibliometric analysis.\n\n2-Hop Neighbour:\nThe XML Benchmark Project With standardization efforts of a query language for XML documents drawing to a close, researchers and users increasingly focus their attention on the database technology that has to deliver on the new challenges that the sheer amount of XML documents produced by applications pose to data management: validation, performance evaluation and optimization of XML query processors are the upcoming issues. Following a long tradition in database research, the XML Store Benchmark Project provides a framework to assess an XML database's abilities to cope with a broad spectrum of different queries, typically posed in real-world application scenarios. The benchmark is intended to help both implementors and users to compare XML databases independent of their own, specific application scenario. To this end, the benchmark o ers a set queries each of which is intended to challenge a particular primitive of the query processor or storage engine. The overall workload wepropose consists of a scalable document database and a concise, yet comprehensive set of queries, which covers the major aspects of query processing. 
The queries' challenges range from stressing the textual character of the document to data analysis queries, but include also typical ad-hoc queries. We complement our research with results obtained from running the benchmark on our XML database platform. They are intended to give a rst baseline, illustrating the state of the art.\n\n2-Hop Neighbour:\nDetection of Heterogeneities in a Multiple Text Database Environment As the number of text retrieval systems (search engines) grows rapidly on the World Wide Web, there is an increasing need to build search brokers (metasearch engines) on top of them. Often, the task of building an effective and efficient metasearch engine is hindered by the heterogeneities among the underlying local search engines. In this paper, we first analyze the impact of various heterogeneities on building a metasearch engine. We then present some techniques that can be used to detect the most prominent heterogeneities among multiple search engines. Applications of utilizing the detected heterogeneities in building better metasearch engines will be provided.\n\n2-Hop Neighbour:\nUsing Common Hypertext Links to Identify the Best Phrasal Description of Target Web Documents This paper describes previous work which studied and compared the distribution of words in web documents with the distribution of words in \"normal\" flat texts. Based on the findings from this study it is suggested that the traditional IR techniques cannot be used for web search purposes the same way they are used for \"normal\" text collections, e.g. news articles. Then, based on these same findings, I will describe a new document description model which exploits valuable anchor text information provided on the web that is ignored by the traditional techniques. The problem Amitay (1997) has found, through a corpus analysis of a 1000 web pages that the lexical distribution in documents which were written especially for the web (home pages), is significantly different than the lexical distribution observed in a corpus of \"normal\" English language (the British National Corpus - 100,000,000 words). For example, in the web documents collection there were some HTML files which contained no v...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Authoring and Annotation of Web Pages in CREAM Richly interlinked, machine-understandable data constitute the basis for the Semantic Web. We provide a framework, CREAM, that allows for creation of metadata. While the annotation mode of CREAM allows to create metadata for existing web pages, the authoring mode lets authors create metadata --- almost for free --- while putting together the content of a page. As a particularity of our framework, CREAM allows to create relational metadata, i.e. metadata that instantiate interrelated definitions of classes in a domain ontology rather than a comparatively rigid template-like schema as Dublin Core. We discuss some of the requirements one has to meet when developing such an ontology-based framework, e.g. the integration of a metadata crawler, inference services, document management and a meta-ontology, and describe its implementation, viz. Ont-O-Mat a component-based, ontology-driven Web page authoring and annotation tool.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nOn2broker: Semantic-Based Access to Information Sources at the WWW On2broker provides brokering services to improve access to heterogeneous, distributed and semistructured information sources as they are presented in the World Wide Web. It relies on the use of ontologies to make explicit the semantics of web pages. In the paper we will discuss the general architecture and main components of On2broker and provide some application scenarios. 1. Introduction In the paper we describe a tool environment called On2broker 1 that processes information sources and content descriptions in HTML, XML, and RDF and that provides intelligent information retrieval, query answering and maintenance support. Central for our approach is the use of ontologies to describe background knowledge and to make explicit the semantics of web documents. Ontologies have been developed in the area of knowledge-based systems for structuring and reusing large bodies of knowledge (cf. CYC [Lenat, 1995], (KA)2 [Benjamins et al., 1998]). Ontologies are consensual and formal specificat...\n\n1-Hop Neighbour:\nUsing an Expressive Description Logic: FaCT or Fiction? Description Logics form a family of formalisms closely related to semantic networks but with the distinguishing characteristic that the semantics of the concept description language is formally defined, so that the subsumption relationship between two concept descriptions can be computed by a suitable algorithm. Description Logics have proved useful in a range of applications but their wider acceptance has been hindered by their limited expressiveness and the intractability of their subsumption algorithms. This paper addresses both these issues by describing a sound and complete tableaux subsumption testing algorithm for a relatively expressive Description Logic which, in spite of the logic's worst case complexity, has been shown to perform well in realistic applications. 1 INTRODUCTION Description Logics (DLs) form a family of formalisms which have grown out of knowledge representation techniques using frames and semantic networks\n\n1-Hop Neighbour:\nEmbedding Knowledge in Web Documents The paper argues for the use of general and intuitive knowledge representation languages (and simpler notational variants, e.g. 
subsets of natural languages) for indexing the content of Web documents and representing knowledge within them. We believe that these languages have advantages over metadata languages based on the Extensible Mark-up Language (XML). Indeed, the retrieval of precise information is better supported by languages designed to represent semantic content and support logical inference, and the readability of such a language eases its exploitation, presentation and direct insertion within a document (thus also avoiding information duplication). We advocate the use of Conceptual Graphs and simpler notational variants that enhance knowledge readability. To further ease the representation process, we propose techniques allowing users to leave some knowledge terms undeclared. We also show how lexical, structural and knowledge-based techniques may be combined to retrieve or ...\n\n2-Hop Neighbour:\nReasoning with Concrete Domains Description logics are knowledge representation and reasoning formalisms which represent conceptual knowledge on an abstract logical level. Concrete domains are a theoretically well-founded approach to the integration of description logic reasoning with reasoning about concrete objects such as numbers, time intervals or spatial regions. In this paper, the complexity of combined reasoning with description logics and concrete domains is investigated. We extend ALC(D), which is the basic description logic for reasoning with concrete domains, by the operators \"feature agreement\" and \"feature disagreement\". For the extended logic, called ALCF(D), an algorithm for deciding the ABox consistency problem is devised. The strategy employed by this algorithm is vital for the efficient implementation of reasoners for description logics incorporating concrete domains. Based on the algorithm, it is proved that the standard reasoning problems for both logics ALC(D) and ALCF(D) are PSpace-co...\n\n2-Hop Neighbour:\nRevisiting and Versioning in Virtual Special Reports Adaptation/personalization is one of the main issues for web applications and require large repositories. Creating adaptive web applications from these repositories requires to have methods to facilitate web application creation and management and to ensure reuse, sharing and exchange of data through the internet/intranet. Virtual documents deal with these issues. In our framework, we are interested in adaptive virtual documents for author-oriented web applications providing several reading strategies to readers. These applications have the following characteristics: authors have know-how which enables them to choose document contents and to organize them in one or more consistent ways. A reading strategy and the corresponding content are semantically coherent and convey a particular meaning to the readers. Such author's know-how can be represented at knowledge level and then be used for generating web documents dynamically, for ensuring reader comprehension and for sharing and reuse. Then an adaptive virtual document can be computed on the fly by means of a semantic composition engine using: i) an overall document structure -- for instance a narrative structure - representing a reading strategy for which node contents are linked at run time, according to user's needs for adaptation, ii) an intelligent search engine and semantic metadata relying on semantic web initiative, and iv) a user model. 
In this paper, we focus on a semantic composition engine enabling us to compute on the fly adaptive/personalized web documents in the ICCARS project. Its main goal is to assist the journalist in building adaptive special reports. In such a framework, adaptation, personalization and reusability are central issues for delivering adaptive special reports.\n\n2-Hop Neighbour:\nA Description Logic with Transitive and Inverse Roles and Role Hierarchies transitive roles play an important rôle in the adequate representation of aggregated objects: they allow these objects to be described by referring to their parts without specifying a level of decomposition. In [Horrocks & Gough, 1997], the Description Logic (DL) ALCH R + is presented, which extends ALC with transitive roles and a role hierarchy. It is argued in [Sattler, 1998] that ALCH R + is well-suited to the representation of aggregated objects in applications that require various part-whole relations to be distinguished, some of which are transitive. For example, a medical knowledge base could contain the following entries defining two different parts of the brain, namely the gyrus and the cerebellum. In contrast to a gyrus, a cerebellum is an integral organ and, furthermore, a functional component of the brain. Hence the role is component (which is a non-transitive sub-role of is part) is used to describe the relation between the brain and the cerebellum: is component ⊑ is part gyrus:= (∀consists.brain mass) ⊓ (∃is part.brain) cerebellum:= organ ⊓ (∃is component.brain) However, ALCH R + does not allow the simultaneous description of parts by means of the whole to which they belong and of wholes by means of their constituent parts: one or other is possible, but not both. To overcome this limitation, we present the DL ALCHI R + which extends ALCH R + with inverse (converse) roles, allowing, for example, the use of has part as well as is part. 1 Using ALCHIR +, we can define a tumorous brain as: tumorous brain:= brain ⊓ (tumorous ⊔ (∃has part.tumorous)) Part of this work was carried out while being a guest at IRST,\n\n2-Hop Neighbour:\nKnowledge Retrieval and the World Wide Web }) )$ Figure 5. Images, knowledge indexations, and a customized query interface contained within one document. The sample query shows how the command spec, which looks for specializations of a conceptual graph, can be used to retrieve images CGs indexed. (Figure 7 gives the results.) [Cat]->(On)->[Table]. In WordNet, cat has five meanings (feline, gossiper, x-ray, beat, and vomit), and table has five meanings (array, furniture, tableland, food, and postpone) . In the WebKB ontology, the relation type On connects a concept of type Spatial_entity to another concept of the same type. Thus, WebKB can infer that beat and vomit are not the intended meanings for cat, and array and postpone are not the intended meanings for table. To further identify the intended meanings, WebKB could prompt the following questions to the user: \"Does cat refer to feline, gossiper, x-ray, or something else?\" and \"Does table refer to furniture, tableland, food, or something else?\" Finally, knowledge state...\n\n2-Hop Neighbour:\nConceptual Linking: Ontology-based Open Hypermedia This paper describes the attempts of the COHSE project to define and deploy a Conceptual Open Hypermedia Service. Consisting of . an ontological reasoning service which is used to represent a sophisticated conceptual model of document terms and their relationships; . 
a Web-based open hypermedia link service that can offer a range of different linkproviding facilities in a scalable and non-intrusive fashion; and integrated to form a conceptual hypermedia system to enable documents to be linked via metadata describing their contents and hence to improve the consistency and breadth of linking of WWW documents at retrieval time (as readers browse the documents) and authoring time (as authors create the documents). Introduction: concepts and metadata Metadata is data that describes other data to enhance its usefulness. The library catalogue or database schema are canonical examples. For our purposes, metadata falls into three broad categories: . Catalogue information: e.g. the artist ...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'A Graph Query Language and Its Query Processing Many new database applications involve querying of graph data. In this paper, we present an objectoriented graph data model, and an OQL like graph query language, GOQL. The data model and the language are illustrated in the application domain of multimedia presentation graphs. We then discuss the query processing techniques for GOQL, more specifically, the translation of GOQL into an operatorbased language, called O-Algebra, extended with operators to deal with paths and sequences. We also discuss different approaches for efficient implementations of algebra operators for paths and sequences. 1 Introduction Many database applications such as hypertext applications, geographic information systems, world wide web searching, and heterogeneous information integration, etc., require modeling and querying of graph data ([Guti94, GBPV 94, MeMM 96, BDHS 96, AQMWW 96, AM 98, FFKLS 98, LSBBOO98]). In this paper, we present a data model, and an OQL-like query language GOQL, for querying graphs. ...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nElectronic Books in Digital Libraries 1 Electronic book is an application with a multimedia database of instructional resources, which include hyperlinked text, instructor's audio/video clips, slides, animation, still images, etc. as well as content-based information about these data, and metadata such as annotations, tags, and cross-referencing information. Electronic books in the Internet or on CDs today are not easy to learn from. We propose the use of a multimedia database of instructional resources in constructing and delivering multimedia lessons about topics in an electronic book. We introduce an electronic book data model containing (a) topic objects and (b) instructional resources, called instruction module objects, which are multimedia presentations possibly capturing real-life lectures of instructors. We use the notion of topic prerequisites for topics at different detail levels, to allow electronic book users to request/compose multimedia lessons about topics in the electronic book. We present automated construction of the \"best\" user-tailored lesson (as a multimedia presentation. 1.", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'A Foundation for Representing and Querying Moving Objects Spatio-temporal databases deal with geometries changing over time. The goal of our work is to provide a DBMS data model and query language capable of handling such time-dependent geometries, including those changing continuously which describe moving objects. Two fundamental abstractions are moving point and moving region, describing objects for which only the time-dependent position, or position and extent, are of interest, respectively. We propose to represent such time-dependent geometries as attribute data types with suitable operations, that is, to provide an abstract data type extension to a DBMS data model and query language. This paper presents a design of such a system of abstract data types. It turns out that besides the main types of interest, moving point and moving region, a relatively large number of auxiliary data types is needed. For example, one needs a line type to represent the projection of a moving point into the plane, or a “moving real ” to represent the time-dependent distance of two moving points. It then becomes crucial to achieve (i) orthogonality in the design of the type system, i.e., type constructors can be applied uniformly, (ii) genericity' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nSpatio-Temporal Data Types: An Approach to Modeling and Querying Moving Objects in Databases Spatio-temporal databases deal with geometries changing over time. In general, geometries cannot only change in discrete steps, but continuously, and we are talking about moving objects. If only the position in space of an object is relevant, then moving point is a basic abstraction; if also the extent is of interest, then the moving region abstraction captures moving as well as growing or shrinking regions. We propose a new line of research where moving points and moving regions are viewed as three-dimensional (2D space + time) or higher-dimensional entities whose structure and behavior is captured by modeling them as abstract data types. Such types can be integrated as base (attribute) data types into relational, object-oriented, or other DBMS data models; they can be implemented as data blades, cartridges, etc. for extensible DBMSs. We expect these spatio-temporal data types to play a similarly fundamental role for spatio-temporal databases as spatial data types have played for spatial databases. The paper explains the approach and discusses several fundamental issues and questions related to it that need to be clarified before delving into specific designs of spatio-temporal algebras.\n\n1-Hop Neighbour:\nThe DEDALE System for Complex Spatial Queries This paper presents dedale, a spatial database system intended to overcome some limitations of current systems by providing an abstract and non-specialized data model and query language for the representation and manipulation of spatial objects. dedale relies on a logical model based on linear constraints, which generalizes the constraint database model of [KKR90]. While in the classical constraint model, spatial data is always decomposed into its convex components, in dedale holes are allowed to fit the need of practical applications. The logical representation of spatial data although slightly more costly in memory, has the advantage of simplifying the algorithms. 
dedale relies on nested relations, in which all sorts of data (thematic, spatial, etc.) are stored in a uniform fashion. This new data model supports declarative query languages, which allow an intuitive and efficient manipulation of spatial objects. Their formal foundation constitutes a basis for practical query optimizati...\n\n2-Hop Neighbour:\nTowards A Semantic Framework For Service Description The rapid development of the Internet and of distributed computing has led to a proliferation of online service providers such as digital libraries, web information sources, electronically requestable traditional services, and even software-to-software services such as those provided by persistence and event managers. This has created a need for catalogs of services, based on description languages covering both traditional and electronic services. This paper presents a classification and a domainindependent characterisation of services, which provide a foundation for their description to potential consumers. For each of the service characteristics that we consider, we identify the range of its possible values in di#erent settings, and when applicable, we point to alternative approaches for representing these values. The idea is that by merging # This work was funded by an Australian Research Council SPIRT Grant entitled \"Selfdescribing transactions operating in a open, heterogeneous and distributed environment\" involving QUT and GBST Holdings Pty Ltd. 1 2 these individual approaches, and by mapping them into a unified notation, it is possible to design service description languages suitable for advertisement and matchmaking within specific application settings. 1.\n\n2-Hop Neighbour:\nApplication of Moving Objects and Spatiotemporal Reasoning In order to predict future variations of moving objects which general attributes, locations, and regions of spatial objects are changed over time, spatiotemporal data, domain knowledge, and spatiotemporal operations are required to process together with temporal and spatial attributes of data. However, conventional researches on temporal and spatial reasoning cannot be applied directly to the inference using moving objects, because they have been studied separately on temporal or spatial attribute of data. Therefore, in this paper, we not only define spatial objects in time domain but also propose a new type of moving objects and spatiotemporal reasoning model that has the capability of operation and inference for moving objects. The proposed model is made up of spatiotemporal database, GIS tool, and inference engine for application of spatiotemporal reasoning using moving objects and they execute operations and inferences for moving objects. Finally, to show the applicability of the proposed model, a proper domain is established for the battlefield analysis system to support commander's decision making in the army operational situation and it is experimented with this domain.\n\n2-Hop Neighbour:\nQuery-By-Trace: Visual Predicate Specification In Spatio-Temporal Databases In this paper we propose a visual interface for the specification of predicates to be used in queries on spatio-temporal databases. The approach is based on a visual specification method for temporally changing spatial situations. This extends existing concepts for visual spatial query languages, which are only capable of querying static spatial situations. 
We outline a preliminary user interface that supports the specification on an intuitive and easily manageable level, and we describe the design of the underlying visual language. The visual notation can be used directly as a visual query interface to spatio-temporal databases, or it can provide predicate specifications that can be integrated into textual query languages leading to heterogeneous languages. Key Words Spatio-Temporal Queries, Visual Predicate Specification, Visual Database Interface 1. INTRODUCTION Spatio-temporal databases deal with spatial objects that change over time (for example, they move or they grow): cars, ...\n\n2-Hop Neighbour:\nManipulating Interpolated Data is Easier than You Thought Data defined by interpolation is frequently found in new applications involving geographical entities, moving objects, or spatiotemporal data. These data lead to potentially infinite collections of items, (e.g., the elevation of any point in a map), whose definitions are based on the association of a collection of samples with an interpolation function. The naive manipulation of the data through direct access to both the samples and the interpolation functions leads to cumbersome or inaccurate queries. It is desirable to hide the samples and the interpolation functions from the logical level, while their manipulation is performed automatically. We propose to model such data using infinite relations (e.g., the map with elevation yields an infinite ternary relation) which can be manipulated through standard relational query languages (e.g., SQL), with no mention of the interpolated definition. The clear separation between logical and physical levels ensures the accu...\n\n2-Hop Neighbour:\nNearest Neighbor Queries in a Mobile Environment Nearest neighbor queries have received much interest in recent years due to their increased importance in advanced database applications. However, past work", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Developing Language Processing Components with GATE (a User Guide) Contents 1 Introduction 3 1.1 How to Use This Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.2 Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.3 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.4 Structure of the Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 1.5 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2 How To. . . 14 2.1 Download GATE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 2.2 Install and Run GATE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 2.3 [D,F] Configure GATE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 2.4 Build GATE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 2.5 [D,F] Create a New CREOLE Resource . . . . . . . . . . . . . . . . . . . . 18 2.6<F11' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nExperience with a Language Engineering Architecture: Three Years of GATE GATE, the General Architecture for Text Engineering, aims to provide a software infrastructure for researchers and developers working in the area of natural language processing. A version of GATE has now been widely available for three years. In this paper we review the objectives which motivated the creation of GATE and the functionality and design of the current system. We discuss the strengths and weaknesses of the current system, identify areas for improvement and present plans for implementing these improvements. Introduction We think that if you're researching human language processing you should probably not be writing code to: ffl store data on disk; ffl display data; ffl load processor modules and data stores into processes; ffl initiate and administer processes; ffl divide computation between client and server; ffl pass data between processes and machines. A Domain-Specific Software Architecture (DSSA) for language processing should do all this for you. You will have t...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Let's Talk! Socially Intelligent Agents for Language Conversation Training Abstract — This paper promotes socially intelligent animated agents for the pedagogical task of English conversation training for native speakers of Japanese. Since student-agent conversations are realized as role-playing interactions, strong requirements are imposed on the agents ’ affective and social abilities. As a novel feature, social role awareness is introduced to animated conversational agents, that are by now strong affective reasoners, but otherwise often lack the social competence observed with humans. In particular, humans may easily adjust their behavior depending on their respective role in a social setting, whereas their synthetic pendants tend to be driven mostly by emotions and personality. Our main contribution is the incorporation of a ‘social filter program ’ to mental models of animated agents. This program may qualify an agent’s expression of its emotional state by the social context, thereby enhancing the agent’s believability as a conversational partner. Our implemented system is web-based and demonstrates socially aware animated agents in a virtual coffee shop environment. An experiment with our conversation system shows that users consider socially aware agents as more natural than agents that violate conventional practices.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nIntegrating Models of Personality and Emotions into Lifelike Characters . A growing number of research projects in academia and industry have recently started to develop lifelike agents as a new metaphor for highly personalised human-machine communication. A strong argument in favour of using such characters in the interface is the fact that they make humancomputer interaction more enjoyable and allow for communication styles common in human-human dialogue. In this paper we discuss three ongoing projects that use personality and emotions to address different aspects of the affective agent-user interface: (a) Puppet uses affect to teach children how the different emotional states can change or modify a character's behaviour, and how physical and verbal actions in social interactions can induce emotions in others; (b) the Inhabited Market Place uses affect to tailor the roles of actors in a virtual market place; and (c) Presence uses affect to enhance the believability of a virtual character, and produce a more natural conversational manner. Int...\n\n1-Hop Neighbour:\nEmile: Marshalling Passions in Training and Education Emotional reasoning can be an important contribution to auto- mated tutoring and training systems. This paper describes mile, a model of emotional reasoning that builds upon existing approaches and significantly generalizes and extends their capabilities. The main contribution is to show how an explicit planning model allows a more general treatment of several stages of the reasoning process. The model supports educational applications by allowing agents to appraise the emotional significance of events as they relate to students' (or their own) plans and goals, model and predict the emotional state of others, and alter behavior accordingly. 1 INTRODUCTION Emotional computers may seem an oxymoron but recent years have seen a flurry of computation accounts of emotion in a variety of applications. 
This paper describes Emile, a model of emotional reasoning that extends and significantly generalizes prior work. Emile illustrates how an explicit planning model supports a more general treatme...\n\n1-Hop Neighbour:\nA Social-Psychological Model for Synthetic Actors In the Virtual Theater project, we provide synthetic actors that portray fictive characters by improvising their behavior in a multimedia environment. Actors are either autonomous or avatars directed by users. Their improvisation is based on the directions they receive and the context. Directions can take different forms: high-level scenarios, user commands, and personality changes in the character portrayed. In this paper, we look at this last form of direction. We propose a social-psychological model, in which we can define personality traits that depend on the values of moods and attitudes. We show how such a model can be exploited by synthetic actors to produce performances that are theatrically interesting, believable, and diverse. An application, the Cybercafé, is used to test those features. Content Areas: synthetic actors, improvisation, believability. 1. Introduction In the Virt...\n\n2-Hop Neighbour:\nSocial Role Awareness in Animated Agents This paper promotes social role awareness as a desirable capability of animated agents, that are by now strong affective reasoners, but otherwise often lack the social competence observed with humans. In particular, humans may easily adjust their behavior depending on their respective role in a socio-organizational setting, whereas their synthetic pendants tend to be driven mostly by attitudes, emotions, and personality. Our main contribution is the incorporation of `social filter programs' to mental models of animated agents. Those programs may qualify an agent's expression of its emotional state by the social context, thereby enhancing the agent's believability as a conversational partner or virtual teammate. Our implemented system is entirely web-based and demonstrates socially aware animated agents in an environment similar to Hayes-Roth's Cybercafé. Keywords believability, social agents, human-like qualities of synthetic agents, social dimension in communication, affective reaso...\n\n2-Hop Neighbour:\nPersonality in Synthetic Agents Rousseau, Daniel KSL, Stanford University Hayes-Roth, Barbara KSL, Stanford University Abstract Personality characterizes an individual through a set of psychological traits that influence his or her behavior. Combining visions from psychology, artificial intelligence and theater, we are studying the use of personality by intelligent, automated actors able to improvise their behavior in order to portray characters, and to interact with users in a multimedia environment. We show how psychological personality traits can be exploited to produce a performance that is theatrically interesting and believable without being completely predictable. We explain how personality can influence moods and interpersonal relationships. We describe the model of a synthetic actor that takes into account those concepts to choose its behavior in a given context. 
In order to test our approach, we observe the performance of autonomous actors portraying waiters with di...\n\n2-Hop Neighbour:\nAgents That Talk Back (Sometimes): Filter Programs for Affective Communication This paper introduces a model of interaction between users and animated agents as well as inter-agent interaction that supports basic features of affective conversation. As essential requirements for animated agents' capability to engage in and exhibit affective communication we motivate reasoning about emotion and emotion expression, personality, and social role awareness. The main contribution of our paper is the discussion of so-called `filter programs' that may qualify an agent's expression of its emotional state by its personality and the social context. All of the mental concepts that determine emotion expression, such as emotional state, personality, standards, and attitudes, have associated intensities for fine-tuning the agent's reactions in user-adapted environments.\n\n2-Hop Neighbour:\nExploiting Models of Personality and Emotions to Control the Behavior of Animated Interactive Agents The German Research Centre for Artificial Intelligence (DFKI) recently started three new projects1 to advance our understanding of the fundamental technology required to drive the social behaviour of interactive animated agents. This initiative has been timed to catch the current wave of research and commercial interest in the field of lifelike characters [1] and\n\n2-Hop Neighbour:\nAnimated Agents for Procedural Training in Virtual Reality: Perception, Cognition, and Motor Control This paper describes Steve, an animated agent that helps students learn to perform physical, procedural tasks. The student and Steve cohabit a three-dimensional, simulated mock-up of the student's work environment. Steve can demonstrate how to perform tasks and can also monitor students while they practice tasks, providing assistance when needed. This paper describes Steve's architecture in detail, including perception, cognition, and motor control. The perception module monitors the state of the virtual world, maintains a coherent representation of it, and provides this information to the cognition and motor control modules. The cognition module interprets its perceptual input, chooses appropriate goals, constructs and executes plans to achieve those goals, and sends out motor commands. The motor control module implements these motor commands, controlling Steve's voice, locomotion, gaze, and gestures, and allowing Steve to manipulate objects in the virtual world. 1 Introduction To ma...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Efficient and Effective Metasearch for Text Databases Incorporating Linkages among Documents Linkages among documents have a significant impact on the importance of documents, as it can be argued that important documents are pointed to by many documents or by other important documents. Metasearch engines can be used to facilitate ordinary users for retrieving information from multiple local sources (text databases). In a large-scale metasearch engine, the contents of each local database is represented by a representative. Each user query is evaluated against the set of representatives of all databases in order to determine the appropriate databases to search. In previous works, the linkage information between documents has not been utilized in determining the appropriate databases to search. In this paper, such information is employed to determine the degree of relevance of a document with respect to a given query. This information is then stored in each database representative to facilitate the selection of databases for each given query. We establish a necessary an...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nA Statistical Method for Estimating the Usefulness of Text Databases Searching desired data in the Internet is one of the most common ways the Internet is used. No single search engine is capable of searching all data in the Internet. The approach that provides an interface for invoking multiple search engines for each user query has the potential to satisfy more users. When the number of search engines under the interface is large, invoking all search engines for each query is often not cost effective because it creates unnecessary network traffic by sending the query to a large number of useless search engines and searching these useless search engines wastes local resources. The problem can be overcome if the usefulness of every search engine with respect to each query can be predicted. In this paper, we present a statistical method to estimate the usefulness of a search engine for any given query. For a given query, the usefulness of a search engine in this paper is defined to be a combination of the number of documents in the search engine that are sufficiently similar to the query and the average similarity of these documents. Experimental results indicate that our estimation method is much more accurate than existing methods.\n\n1-Hop Neighbour:\nEfficient and Effective Metasearch for a Large Number of Text Databases Metasearch engines can be used to facilitate ordinary users for retrieving information from multiple local sources (text databases). In a metasearch engine, the contents of each local database is represented by a representative. Each user query is evaluated against the set of representatives of all databases in order to determine the appropriate databases to search. When the number of databases is very large, say in the order of tens of thousands or more, then a traditional metasearch engine may become inefficient as each query needs to be evaluated against too many database representatives. Furthermore, the storage requirement on the site containing the metasearch engine can be very large. In this paper, we propose to use a hierarchy of database representatives to improve the efficiency. We provide an algorithm to search the hierarchy. 
We show that the retrieval effectiveness of our algorithm is the same as that of evaluating the user query against all database representatives. We als...\n\n1-Hop Neighbour:\nA Meta-search Method Reinforced by Cluster Descriptors A meta-search engine acts as an agent for the participant search engines. It receives queries from users and redirects them to one or more of the participant search engines for processing. A meta-search engine incorporating many participant search engines is better than a single global search engine in terms of the number of pages indexed and the freshness of the indexes. The meta-search engine stores descriptive data (i.e., descriptors) about the index maintained by each participant search engine so that it can estimate the relevance of each search engine when a query is received. The ability for the meta-search engine to select the most relevant search engines determines the quality of the final result. To facilitate the selection process, the document space covered by each search engine must be described not only concisely but also precisely. Existing methods tend to focus on the conciseness of the descriptors by keeping a descriptor for a search engine 's entire index. This paper proposes to cluster a search engine's document space into clusters and keep a descriptor for each cluster. We show that cluster descriptors can provide a finer and more accurate representation of the document space, and hence enable the meta-search engine to improve the selection of relevant search engines. Two cluster-based search engine selection scenarios (i.e., independent and high-correlation) are discussed in this paper. Experiments verify that the cluster-based search engine selection can effectively identify the most relevant search engines and improve the quality of the search results consequently. 1\n\n2-Hop Neighbour:\nThe Impact of Database Selection on Distributed Searching Abstract The proliferation of online information resources increases the importance of effective and efficient distributed searching. Distributed searching is cast in three parts – database selection, query processing, and results merging. In this paper we examine the effect of database selection on retrieval performance. We look at retrieval performance in three different distributed retrieval testbeds and distill some general results. First we find that good database selection can result in better retrieval effectiveness than can be achieved in a centralized database. Second we find that good performance can be achieved when only a few sites are selected and that the performance generally increases as more sites are selected. Finally we find that when database selection is employed, it is not necessary to maintain collection wide information (CWI), e.g. global idf. Local information can be used to achieve superior performance. This means that distributed systems can be engineered with more autonomy and less cooperation. This work suggests that improvements in database selection can lead to broader improvements in retrieval performance, even in centralized (i.e. single database) systems. Given a centralized database and a good selection mechanism, retrieval performance can be improved by decomposing that database conceptually and employing a selection step. 1\n\n2-Hop Neighbour:\nConcept Hierarchy Based Text Database Categorization Document categorization as a technique to improve the retrieval of useful documents has been extensively investigated. 
One important issue in a large-scale metasearch engine is to select text databases that are likely to contain useful documents for a given query. We believe that database categorization can be a potentially effective technique for good database selection, especially in the Internet environment where short queries are usually submitted. In this paper, we propose and evaluate several database categorization algorithms. This study indicates that while some document categorization algorithms could be adopted for database categorization, algorithms that take into consideration the special characteristics of databases may be more effective. Preliminary experimental results are provided to compare the proposed database categorization algorithms. A prototype database categorization system based on one of the proposed algorithms has been developed.\n\n2-Hop Neighbour:\nDetection of Heterogeneities in a Multiple Text Database Environment As the number of text retrieval systems (search engines) grows rapidly on the World Wide Web, there is an increasing need to build search brokers (metasearch engines) on top of them. Often, the task of building an effective and efficient metasearch engine is hindered by the heterogeneities among the underlying local search engines. In this paper, we first analyze the impact of various heterogeneities on building a metasearch engine. We then present some techniques that can be used to detect the most prominent heterogeneities among multiple search engines. Applications of utilizing the detected heterogeneities in building better metasearch engines will be provided.\n\n2-Hop Neighbour:\nAutomatic Discovery of Language Models for Text Databases The proliferation of text databases within large organizations and on the Internet makes it difficult for a person to know which databases to search. Given language models that describe the contents of each database, a database selection algorithm such as GlOSS can provide assistance by automatically selecting appropriate databases for an information need. Current practice is that each database provides its language model upon request, but this cooperative approach has important limitations. This paper demonstrates that cooperation is not required. Instead, the database selection service can construct its own language models by sampling database contents via the normal process of running queries and retrieving documents. Although random sampling is not possible, it can be approximated with carefully selected queries. This sampling approach avoids the limitations that characterize the cooperative approach, and also enables additional capabilities. Experimental results demonstrate th...\n\n2-Hop Neighbour:\nSTARTS: Stanford Proposal for Internet Meta-Searching Document sources are available everywhere, both within the internal networks of organizations and on the Internet. Even individual organizations use search engines from different vendors to index their internal document collections. These search engines are typically incompatible in that they support different query models and interfaces, they do not return enough information with the query results for adequate merging of the results, and finally, in that they do not export metadata about the collections that they index (e.g., to assist in resource discovery). This paper describes STARTS, an emerging protocol for Internet retrieval and search that facilitates the task of querying multiple document sources. STARTS has been developed in a unique way. 
It is not a standard, but a group effort coordinated by Stanford's Digital Library project, and involving over 11 companies and organizations. The objective of this paper is not only to give an overview of the STARTS protocol proposal, but...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Improved Algorithms for Topic Distillation in a Hyperlinked Environment Abstract This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typ-ical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity anal-ysis based algorithm with content analysis. We identify three problems with the existing approach and devise al-gorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45 % over pure connectivity anal-ysis. 1' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nContext in Web Search Web search engines generally treat search requests in isolation. The results for a given query are identical, independent of the user, or the context in which the user made the request. Nextgeneration search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts. Greater use of context in web search may help increase competition and diversity on the web.\n\n1-Hop Neighbour:\nTarget Seeking Crawlers and their Topical Performance Topic driven crawlers can complement search engines by targeting relevant portions of the Web. A topic driven crawler must exploit the information available about the topic and its underlying context. In this paper we extend our previous research on the design and evaluation of topic driven crawlers by comparing seven different crawlers on a harder problem, namely, seeking highly relevant target pages. We find that exploration is an important aspect of a crawling strategy. We also study how the performance of crawler strategies depends on a number of topical characteristics based on notions of topic generality, cohesiveness, and authoritativeness. Our results reveal that topic generality is an obstacle for most crawlers, that three crawlers tend to perform better when the target pages are clustered together, and that two of these also display better performance when topic targets are highly authoritative.\n\n1-Hop Neighbour:\nAutomatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. 
We also provide examples of ARC resource lists for the reader to examine.\n\n2-Hop Neighbour:\nWeb Search -- Your Way We describe a metasearch engine architecture, in use at NEC Research Institute, that allows users to provide preferences in the form of an information need category. This extra information is used to direct the search process, providing more valuable results than by considering only the query. Using our architecture, identical keyword queries may be sent to different search engines, and results may be scored differently for different users.\n\n2-Hop Neighbour:\nArchitecture of a Metasearch Engine that Supports User Information Needs When a query is submitted to a metasearch engine, decisions are made with respect to the underlying search engines to be used, what modifications will be made to the query, and how to score the results. These decisions are typically made by considering only the user's keyword query, neglecting the larger information need. Users with specific needs, such as \"research papers\" or \"homepages,\" are not able to express these needs in a way that affects the decisions made by the metasearch engine. In this paper, we describe a metasearch engine architecture that considers the user's information need for each decision. Users with different needs, but the same keyword query, may search different sub-search engines, have different modifications made to their query, and have results ordered differently. Our architecture combines several powerful approaches together in a single general purpose metasearch engine. 1 Introduction Current metasearch engines make several decisions on behalf of the use...\n\n2-Hop Neighbour:\nExploiting Geographical Location Information of Web Pages Many information resources on the web are relevant primarily to limited geographical communities. For instance, web sites containing information on restaurants, theaters, and apartment rentals are relevant primarily to web users in geographical proximity to these locations. In contrast, other information resources are relevant to a broader geographical community. For instance, an on-line newspaper may be relevant to users across the United States. Unfortunately, the geographical scope of web resources is largely ignored by web search engines. We make the case for identifying and exploiting the geographical location information of web sites so that web search engines can rank resources in a geographically sensitive fashion, in addition to using more traditional information-retrieval strategies. In this paper, we first consider how to compute the geographical location of web pages. Subsequently, we consider how to exploit such information in one specific \"proof-of-concept\" appl...\n\n2-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. 
Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nAccelerated Focused Crawling through Online Relevance Feedback The organization of HTML into a tag tree structure, which is rendered by browsers as roughly rectangular regions with embedded text and HREF links, greatly helps surfers locate and click on links that best satisfy their information need. Can an automatic program emulate this human behavior and thereby learn to predict the relevance of an unseen HREF target page w.r.t. an information need, based on information limited to the HREF source page? Such a capability would be of great interest in focused crawling and resource discovery, because it can fine-tune the priority of unvisited URLs in the crawl frontier, and reduce the number of irrelevant pages which are fetched and discarded.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Quilt: An XML Query Language for Heterogeneous Data Sources The World Wide Web promises to transform human society by making virtually all types of information instantly available everywhere. Two prerequisites for this promise to be realized are a universal markup language and a universal query language. The power and flexibility of XML make it the leading candidate for a universal markup language. XML provides a way to label information from diverse data sources including structured and semi-structured documents, relational databases, and object repositories. Several XML-based query languages have been proposed, each oriented toward a specific category of information. Quilt is a new proposal that attempts to unify concepts from several of these query languages, resulting in a new language that exploits the full versatility of XML. The name Quilt suggests both the way in which features from several languages were assembled to make a new query language, and the way in which Quilt queries can combine information from diverse data sources into a query result with a new structure of its own.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nOn Supporting Containment Queries in Relational Database Management Systems Virtually all proposals for querying XML include a class of query we term “containment queries”. It is also clear that in the foreseeable future, a substantial amount of XML data will be stored in relational database systems. This raises the question of how to support these containment queries. The inverted list technology that underlies much of Information Retrieval is well-suited to these queries, but should we implement this technology (a) in a separate loosely-coupled IR engine, or (b) using the native tables and query execution machinery of the RDBMS? With option (b), more than twenty years of work on RDBMS query optimization, query execution, scalability, and concurrency control and recovery immediately extend to the queries and structures that implement these new operations. But all this will be irrelevant if the performance of option (b) lags that of (a) by too much. In this paper, we explore some performance implications of both options using native implementations in two commercial relational database systems and in a special purpose inverted list engine. Our performance study shows that while RDBMSs are generally poorly suited for such queries, under certain conditions they can outperform an inverted list engine. Our analysis further identifies two significant causes that differentiate the performance of the IR and RDBMS implementations: the join algorithms employed and the hardware cache utilization. Our results suggest that contrary to most expectations, with some modifications, a native implementation in an RDBMS can support this class of query much more efficiently.\n\n1-Hop Neighbour:\nCommunity Webs (C-Webs): Technological Assessment and System Architecture this paper, our presentation mainly relies on examples taken from one of the potential C-Web applications, namely C-Web Portals for cultural communities.\n\n1-Hop Neighbour:\nA Web Odyssey: from Codd to XML INTRODUCTION The Web presents the database area with vast opportunities and commensurate challenges. Databases and the Web are organically connected at many levels. 
Web sites are increasingly powered by databases. Collections of linked Web pages distributed across the Internet are themselves tempting targets for a database. The emergence of XML as the lingua franca of the Web brings some much needed order and will greatly facilitate the use of database techniques to manage Web information. This paper will discuss some of the developments related to the Web from the viewpoint of database theory. As we shall see, the Web scenario requires revisiting some of the basic assumptions of the area. To be sure, database theory remains as valid as ever in the classical setting, and the database industry will continue to represent a multi-billion dollar target of applicability for the foreseeable future. But the Web represents an opportunity of an entirely different scale. We are th\n\n2-Hop Neighbour:\nTypechecking for XML Transformers We study the typechecking problem for XML transformers: given an XML transformation program and a DTD for the input XML documents, check whether every result of the program conforms to a specified output DTD. We model XML transformers using a novel device called a k-pebble transducer, that can express most queries without data-value joins in XML-QL, XSLT, and other XML query languages. Types are modeled by regular tree languages, a robust extension of DTDs. The main result of the paper is that typechecking for k-pebble transducers is decidable. Consequently, typechecking can be performed for a broad range of XML transformation languages, including XML-QL and a fragment of XSLT. 1. INTRODUCTION Traditionally, database query languages have focused on data retrieval, with complex data transformations left to applications. The new XML data exchange standard for the Web, and emerging applications requiring data wrapping and integration, have shifted the focus towards data transformations....\n\n2-Hop Neighbour:\nLearning to Classify Text from Labeled and Unlabeled Documents In many important text classification problems, acquiring class labels for training documents is costly, while gathering large quantities of unlabeled data is cheap. This paper shows that the accuracy of text classifiers trained with a small number of labeled documents can be improved by augmenting this small training set with a large pool of unlabeled documents. We present a theoretical argument showing that, under common assumptions, unlabeled data contain information about the target function. We then introduce an algorithm for learning from labeled and unlabeled text based on the combination of Expectation-Maximization with a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents; it then trains a new classifier using the labels for all the documents, and iterates to convergence. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 33%.\n\n2-Hop Neighbour:\nIntegrating Keyword Search into XML Query Processing Due to the popularity of the XML data format, several query languages for XML have been proposed, specially devised to handle data whose structure is unknown, loose, or absent. While these languages are rich enough to allow for querying the content and structure of an XML document, a varying or unknown structure can make formulating queries a very difficult task. 
We propose an extension to XML query languages that enables keyword search at the granularity of XML elements, that helps novice users formulate queries, and also yields new optimization opportunities for the query processor. We present an implementation of this extension on top of a commercial RDBMS; we then discuss implementation choices and performance results. Keywords XML query processing, full-text index 1 Introduction There is no doubt that XML is rapidly becoming one of the most important data formats. It is already used for scientific data (e.g., DNA sequences), in linguistics (e.g., the Treebank database at the U...\n\n2-Hop Neighbour:\nInteraction between Path and Type Constraints XML [7], which is emerging as an important standard for data exchange on the World-Wide Web, highlights the importance of semistructured data. Although the XML standard itself does not require any schema or type system, a number of proposals [6, 17, 19] have been developed that roughly correspond to data definition languages. These allow one to constrain the structure of XML data by imposing a schema on it. These and other proposals also advocate the need for integrity constraints, another form of constraints that should, for example, be capable of expressing inclusion constraints and inverse relationships. The latter have recently been studied as path constraints in the context of semistructured data [4, 9]. It is likely that future XML proposals will involve both forms of constraints, and it is therefore appropriate to understand the interaction between them. This paper investigates that interaction. In particular it studies constraint implication problems, which are important both i...\n\n2-Hop Neighbour:\nPath Constraints on Semistructured and Structured Data We present a class of path constraints of interest in connection with both structured and semi-structured databases, and investigate their associated implication problems. These path constraints are capable of expressing natural integrity constraints that are not only a fundamental part of the semantics of the data, but are also important in query optimization. We show that in semistructured databases, despite the simple syntax of the constraints, their associated implication problem is r.e. complete and finite implication problem is co-r.e. complete. However, we establish the decidability of the implication problems for several fragments of the path constraint language, and demonstrate that these fragments suffice to express important semantic information such as inverse relationships and local database constraints commonly found in object-oriented databases. We also show that in the presence of types, the analysis of path constraint implication becomes more delicate. We demonstrate so...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'FEATURES: Real-time Adaptive Feature Learning and Document Learning for Web Search In this paper we report our research on building Features - an intelligent web search engine that is able to perform real-time adaptive feature (i.e., keyword) and document learning. Not only does Features learn from the user's document relevance feedback, but also automatically extracts and suggests indexing keywords relevant to a search query and learns from the user's keyword relevance feedback so that it is able to speed up its search process and to enhance its search performance. We design two efficient and mutual-benefiting learning algorithms that work concurrently, one for feature learning and the other for document learning. Features employs these algorithms together with an internal index database and a real-time meta-searcher so to perform adaptive real-time learning to find desired documents with as little relevance feedback from the user as possible. The architecture and performance of Features are also discussed. 1 Introduction As the world wide web rapidly evo...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nAn Efficient Boosting Algorithm for Combining Preferences We study the problem of learning to accurately rank a set of objects by combining a given collection of ranking or preference functions. This problem of combining preferences arises in several applications, such as that of combining the results of different search engines, or the \"collaborativefiltering \" problem of ranking movies for a user based on the movie rankings provided by other users. In this work, we begin by presenting a formal framework for this general problem. We then describe and analyze an efficient algorithm called RankBoost for combining preferences based on the boosting approach to machine learning. We give theoretical results describing the algorithm's behavior both on the training data, and on new test data not seen during training. We also describe an efficient implementation of the algorithm for a particular restricted but common case. We next discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different web search strategies, each of which is a query expansion for a given domain. The second experiment is a collaborative-filtering task for making movie recommendations.\n\n1-Hop Neighbour:\nAn Efficient Boosting Algorithm for Combining Preferences We study the problem of learning to accurately rank a set of objects by combining a given collection of ranking or preference functions. This problem of combining preferences arises in several applications, such as that of combining the results of different search engines, or the \"collaborativefiltering \" problem of ranking movies for a user based on the movie rankings provided by other users. In this work, we begin by presenting a formal framework for this general problem. We then describe and analyze an efficient algorithm called RankBoost for combining preferences based on the boosting approach to machine learning. We give theoretical results describing the algorithm's behavior both on the training data, and on new test data not seen during training. We also describe an efficient implementation of the algorithm for a particular restricted but common case. 
We next discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different web search strategies, each of which is a query expansion for a given domain. The second experiment is a collaborative-filtering task for making movie recommendations.\n\n1-Hop Neighbour:\nAutomatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. We also provide examples of ARC resource lists for the reader to examine.\n\n2-Hop Neighbour:\nExploration versus Exploitation in Topic Driven Crawlers Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers. The context available to a topic driven crawler allows for informed decisions about how to prioritize the links to be explored, given time and bandwidth constraints. We have developed a framework and a number of methods to evaluate the performance of topic driven crawler algorithms in a fair way, under limited memory resources. Quality metrics are derived from lexical features, link analysis, and a hybrid combination of the two. In this paper we focus on the issue of how greedy a crawler should be. Given noisy quality estimates of links in a frontier, we investigate what is an appropriate balance between a crawler's need to exploit this information to focus on the most promising links, and the need to explore links that appear suboptimal but might lead to more relevant pages. We show that exploration is essential to locate the most relevant pages under a number of quality measures, in spite of a penalty in the early stage of the crawl.\n\n2-Hop Neighbour:\nInformation Retrieval on the Web: Selected Topics In this paper we review studies on the growth of the Internet and technologies which are useful for information search and retrieval on the Web. In the first section, we present data on the Internet from several different sources, e.g., current as well as projected number of users, hosts and Web sites. Although the numerical figures vary, the overall trends cited by the sources are consistent and point to exponential growth during the coming decade. And Internet users are increasingly using search engines and search services to find specific information of interest. However, users are not satisfied with the performance of the current generation of search engines; the slow speed of retrieval, communication delays, and poor quality of retrieved results (e.g., noise and broken links) are commonly cited problems. The main body of our paper focuses on linear algebraic models and techniques for solving these problems. 
keywords: clustering, indexing, information retrieval, Internet, late...\n\n2-Hop Neighbour:\nPicASHOW: Pictorial Authority Search by Hyperlinks on the Web We describe PicASHOW, a fully automated WWW image retrieval system that is based on several link-structure analyzing algorithms. Our basic premise is that a page p displays (or links to) an image when the author of p considers the image to be of value to the viewers of the page. We thus extend some well known link-based WWW #### ######### schemes to the context of image retrieval. PicASHOW's analysis of the link structure enables it to retrieve relevant images even when those are stored in files with meaningless names. The same analysis also allows it to identify image containers and image hubs. We define these as Web pages that are rich in relevant images, or from which many images are readily accessible. PicASHOW requires no image analysis whatsoever and no creation of taxonomies for pre-classification of the Web's images. It can be implemented by standard WWW search engines with reasonable overhead, in terms of both computations and storage, and with no change to user query formats. It can thus be used to easily add image retrieving capabilities to standard search engines. Our results demonstrate that PicASHOW, while relying almost exclusively on link analysis, compares well with dedicated WWW image retrieval systems. We conclude that link analysis, a bona-fide effective technique for Web page search, can improve the performance of Web image retrieval, as well as extend its definition to include the retrieval of image hubs and containers. Keywords Image Retrieval; Link Structure Analysis; Hubs and Authorities; Image Hubs. 1.\n\n2-Hop Neighbour:\nLearning to Order Things There are many applications in which it is desirable to order rather than classify instances. Here we consider the problem of learning how to order, given feedback in the form of preference judgments, i.e., statements to the effect that one instance should be ranked ahead of another. We outline a two-stage approach in which one first learns by conventional means a preference function, of the form PREF(u, v), which indicates whether it is advisable to rank u before v. New instances are then ordered so as to maximize agreements with the learned preference function. We show that the problem of finding the ordering that agrees best with a preference function is NP-complete, even under very restrictive assumptions. Nevertheless, we describe a simple greedy algorithm that is guaranteed to find a good approximation. We then discuss an on-line learning algorithm, based on the “Hedge” algorithm, for finding a good linear combination of ranking “experts.” We use the ordering algorithm combined with the on-line learning algorithm to find a combination of “search experts,” each of which is a domain-specific query expansion strategy for a WWW search engine, and present experimental results that demonstrate the merits of our approach. 1\n\n2-Hop Neighbour:\nFinding Related Pages in the World Wide Web When using traditional search engines, users have to formulate queries to describe their information need. This paper discusses a different approach to web searching where the input to the search process is not a set of query terms, but instead is the URL of a page, and the output is a set of related web pages. A related web page is one that addresses the same topic as the original page. 
For example, www.washingtonpost.com is a page related to www.nytimes.com, since both are online newspapers. We describe two algorithms to identify related web pages. These algorithms use only the connectivity information in the web (i.e., the links between pages) and not the content of pages or usage information. We have implemented both algorithms and measured their runtime performance. To evaluate the effectiveness of our algorithms, we performed a user study comparing our algorithms with Netscape's \"What's Related\" service [12]. Our study showed that the precision at 10 for our two algorithms are 73% better and 51% better than that of Netscape, despite the fact that Netscape uses both content and usage pattern information in addition to connectivity information.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Searching the Web: General and Scientific Information Access he World Wide Web is revolutionizing the way people access information, and has opened up new possibilities in areas such as digital libraries, general and scientific information dissemination and retrieval, education, commerce, entertainment, government, and health care. The amount of publicly available information on the Web is increasing rapidly [1]. The Web is a gigantic digital library, a searchable 15 billion word encyclopedia [2]. It has stimulated research and development in information retrieval and dissemination, and fostered search engines such as AltaVista. These new developments are not limited to the Web, and can enhance access to virtually all forms of digital libraries. The revolution the Web has brought to information access is not so much due to the availability of information (huge amounts of information has long been available in libraries and elsewhere), but rather the increased efficiency of accessing information, which can make previously impractical tasks practical. There are many avenues for improvement in the efficiency of accessing information on the Web, for example, in the areas of locating and organizing information. This article discusses general and scientific information access on the Web, and many of our comments are applicable to digital libraries in general. The effectiveness of Web search engines is discussed, including results that show that the major search engines cover only a fraction of the “publicly indexable Web ” (the part of the Web which is considered for indexing by the major engines, which excludes pages hidden behind search forms, pages with authorization requirements, etc.). Current research into improved searching of the Web is discussed, including new techniques for ranking the relevance of results, and new techniques in metasearch that can improve the efficiency and effectiveness of Web search. The amount of scientific information and the number of electronic journals on the Internet continues to increase. Researchers are increasingly making their work available online. This article also discusses the creation of digital libraries of the scientific literature, incorporating autonomous citation indexing. The autonomous creation of citation indices' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nContext in Web Search Web search engines generally treat search requests in isolation. The results for a given query are identical, independent of the user, or the context in which the user made the request. Nextgeneration search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts. Greater use of context in web search may help increase competition and diversity on the web.\n\n1-Hop Neighbour:\nAn Overview of World Wide Web Search Technologies With over 800 million pages covering most areas of human endeavor, the World Wide Web is fertile ground for information retrieval. Numerous search technologies have been applied to Web searches, and the dominant search method has yet to be identified. 
This chapter provides an overview of existing Web search technologies and classifies them into six categories: (i) hyperlink exploration, (ii) information retrieval, (iii) metasearches, (iv) SQL approaches, (v) content-based multimedia searches, and (vi) others. A comparative study of some major commercial and experimental search services is presented, and some future research directions for Web searches are suggested. Keywords: Survey, World Wide Web, Searches, Search Engines, and Information Retrieval. 1.\n\n1-Hop Neighbour:\nContext and Page Analysis for Improved Web Search NEC Research Institute has developed a metasearch engine that improves the efficiency of Web searches by downloading and analyzing each document and then displaying results that show the query terms in context. Several popular and useful search engines such as AltaVista, Excite, HotBot, Infoseek, Lycos, and Northern Light attempt to maintain full-text indexes of the World Wide Web. However, relying on a single standard search engine has limitations. The standard search engines have limited coverage, 1,2 outdated databases, and are sometimes unavailable due to problems with the network or the engine itself. The precision of standard engine results can also vary because they generally focus on handling queries quickly and use relatively simple ranking schemes. 3 Rankings can be further muddled by keyword spamming to increase a page's rank order. Often, the relevance of a particular page is obvious only after loading it and finding the query terms. Metasearch engines, such as MetaCrawler and SavvySearch, attempt to contend with the problem of limited coverage by submitting queries to several standard search engines at once. 4,5 The primary advantages of metasearch engines are that they combine the results of several search engines and present a consistent user interface. 5 However, most metasearch engines rely on the documents and summaries returned by standard search engines and so inherit their limited precision and vulnerability to keyword spamming. We developed the NEC Research Institute (NECI) metasearch engine [now called Inquirus] to improve the efficiency and precision of Web search by downloading and analyzing each document and then displaying results that show the query terms in\n\n2-Hop Neighbour:\nInformation Retrieval on the Web In this paper we review studies on the growth of the Internet and technologies which are useful for information search and retrieval on the Web. We present data on the Internet from several dierent sources, e.g., current as well as projected number of users, hosts and Web sites. Although numerical gures vary, overall trends cited by the sources are consistent and point to exponential growth in the past and in the coming decade. As such, it is not surprising that about 85% of Internet users surveyed claim to be using search engines and search services to nd speci c information of interest. The same surveys show, however, that users are not satis ed with the performance of the current generation of search engines; the slow speed of retrieval, communication delays, and poor quality of retrieved results (e.g., noise and broken links) are commonly cited problems. We discuss the development of new techniques which are targeted to resolve some of the problems associated with Web-...\n\n2-Hop Neighbour:\nComputing Geographical Scopes of Web Resources Many information resources on the web are relevant primarily to limited geographical communities. 
For instance, web sites containing information on restaurants, theaters, and apartment rentals are relevant primarily to web users in geographical proximity to these locations. In contrast, other information resources are relevant to a broader geographical community. For instance, an on-line newspaper may be relevant to users across the United States. Unfortunately, most current web search engines largely ignore the geographical scope of web resources. In this paper, we introduce techniques for automatically computing the geographical scope of web resources, based on the textual content of the resources, as well as on the geographical distribution of hyperlinks to them. We report an extensive experimental evaluation of our strategies using real web data. Finally, we describe a geographically-aware search engine that we have built using our techniques for determining the geographical scope of web resources. 1\n\n2-Hop Neighbour:\nLearning Information Extraction Rules for Semi-structured and Free Text . A wealth of on-line text information can be made available to automatic processing by information extraction (IE) systems. Each IE application needs a separate set of rules tuned to the domain and writing style. WHISK helps to overcome this knowledge-engineering bottleneck by learning text extraction rules automatically. WHISK is designed to handle text styles ranging from highly structured to free text, including text that is neither rigidly formatted nor composed of grammatical sentences. Such semistructured text has largely been beyond the scope of previous systems. When used in conjunction with a syntactic analyzer and semantic tagging, WHISK can also handle extraction from free text such as news stories. Keywords: natural language processing, information extraction, rule learning 1. Information extraction As more and more text becomes available on-line, there is a growing need for systems that extract information automatically from text data. An information extraction (IE) sys...\n\n2-Hop Neighbour:\nConfSeek - A Multi-user, Multi-Threaded Specialized Search Engine for Conferences The explosive growth of the World Wide Web - the latest estimates for its size are around 1,000,000,000 web-pages - has made critical the need to nd information more accurately than what the current generic search engines can deliver. This project implements a prototype specialized search engine that allows user to submit queries for conferences in a specic eld of interest, and returns the detailed information about those conferences (deadlines, etc.). It uses multiple existing search engines to provide better coverage of the information available on the Internet. It can interact with multiple users concurrently and makes use of multi-threading to achieve faster information processing. The goal is to make this tool available to the scientic community to provide researchers with improved access to conference information. We discuss possible extensions (e.g. ranking of conferences according to their `quality', trip planning, etc.). Contents 1 Introduction 2 2 Goals of this Project...\n\n2-Hop Neighbour:\nTopical Locality in the Web: Experiments and Observations Most web pages are linked to others with related content. This idea, combined with another that says that text in, and possibly around, HTML anchors describe the pages to which they point, is the foundation for a usable World-Wide Web. 
In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content to be high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and that anchor text may be a useful discriminator among unseen child pages. These results present the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents. 1 Introduction Most web pages are linked to others with related content...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Optimization Techniques For Queries with Expensive Methods Object-Relational database management systems allow knowledgeable users to de ne new data types, as well as new methods (operators) for the types. This exibility produces an attendant complexity, which must be handled in new ways for an Object-Relational database management system to be e cient. In this paper we study techniques for optimizing queries that contain time-consuming methods. The focus of traditional query optimizers has been on the choice of join methods and orders; selections have been handled by \\pushdown \" rules. These rules apply selections in an arbitrary order before as many joins as possible, using the assumption that selection takes no time. However, users of Object-Relational systems can embed complex methods in selections. Thus selections may take signi cant amounts of time, and the query optimization model must be enhanced. In this paper, we carefully de ne a query cost framework that incorporates both selectivity and cost estimates for selections. We develop an algorithm called Predicate Migration, and prove that it produces optimal plans for queries with expensive methods. We then describe our implementation of Predicate Migration in the commercial Object-Relational database management system Illustra, and discuss practical issues that a ect our earlier assumptions. We compare Predicate Migration to a variety of simpler optimization techniques, and demonstrate that Predicate Migration is the best general solution to date. The alternative techniques we presentmaybe useful for constrained workloads.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nEddies: Continuously Adaptive Query Processing In large federated and shared-nothing databases, resources can exhibit widely fluctuating characteristics. Assumptions made at the time a query is submitted will rarely hold throughout the duration of query processing. As a result, traditional static query optimization and execution techniques are ineffective in these environments. In this paper we introduce a query processing mechanism called an eddy, which continuously reorders operators in a query plan as it runs. We characterize the moments of symmetry during which pipelined joins can be easily reordered, and the synchronization barriers that require inputs from different sources to be coordinated. By combining eddies with appropriate join algorithms, we merge the optimization and execution phases of query processing, allowing each tuple to have a flexible ordering of the query operators. This flexibility is controlled by a combination of fluid dynamics and a simple learning algorithm. Our initial implementation demonstrates prom...\n\n1-Hop Neighbour:\nEfficient Data and Program Integration Using Binding Patterns : In this work, we investigate data and program integration in a fully distributed peer-to-peer mediation architecture. The challenge in making such a system succeed at a large scale is twofold. First, sharing a resource should be easy; therefore, we need a simple concept for modeling resources. Second, we need an ecient architecture for distributed query execution, capable of handling well costly computations and large data transfers. 
To model heterogeneous resources, we propose using the unied abstraction of table with binding patterns, simple yet powerful enough to capture data and programs. To exploit a resource with restricted binding patterns, we propose an ecient BindJoin operator, following the classical iterator model, in which we build optimization techniques for minimizing large data transfers and costly computations, and maximizing parallelism. Furthermore, our BindJoin operator can be tuned to deliver most of its output in the early stages of the execution, which is an important asset in a system meant for human interaction. Our preliminary experimental evaluation validates the proposed BindJoin algorithms, and shows they can provide good performance in queries involving distributed data and expensive programs. Key-words: data and program integration, distributed query processing, binding patterns INRIA, Caravel project. Contact: [email protected] y PRISM laboratory, University of Versailles and INRIA, Caravel project. Contact: [email protected] z INRIA, Caravel project. Contact: [email protected] x INRIA, Caravel project. Contact: [email protected] Intgration ecace de donnes et de programmes utilisants des patterns d'accs Rsum : Dans ce rapport, nous tudions l'intgration de donnes et de programmes dans une archite...\n\n1-Hop Neighbour:\nScalable Trigger Processing + Current database trigger systems have extremely limited scalability. This paper proposes a way to develop a truly scalable trigger system. Scalability to large numbers of triggers is achieved with a trigger cache to use main memory effectively, and a memory-conserving selection predicate index based on the use of unique expression formats called expression signatures. A key observation is that if a very large number of triggers are created, many will have the same structure, except for the appearance of different constant values. When a trigger is created, tuples are added to special relations created for expression signatures to hold the trigger's constants. These tables can be augmented with a database index or main-memory index structure to serve as a predicate index. The design presented also uses a number of types of concurrency to achieve scalability, including token (tuple)-level, condition-level, rule action-level, and datalevel concurrency. 1. Introduction Trigger feature...\n\n2-Hop Neighbour:\nHow to Avoid Building DataBlades That Know the Value of Everything and the Cost of Nothing The object-relational database management system (ORDBMS) offers many potential benefits for scientific, multimedia and financial applications. However, work remains in the integration of domain-specific class libraries (data cartridges, extenders, DataBlades ® ) into ORDBMS query processing. A major problem is that the standard mechanisms for query selectivity estimation, taken from relational database systems, rely on properties specific to the standard data types; creation of new mechanisms remains extremely difficult because the software interfaces provided by vendors are relatively low-level. In this paper, we discuss extensions of the generalized search tree, or GiST, to support a higher-level but less type-specific approach. Specifically, we discuss the computation of selectivity estimates with confidence intervals using a variety of index-based approaches and present results from an experimental comparison of these methods with several estimators from the literature. 1. 
Intro...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'On-line 3D gesture recognition utilising dissimilarity measures In the field of Human-Computer Interaction (HCI), gesture recognition is becoming increasingly important as a mode of communication, in addition to the more common visual, aural and oral modes, and is of particular interest to designers of Augmentative and Alternative Communication (AAC) systems for people with disabilities. A complete microcomputer system is described, GesRec3D, which facilitates the data acquisition, segmentation, learning, and recognition of 3-Dimensional arm gestures. The gesture data is acquired from a Polhemus electro-magnetic tracker system, where sensors are placed on the finger, wrist and elbow of one arm. Coded gestures are linked to user-defined text, to be typed or spoken by a text-to-speech engine, which is integrated into the system. A segmentation method and an algorithm for classification are both presented, which includes acceptance/rejection thresholds based on intra-class and inter-class dissimilarity measures. Results of recognition hits, confusion ...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nGlove-TalkII: A neural network interface which maps gestures to parallel formant speech synthesizer controls Glove-TalkII is a system which translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to 10 control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of GloveTalkII uses several input devices (including a Cyberglove, a ContactGlove, a 3-space tracker, and a foot-pedal), a parallel formant speech synthesizer and 3 neural networks. The gesture-to-speech task is divided into vowel and consonant production by using a gating network to weight the outputs of a vowel and a consonant neural network. The gating network and the consonant network are trained with examples from the user. The vowel network implements a fixed, user-defined relationship between hand-position and vowel sound and does not require any training ...\n\n1-Hop Neighbour:\nVisual Interpretation of Hand Gestures for Human-Computer Interaction: A Review The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. We survey the literature on vision-based hand gesture recognition within the context of its role in HCI. The number of approaches to video-based hand gesture recognition has grown in recent years. Thus, the need for systematization and analysis of different aspects of gestural interaction has developed. We discuss a complete model of hand gestures that possesses both spatial and dynamic properties of human hand gestures and can accommodate for all their natural types. Two classes of models that have been employed for interpretation of hand gestures for HCI are considered. The first utilizes 3D models of the human hand, while the second relies on the appearance of the human hand in the image. 
Investigation of model parameters and analysis feat...\n\n2-Hop Neighbour:\nInterpretation of Shape-related Iconic Gestures in Virtual Environments The interpretation of iconic gestures in spatial domains is a promising idea to improve the communicative capabilities of human-computer interfaces. So far, approaches towards gesture recognition focused mainly on deictic and emblematic gestures. Iconics, viewed as iconic signs in the sense of Peirce, are different from deictics and emblems, for their relation to the referent is based on similarity. In the work reported here, the breakdown of the complex notion of similarity provides the key idea towards a computational model of gesture semantics for iconic gestures. Based on an empirical study,we describe first steps towards a recognition model for shape-related iconic gestures and its implementation in a prototype gesture recognition system. Observations are focused on spatial concepts and their relation to features of iconic gestural expressions. The recognition model is based on a graphmatching method which compares the decomposed geometrical structures of gesture and object.\n\n2-Hop Neighbour:\nReal-Time Input of 3D Pose and Gestures of a User's Hand and Its Applications for HCI In this paper, we introduce a method for tracking a user's hand in 3D and recognizing the hand's gesture in real-time without the use of any invasive devices attached to the hand. Our method uses multiple cameras for determining the position and orientation of a user's hand moving freely in a 3D space. In addition, the method identifies predetermined gestures in a fast and robust manner by using a neural network which has been properly trained beforehand. This paper also describes results of user study of our proposed method and its application for several types of applications, including 3D object handling for a desktop system and 3D walk-through for a large immersive display system. 1.\n\n2-Hop Neighbour:\nFeedback From Video For Virtual Reality Navigation Important preconditions for wide acceptance of virtual reality systems include their comfort, ease and naturalness to use. Most existing trackers suer from discomfortrelated issues. For example, body-based trackers (such as hand controllers, joysticks or helmet attachments) restrict spontaneity and naturalness of motion, whereas groundbased devices (e.g., hand controllers) limit the workspace by literally binding an operator to the ground. Controls have similar problems. This paper describes using real-time video with registered depth information (from a commercially available camera) for virtual reality navigation. A camera-based setup can replace cumbersome trackers. The method includes selective depth processing for increased speed, and a robust skin-color segmentation for handling illumination variations.\n\n2-Hop Neighbour:\nSpeech/Gesture Interface to a Visual Computing Environment for Molecular Biologists In recent years there has been tremendous progress in 3D, immersive display and virtual reality (VR) technologies. Scientific visualization of data is one of many applications that has benefited from this progress. To fully exploit the potential of these applications in the new environment there is a need for \"natural\" interfaces that allow the manipulation of such displays without burdensome attachments. This paper describes the use of visual hand gesture analysis enhanced with speech recognition for developing a bimodal gesture/speech interface for controlling a 3-D display. 
The interface augments an existing application, VMD, which is a VR visual computing environment for molecular biologists. The free hand gestures are used for manipulating the 3-D graphical display together with a set of speech commands. We concentrate on the visual gesture analysis techniques used in developing this interface. The dual modality of gesture/speech is found to greatly aid the interaction capability....\n\n2-Hop Neighbour:\nExtraction and Classification of Visual Motion Patterns for Hand Gesture Recognition We present a new method for extracting and classifying motion patterns to recognize hand gestures. First, motion segmentation of the image sequence is generated based on a multiscale transform and attributed graph matching of regions across frames. This produces region correspondences and their affine transformations. Second, color information of motion regions is used to determine skin regions. Third, human head and palm regions are identified based on the shape and size of skin areas in motion. Finally, affine transformations defining a region's motion between successive frames are concatenated to construct the region's motion trajectory. Gestural motion trajectories are then classified by a time-delay neural network trained with backpropagation learning algorithm. Our experimental results show that hand gestures can be recognized well using motion patterns. 1 Introduction This paper is concerned with the problem of detecting two-dimensional motion across image frames and classifyi...", "role": "user" }, { "content": "<answer>Human-Computer Interaction</answer>", "role": "assistant" } ]
[ { "content": "Classify the node '3D Hand Pose Reconstruction Using Specialized Mappings A system for recovering 3D hand pose from monocular color sequences is proposed. The system employs a non-linear supervised learning framework, the specialized mappings architecture (SMA), to map image features to likely 3D hand poses. The SMA's fundamental components are a set of specialized forward mapping functions, and a single feedback matching function. The forward functions are estimated directly from training data, which in our case are examples of hand joint configurations and their corresponding visual features. The joint angle data in the training set is obtained via a CyberGlove, a glove with 22 sensors that monitor the angular motions of the palm and fingers. In training, the visual features are generated using a computer graphics module that renders the hand from arbitrary viewpoints given the 22 joint angles. The viewpoint is encoded by two real values, therefore 24 real values represent a hand pose. We test our system both on synthetic sequences and on sequences taken with a color camera. The system automatically detects and tracks both hands of the user, calculates the appropriate features, and estimates the 3D hand joint angles and viewpoint from those features. Results are encouraging given the complexity of the task.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nToward Scalability in ASL Recognition: Breaking Down Signs into Phonemes In this paper we present a novel approach to continuous, whole-sentence ASL recognition that uses phonemes instead of whole signs as the basic units. Our approach is based on a sequential phonological model of ASL. According to this model the ASL signs can be broken into movements and holds, which are both considered phonemes. This model does away with the distinction between whole signs and epenthesis movements that we made in previous work [13]. Instead, epenthesis movements are just like the other movements that constitute the signs. We subsequently train Hidden Markov Models (HMMs) to recognize the phonemes, instead of whole signs and epenthesis movements that we recognized previously [13]. Because the number of phonemes is limited, HMM-based training and recognition of the ASL signal becomes computationally more tractable and has the potential to lead to the recognition of large-scale vocabularies. We experimented with a 22 word vocabulary, and we achieved similar recognition r...\n\n1-Hop Neighbour:\nRotation Invariant Neural Network-Based Face Detection In this paper, we present a neural network-based face detection system. Unlike similar systems which are limited to detecting upright, frontal faces, this system detects faces at any degree of rotation in the image plane. The system employs multiple networks; the first is a \"router\" network which processes each input window to determine its orientation and then uses this information to prepare the window for one or more \"detector\" networks. We present the training methods for both types of networks. We also perform sensitivity analysis on the networks, and present empirical results on a large test set. Finally, we present preliminary results for detecting faces which are rotated out of the image plane, such as profiles and semi-profiles. 
This work was partially supported by grants from Hewlett-Packard Corporation, Siemens Corporate Research, Inc., the Department of the Army, Army Research Office under grant number DAAH04-94-G-0006, and by the Office of Naval Research under grant number...\n\n1-Hop Neighbour:\nView-independent Recognition of Hand Postures Since human hand is highly articulated and deformable, hand posture recognition is a challenging example in the research of view-independent object recognition. Due to the difficulties of the modelbased approach, the appearance-based learning approach is promising to handle large variation in visual inputs. However, the generalization of many proposed supervised learning methods to this problem often suffers from the insufficiency of labeled training data. This paper describes an approach to alleviate this difficulty by adding a large unlabeled training set. Combining supervised and unsupervised learning paradigms, a novel and powerful learning approach, the Discriminant-EM (D-EM) algorithm, is proposed in this paper to handle the case of small labeled training set. Experiments show that D-EM outperforms many other learning methods. Based on this approach, we implement a gesture interface to recognize a set o...\n\n2-Hop Neighbour:\nSigner-independent Continuous Sign Language Recognition Based on SRN/HMM A divide-and-conquer approach is presented for signer-independent continuous Chinese Sign Language(CSL) recognition in this paper. The problem of continuous CSL recognition is divided into the subproblems of isolated CSL recognition. We combine the simple recurrent network(SRN) with the hidden Markov models(HMM) in this approach. The improved SRN is introduced for segmentation of continuous CSL. Outputs of SRN are regarded as the states of HMM, and the Lattice Viterbi algorithm is employed to search the best word sequence in the HMM framework. Experimental results show SRN/HMM approach has better performance than the standard HMM one.\n\n2-Hop Neighbour:\nFace Detection Using Mixtures of Linear Subspaces We present two methods using mixtures of linear subspaces for face detection in gray level images. One method uses a mixture of factor analyzers to concurrently perform clustering and, within each cluster, perform local dimensionality reduction. The parameters of the mixture model are estimated using an EM algorithm. A face is detected if the probability of an input sample is above a predefined threshold. The other mixture of subspaces method uses Kohonen’s self-organizing map for clustering and Fisher Linear Discriminant to find the optimal projection for pattern classification, and a Gaussian distribution to model the class-conditional density function of the projected samples for each class. The parameters of the class-conditional density functions are maximum likelihood estimates and the decision rule is also based on maximum likelihood. A wide range of face images including ones in different poses, with different expressions and under different lighting conditions are used as the training set to capture the variations of human faces. Our methods have been tested on three sets of 225 images which contain 871 faces. Experimental results on the first two datasets show that our methods perform as well as the best methods in the literature, yet have fewer false detects. 
1\n\n2-Hop Neighbour:\nText Classification from Labeled and Unlabeled Documents using EM This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.\n\n2-Hop Neighbour:\nEstimating Mixture Models of Images and Inferring Spatial Transformations Using the EM Algorithm Presented at the IEEE Conference on Computer Vision and Pattern Recognition, Ft. Collins, CO, June, 1999. Mixture modeling and clustering algorithms are effective, simple ways to represent images using a set of data centers. However, in situations where the images include background clutter and transformations such as translation, rotation, shearing and warping, these methods extract data centers that include clutter and represent dierent transformations of essentially the same data. Taking face images as an example, it would be more useful for the dierent clusters to represent dierent poses and expressions, instead of cluttered versions of dierent translations, scales and rotations. By including clutter and transformation as unobserved, latent variables in a mixture model, we obtain a new \\transformed mixture of Gaussians\", which is invariant to a specied set of transformations. We show how a linear-time EM algorithm can be used to t this model by jointly estimating a mixture mo...\n\n2-Hop Neighbour:\nCapturing Natural Hand Articulation Vision-based motion capturing of hand articulation is a challenging task, since the hand presents a motion of high degrees of freedom. Model-based approaches could be taken to approach this problem by searching in a high dimensional hand state space, and matching projections of a hand model and image observations. However, it is highly inefficient due to the curse of dimensionality. Fortunately, natural hand articulation is highly constrained, which largely reduces the dimensionality of hand state space. This paper presents a model-based method to capture hand articulation by learning hand natural constraints. Our study shows that natural hand articulation lies in a lower dimensional configurations space characterized by a union of linear manifolds spanned by a set of basis configurations. By integrating hand motion constraints, an efficient articulated motion-capturing algorithm is proposed based on sequential Monte Carlo techniques. 
Our experiments show that this algorithm is robust and accurate for tracking natural hand movements. This algorithm is easy to extend to other articulated motion capturing tasks.", "role": "user" }, { "content": "<answer>Human-Computer Interaction</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Topical Locality in the Web: Experiments and Observations Most web pages are linked to others with related content. This idea, combined with another that says that text in, and possibly around, HTML anchors describe the pages to which they point, is the foundation for a usable World-Wide Web. In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content to be high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and that anchor text may be a useful discriminator among unseen child pages. These results present the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents. 1 Introduction Most web pages are linked to others with related content...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nAdaptive Retrieval Agents: Internalizing Local Context and Scaling up to the Web . This paper focuses on two machine learning abstractions springing from ecological models: (i) evolutionary adaptation by local selection, and (ii) selective query expansion by internalization of environmental signals. We first outline a number of experiments pointing to the feasibility and performance of these methods on a general class of graph environments. We then describe how these methods have been applied to the intelligent retrieval of information distributed across networked environments. In particular, the paper discusses a novel distributed evolutionary algorithm and representation used to construct populations of adaptive Web agents. These InfoSpiders search on-line for information relevant to the user, by traversing hyperlinks in an autonomous and intelligent fashion. They can adapt to the spatial and temporal regularities of their local context. Our results suggest that InfoSpiders could complement current search engine technology by starting up where search engines stop...\n\n1-Hop Neighbour:\nFinding Related Pages in the World Wide Web When using traditional search engines, users have to formulate queries to describe their information need. This paper discusses a different approach toweb searching where the input to the search process is not a set of query terms, but instead is the URL of a page, and the output is a set of related web pages. A related web page is one that addresses the same topic as the original page. For example, www.washingtonpost.com is a page related to www.nytimes.com, since both are online newspapers. We describe two algorithms to identify related web pages. These algorithms use only the connectivity information in the web (i.e., the links between pages) and not the content of pages or usage information. We haveimplemented both algorithms and measured their runtime performance. To evaluate the e ectiveness of our algorithms, we performed a user study comparing our algorithms with Netscape's \\What's Related \" service [12]. 
Our study showed that the precision at 10 for our two algorithms are 73 % better and 51 % better than that of Netscape, despite the fact that Netscape uses both content and usage pattern information in addition to connectivity information.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nCategorisation by Context Assistance in retrieving of documents on the World Wide Web is provided either by search engines, through keyword based queries, or by catalogues, which organise documents into hierarchical collections. Maintaining catalogues manually is becoming increasingly difficult due to the sheer amount of material, and therefore it will be necessary to resort to techniques for automatic classification of documents. Classification is traditionally performed by extracting information for indexing a document from the document itself. The paper describes the technique of categorisation by context, which exploits the context perceivable from the structure of HTML documents to extract useful information for classifying the documents they refer to. We present the results of experiments with a preliminary implementation of the technique. 1. INTRODUCTION Most Web search engines (e.g. Altavista^TM [Altavista], HotBot^TM [HotBot], Excite^TM [Excite]) perform search based on the content of docume...\n\n2-Hop Neighbour:\nExploiting Redundancy in Question Answering Our goal is to automatically answer brief factual questions of the form \"When was the Battle of Hastings?\" or \"Who wrote The Wind in the Willows?\". Since the answer to nearly any such question can now be found somewhere on the Web, the problem reduces to finding potential answers in large volumes of data and validating their accuracy. We apply a method for arbitrary passage retrieval to the first half of the problem and demonstrate that answer redundancy can be used to address the second half. 
The success of our approach depends on the idea that the volume of available Web data is large enough to supply the answer to most factual questions multiple times and in multiple contexts. A query is generated from a question and this query is used to select short passages that may contain the answer from a large collection of Web data. These passages are analyzed to identify candidate answers. The frequency of these candidates within the passages is used to \"vote\" for the most likely answer. The ...\n\n2-Hop Neighbour:\nText Database Selection for Longer Queries A metasearch engine is a system that supports unified access to multiple local search engines. One of the main challenges in building a large-scale metasearch engine is to solve the database (search engine) selection problem, which is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. For the database of each search engine, a representative which indicates approximately the contents of the database is created in advance to enable database selection. The representatives of all databases can be integrated into a single representative to make the selection process more scalable. While an integrated representative with high scalability has just been proposed and has been found to be effective for short queries, its effectiveness for longer queries is significantly lower. In the Internet environment, most queries initially submitted by users are short queries. However, it has been found that better search effectiveness can often be achieved when additional terms are added to the initial queries through query expansion or relevance feedback. The resulting queries are usually longer than the initial queries. In this paper, we propose a new method to construct database representatives and to decide which databases to select for longer queries. Experimental results are given to compare the performance of the new method with that of a previous method.\n\n2-Hop Neighbour:\nCS 395T Large-Scale Data Mining Fall 2001 projects the vectors x_1, ..., x_n onto the principal components. Note that the eigenvectors of the sample covariance matrix are the left singular vectors of the matrix (1/sqrt(n-1)) [x_1, x_2, ..., x_n]. Thus PCA can be obtained from the SVD of mean centered data. The mean centering is the important difference between PCA and SVD and can yield qualitatively different results for data sets where the mean is not equal to 0, as shown in Figure 1 (the leading singular vector may not be in the same direction as the principal component). 2 Clustering Considerations Clustering is the grouping together of similar objects. Usually, the clustering problem is posed as a\n\n2-Hop Neighbour:\nA Case Study in Web Search using TREC Algorithms Web search engines rank potentially relevant pages/sites for a user query. Ranking documents for user queries has also been at the heart of the Text REtrieval Conference (TREC in short) under the label ad-hoc retrieval. The TREC community has developed document ranking algorithms that are known to be the best for searching the document collections used in TREC, which are mainly comprised of newswire text. However, the web search community has developed its own methods to rank web pages/sites, many of which use link structure on the web, and are quite different from the algorithms developed at TREC. 
This study evaluates the performance of a state-of-the-art keyword-based document ranking algorithm (coming out of TREC) on a popular web search task: finding the web page/site of an entity, e.g., companies, universities, organizations, individuals, etc. This form of querying is quite prevalent on the web. The results from the TREC algorithms are compared to four commercial web search engines. Results show that for finding the web page/site of an entity, commercial web search engines are notably better than a state-of-the-art TREC algorithm. These results are in sharp contrast to results from several previous studies. Keywords Search engines, TREC ad-hoc, keyword-based ranking, link-based ranking 1.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Reasoning within Fuzzy Description Logics Description Logics (DLs) are suitable, well-known, logics for managing structured knowledge. They allow reasoning about individuals and well defined concepts, i.e. set of individuals with co#hfiP pro# erties. The experience in using DLs inapplicatio#& has sho wn that in many cases we wo#6H like to extend their capabilities. In particular, their use in the co# texto# Multimedia Info#mediafi Retrieval (MIR) leadsto the co# vincement that such DLssho#PF allo w the treatmento f the inherentimprecisio# in multimediao# ject co# tent representatio# and retrieval. In this paper we will present a fuzzyextensio# ALC,co# bining Zadeh's fuzzy lo#zy with a classical DL. In particular,co#rticu beco#FK fuzzy and, thus,reaso#HO6 ab o#fi impreciseco#recis is suppo#ppfi6 We will define its syntax, its semantics, describe its pro# erties and present a co#PHOSfi9 tpro#F&fi9KFS calculus for reasoning in it.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nUsing an Expressive Description Logic: FaCT or Fiction? Description Logics form a family of formalisms closely related to semantic networks but with the distinguishing characteristic that the semantics of the concept description language is formally defined, so that the subsumption relationship between two concept descriptions can be computed by a suitable algorithm. Description Logics have proved useful in a range of applications but their wider acceptance has been hindered by their limited expressiveness and the intractability of their subsumption algorithms. This paper addresses both these issues by describing a sound and complete tableaux subsumption testing algorithm for a relatively expressive Description Logic which, in spite of the logic's worst case complexity, has been shown to perform well in realistic applications. 1 INTRODUCTION Description Logics (DLs) form a family of formalisms which have grown out of knowledge representation techniques using frames and semantic networks\n\n1-Hop Neighbour:\nA Description Logic for Vague Knowledge This work introduces the concept language ALCFM which is an extension of ALC to many-valued logics. ALCFM allows to express vague concepts, e.g. more or less enlarged or very small. To realize this extension to many-valued logics, the classical notions of satisfiability and subsumption had to be modified appropriately. For example, ALCFM -concepts are no longer either satisfiable or unsatisfiable, but they are satisfiable to a certain degree. The main contribution of this paper is a sound and complete method for computing the degree of subsumption between two ALCFM -concepts. 1 Introduction This work takes its motivation from the occurrence of vague concept descriptions in different application areas. Often, application--inherent information is characterized by a very high degree of vagueness. Appropriate information systems must be able to process this kind of data. So far, there are no systems that really solve the corresponding problems due to the lack of powerful basic methods. A...\n\n2-Hop Neighbour:\nAuthoring and Annotation of Web Pages in CREAM Richly interlinked, machine-understandable data constitute the basis for the Semantic Web. We provide a framework, CREAM, that allows for creation of metadata. 
While the annotation mode of CREAM allows to create metadata for existing web pages, the authoring mode lets authors create metadata --- almost for free --- while putting together the content of a page. As a particularity of our framework, CREAM allows to create relational metadata, i.e. metadata that instantiate interrelated definitions of classes in a domain ontology rather than a comparatively rigid template-like schema as Dublin Core. We discuss some of the requirements one has to meet when developing such an ontology-based framework, e.g. the integration of a metadata crawler, inference services, document management and a meta-ontology, and describe its implementation, viz. Ont-O-Mat a component-based, ontology-driven Web page authoring and annotation tool.\n\n2-Hop Neighbour:\nA Description Logic with Transitive and Inverse Roles and Role Hierarchies transitive roles play an important rôle in the adequate representation of aggregated objects: they allow these objects to be described by referring to their parts without specifying a level of decomposition. In [Horrocks & Gough, 1997], the Description Logic (DL) ALCH R + is presented, which extends ALC with transitive roles and a role hierarchy. It is argued in [Sattler, 1998] that ALCH R + is well-suited to the representation of aggregated objects in applications that require various part-whole relations to be distinguished, some of which are transitive. For example, a medical knowledge base could contain the following entries defining two different parts of the brain, namely the gyrus and the cerebellum. In contrast to a gyrus, a cerebellum is an integral organ and, furthermore, a functional component of the brain. Hence the role is component (which is a non-transitive sub-role of is part) is used to describe the relation between the brain and the cerebellum: is component ⊑ is part gyrus:= (∀consists.brain mass) ⊓ (∃is part.brain) cerebellum:= organ ⊓ (∃is component.brain) However, ALCH R + does not allow the simultaneous description of parts by means of the whole to which they belong and of wholes by means of their constituent parts: one or other is possible, but not both. To overcome this limitation, we present the DL ALCHI R + which extends ALCH R + with inverse (converse) roles, allowing, for example, the use of has part as well as is part. 1 Using ALCHIR +, we can define a tumorous brain as: tumorous brain:= brain ⊓ (tumorous ⊔ (∃has part.tumorous)) Part of this work was carried out while being a guest at IRST,\n\n2-Hop Neighbour:\nOntological Overhearing The collaboration between two intelligent agents can be greatly enhanced if a third agent, who has some understanding of the communication between the first two, intervenes giving appropriate information or acting helpfully without having been explicitly involved. The behavior of this third agent, quite common in human interaction, is called overhearing. We present an agent architecture modeling this behavior. In particular, we focus on overhearing based on ontological reasoning; that is, the overhearer semantically selects pieces of communication according to his own knowledge (ontologically organized) and goals.\n\n2-Hop Neighbour:\nOntoEdit: Collaborative Ontology Development for the Semantic Web Abstract. Ontologies now play an important role for enabling the semantic web. They provide a source of precisely defined terms e.g. for knowledge-intensive applications. The terms are used for concise communication across people and applications. 
Typically the development of ontologies involves collaborative efforts of multiple persons. OntoEdit is an ontology editor that integrates numerous aspects of ontology engineering. This paper focuses on collaborative development of ontologies with OntoEdit which is guided by a comprehensive methodology. 1\n\n2-Hop Neighbour:\nCombinations of Modal Logics Combining logics for modelling purposes has become a rapidly expanding enterprise that is inspired mainly by concerns about modularity and the wish to join together different kinds of information. As any interesting real world system is a complex, composite entity, decomposing its descriptive requirements (for design, verification, or maintenance purposes) into simpler, more restricted, reasoning tasks is not only appealing but is often the only plausible way forward. It would be an exaggeration to claim that we currently have a thorough understanding of `combined methods.' However, a core body of notions, questions and results has emerged for an important class of combined logics, and we are beginning to understand how this core theory behaves when it is applied outside this particular class. In this paper we will consider the combination of modal (including temporal) logics, identifying leading edge research that we, and others, have carried out. Such combined sys", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Mining Source Coverage Statistics for Data Integration Recent work in data integration has shown the importance of statistical information about the coverage and overlap of sources for efficient query processing. Despite this recognition there are no effective approaches for learning the needed statistics. The key challenge in learning such statistics is keeping the number of needed statistics low enough to have the storage and learning costs manageable. Naive approaches can become infeasible very quickly. In this paper we present a set of connected techniques that estimate the coverage and overlap statistics while keeping the needed statistics tightly under control. Our approach uses a hierarchical classification of the queries, and threshold based variants of familiar data mining techniques to dynamically decide the level of resolution at which to learn the statistics. We describe the details of our method, and present preliminary experimental results showing the feasibility of the approach. 1.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nConcept Hierarchy Based Text Database Categorization Document categorization as a technique to improve the retrieval of useful documents has been extensively investigated. One important issue in a large-scale metasearch engine is to select text databases that are likely to contain useful documents for a given query. We believe that database categorization can be a potentially effective technique for good database selection, especially in the Internet environment where short queries are usually submitted. In this paper, we propose and evaluate several database categorization algorithms. This study indicates that while some document categorization algorithms could be adopted for database categorization, algorithms that take into consideration the special characteristics of databases may be more effective. Preliminary experimental results are provided to compare the proposed database categorization algorithms. A prototype database categorization system based on one of the proposed algorithms has been developed.\n\n1-Hop Neighbour:\nJoint Optimization of Cost and Coverage of Query Plans in Data Integration Existing approaches for optimizing queries in data integration use decoupled strategies--attempting to optimize coverage and cost in two separate phases. Since sources tend to have a variety of access limitations, such phased optimization of cost and coverage can unfortunately lead to expensive planning as well as highly inefficient plans. In this paper we present techniques for joint optimization of cost and coverage of the query plans. Our algorithms search in the space of parallel query plans that support multiple sources for each subgoal conjunct. The refinement of the partial plans takes into account the potential parallelism between source calls, and the binding compatibilities between the sources included in the plan. We start by introducing and motivating our query plan representation. We then briefly review how to compute the cost and coverage of a parallel plan. Next, we provide both a System-R style query optimization algorithm as well as a greedy local search algorithm for searching in the space of such query plans. 
Finally we present a simulation study that demonstrates that the plans generated by our approach will be significantly better, both in terms of planning cost, and in terms of plan execution cost, compared to the existing approaches. 1.\n\n2-Hop Neighbour:\nQuery Optimization in the Presence of Limited Access Patterns 1 Introduction The goal of a query optimizer of a database system is to translate a declarative query expressed on a logical schema into an imperative query execution plan that accesses the physical storage of the data, and applies a sequence of relational operators. In building query execution plans, traditional relational query optimizers try to find the most efficient method for accessing the necessary data. When possible, a query optimizer will use auxiliary data structures such as an index on a file in order to efficiently retrieve a certain set of tuples in a relation. However, when such structures do not exist or are not useful for the given query, the alternative of scanning the entire relation always exists. The existence of the fall back option to perform a complete scan is an important assumption in traditional query optimization. Several recent query processing applications have the common characteristic that it is not always possible to perform complete scans on the data. Instead, the query optimization problem is complicated by the fact that there are only limited access patterns to the data. One such\n\n2-Hop Neighbour:\nFinding the Most Similar Documents across Multiple Text Databases In this paper, we present a methodology for finding the n most similar documents across multiple text databases for any given query and for any positive integer n. This methodology consists of two steps. First, databases are ranked in a certain order. Next, documents are retrieved from the databases according to the order and in a particular way. If the databases containing the n most similar documents for a given query can be ranked ahead of other databases, the methodology will guarantee the retrieval of the n most similar documents for the query. A statistical method is provided to identify databases, each of which is estimated to contain at least one of the n most similar documents. Then, a number of strategies is presented to retrieve documents from the identified databases. Experimental results are given to illustrate the relative performance of different strategies. 1 Introduction The Internet has become a vast information source in recent years and can be considered as the w...\n\n2-Hop Neighbour:\nExperiences with Selecting Search Engines Using Metasearch Search engines are among the most useful and high profile resources on the Internet. The problem of finding information on the Internet has been replaced with the problem of knowing where search engines are, what they are designed to retrieve and how to use them. This paper describes and evaluates SavvySearch, a meta-search engine designed to intelligently select and interface with multiple remote search engines. The primary meta-search issue examined is the importance of carefully selecting and ranking remote search engines for user queries. We studied the efficacy of SavvySearch's incrementally acquired meta-index approach to selecting search engines by analyzing the effect of time and experience on performance. We also compared the meta-index approach to the simpler categorical approach and showed how much experience is required to surpass the simple scheme. 
1 Introduction Search engines are powerful tools for assisting the otherwise unmanageable task of navigating the rapidly ex...\n\n2-Hop Neighbour:\nQuery Optimization in the Presence of Limited Access Patterns 1 Introduction The goal of a query optimizer of a database system is to translate a declarative query expressed on a logical schema into an imperative query execution plan that accesses the physical storage of the data, and applies a sequence of relational operators. In building query execution plans, traditional relational query optimizers try to find the most efficient method for accessing the necessary data. When possible, a query optimizer will use auxiliary data structures such as an index on a file in order to efficiently retrieve a certain set of tuples in a relation. However, when such structures do not exist or are not useful for the given query, the alternative of scanning the entire relation always exists. The existence of the fall back option to perform a complete scan is an important assumption in traditional query optimization. Several recent query processing applications have the common characteristic that it is not always possible to perform complete scans on the data. Instead, the query optimization problem is complicated by the fact that there are only limited access patterns to the data. One such\n\n2-Hop Neighbour:\nA Methodology to Retrieve Text Documents from Multiple Databases In this paper, we present a methodology for finding the n most similar documents across multiple text databases for any given query and for any positive integer n. This methodology consists of two steps. First, the contents of databases are indicated approximately by database representatives. Databases are ranked using their representatives in a certain order with respect to the given query. We provide a necessary and sufficient condition to rank the databases optimally. In order to satisfy this necessary and sufficient condition, we provide three estimation methods. One estimation method is intended for short queries; the other two are for all queries. Second, we provide an algorithm, OptDocRetrv, to retrieve documents from the databases according to their rank and in a particular way. We show that if the databases containing the n most similar documents for a given query are ranked ahead of other databases, our methodology will guarantee the retrieval of the n most similar d...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Context in Web Search Web search engines generally treat search requests in isolation. The results for a given query are identical, independent of the user, or the context in which the user made the request. Nextgeneration search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts. Greater use of context in web search may help increase competition and diversity on the web.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nLearning to Extract Symbolic Knowledge from the World Wide Web The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more e ective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach istodevelop a trainable information extraction system that takes two inputs. The rst is an ontology that de nes the classes (e.g., Company, Person, Employee, Product) and relations (e.g., Employed.By, Produced.By) ofinterest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. 
This paper describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system that has created a knowledge base describing university people, courses, and research projects.\n\n1-Hop Neighbour:\nImproved Algorithms for Topic Distillation in a Hyperlinked Environment Abstract This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity analysis based algorithm with content analysis. We identify three problems with the existing approach and devise algorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45% over pure connectivity analysis. 1\n\n2-Hop Neighbour:\nTowards Web-Scale Web Archeology Web-scale Web research is difficult. Information on the Web is vast in quantity, unorganized and uncatalogued, and available only over a network with varying reliability. Thus, Web data is difficult to collect, to store, and to manipulate efficiently. Despite these difficulties, we believe performing Web research at Web-scale is important. We have built a suite of tools that allow us to experiment on collections that are an order of magnitude or more larger than are typically cited in the literature. Two key components of our current tool suite are a fast, extensible Web crawler and a highly tuned, in-memory database of connectivity information. A Web page repository that supports easy access to and storage for billions of documents would allow us to study larger data sets and to study how the Web evolves over time.\n\n2-Hop Neighbour:\nUsing Web Structure for Classifying and Describing Web Pages The structure of the web is increasingly being used to improve organization, search, and analysis of information on the web. For example, Google uses the text in citing documents (documents that link to the target document) for search. We analyze the relative utility of document text, and the text in citing documents near the citation, for classification and description. Results show that the text in citing documents, when available, often has greater discriminative and descriptive power than the text in the target document itself. The combination of evidence from a document and citing documents can improve on either information source alone. Moreover, by ranking words and phrases in the citing documents according to expected entropy loss, we are able to accurately name clusters of web pages, even with very few positive examples. Our results confirm, quantify, and extend previous research using web structure in these areas, introducing new methods for classification and description of pages.\n\n2-Hop Neighbour:\nInverted files and dynamic signature files for optimisation of Web Directories Web directories are taxonomies for the classification of Web documents. This kind of IR systems present a specific type of search where the document collection is restricted to one area of the category graph. This paper introduces a specific data architecture for Web directories which improves the performance of restricted searches. That architecture is based on a hybrid data structure composed of an inverted file with multiple embedded signature files.
Two variants based on the proposed model are presented: hybrid architecture with total information and hybrid architecture with partial information. The validity of this architecture has been analysed by means of developing both variants to be compared with a basic model. The performance of the restricted queries was clearly improved, specially the hybrid model with partial information, which yielded a positive response under any load of the search system.\n\n2-Hop Neighbour:\nRank Aggregation Methods for the Web We consider the problem of combining ranking results from various sources. In the context of the Web, the main applications include building meta-search engines, combining ranking functions, selecting documents based on multiple criteria, and improving search precision through word associations. Wedevelop a set of techniques for the rank aggregation problem and compare their performance to that of well-known methods. A primary goal of our work is to design rank aggregation techniques that can effectively combat \"spam,\" a serious problem in Web searches. Experiments show that our methods are simple, efficient, and effective. Keywords: rank aggregation, ranking functions, metasearch, multi-word queries, spam 1.\n\n2-Hop Neighbour:\nExploiting Structure for Intelligent Web Search Together with the rapidly growing amount of online data we register an immense need for intelligent search engines that access a restricted amount of data as found in intranets or other limited domains. This sort of search engines must go beyond simple keyword indexing/matching, but they also have to be easily adaptable to new domains without huge costs. This paper presents a mechanism that addresses both of these points: first of all, the internal document structure is being used to extract concepts which impose a directorylike structure on the documents similar to those found in classified directories. Furthermore, this is done in an efficient way which is largely language independent and does not make assumptions about the document structure.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'The PageRank Citation Ranking: Bringing Order to the Web The importance of a Web page is an inherently subjective matter, which depends on the readers interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nWhat can you do with a Web in your Pocket? The amount of information available online has grown enormously over the past decade. Fortunately, computing power, disk capacity, and network bandwidth have also increased dramatically. It is currently possible for a university research project to store and process the entire World Wide Web. Since there is a limit on how much text humans can generate, it is plausible that within a few decades one will be able to store and process all the human-generated text on the Web in a shirt pocket. The Web is a very rich and interesting data source. In this paper, we describe the Stanford WebBase, a local repository of a significant portion of the Web. Furthermore, we describe a number of recent experiments that leverage the size and the diversity of the WebBase. First, we have largely automated the process of extracting a sizable relation of books (title, author pairs) from hundreds of data sources spread across the World Wide Web using a technique we call Dual Iterative Pattern Relation Extraction. Second, we have developed a global ranking of Web pages called PageRank based on the link structure of the Web that has properties that are useful for search and navigation. Third, we have used PageRank to develop a novel search engine called Google, which also makes heavy use of anchor text. All of these experiments rely significantly on the size and diversity of the WebBase. 1\n\n1-Hop Neighbour:\nUsing PageRank to Characterize Web Structure Recent work on modeling the Web graph has dwelt on capturing the degree distributions observed on the Web. Pointing out that this represents a heavy reliance on \"local\" properties of the Web graph, we study the distribution of PageRank values (used in the Google search engine) on the Web. This distribution is of independent interest in optimizing search indices and storage. We show that PageRank values on the Web follow a power law. We then develop detailed models for the Web graph that explain this observation, and moreover remain faithful to previously studied degree distributions. We analyze these models, and compare the analyses to both snapshots from the Web and to graphs generated by simulations on the new models. To our knowledge this represents the first modeling of the Web that goes beyond fitting degree distributions on the Web.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. 
Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nEngineering a multi-purpose test collection for Web retrieval experiments Past research into text retrieval methods for the Web has been restricted by the lack of a test collection capable of supporting experiments which are both realistic and reproducible. The 1.69 million document WT10g collection is proposed as a multi-purpose testbed for experiments with these attributes, in distributed IR, hyperlink algorithms and conventional ad hoc retrieval. WT10g was constructed by selecting from a superset of documents in such a way that desirable corpus properties were preserved or optimised. These properties include: a high degree of inter-server connectivity, integrity of server holdings, inclusion of documents related to a very wide spread of likely queries, and a realistic distribution of server holding sizes. We confirm that WT10g contains exploitable link information using a site (homepage) finding experiment. Our results show that, on this task, Okapi BM25 works better on propagated link anchor text than on full text. Keywords: Web retrieval; Link-based ranking; Distributed information retrieval; Test collections 1\n\n2-Hop Neighbour:\nDetection of Heterogeneities in a Multiple Text Database Environment As the number of text retrieval systems (search engines) grows rapidly on the World Wide Web, there is an increasing need to build search brokers (metasearch engines) on top of them. Often, the task of building an effective and efficient metasearch engine is hindered by the heterogeneities among the underlying local search engines. In this paper, we first analyze the impact of various heterogeneities on building a metasearch engine. We then present some techniques that can be used to detect the most prominent heterogeneities among multiple search engines. Applications of utilizing the detected heterogeneities in building better metasearch engines will be provided.\n\n2-Hop Neighbour:\nLearning to Create Customized Authority Lists The proliferation of hypertext and the popularity of Kleinberg's HITS algorithm have brought about an increased interest in link analysis.
While HITS and its older relatives from the Bibliometrics provide a method for finding authoritative sources on a particular topic, they do not allow individual users to inject their own opinions on what sources are authoritative. This paper presents a technique for learning a user's internal model of authority. We present experimental results based on Cora on-line index, a database of approximately one million on-line computer science literature references. 1. Introduction Bibliometrics (White & McCain, 1989; Small, 1973) involves studying the structure that emerges from sets of linked documents. Traditionally, these links have taken the form of citations among journal articles, although Kleinberg (1997) and others (e.g., Brin & Page, 1998) have found that they adapt well to sets of hyperlinked documents. Bibliometric techniques exis...\n\n2-Hop Neighbour:\nCategorisation by Context Assistance in retrieving of documents on the World Wide Web is provided either by search engines, through keyword based queries, or by catalogues, which organise documents into hierarchical collections. Maintaining catalogues manually is becoming increasingly difficult due to the sheer amount of material, and therefore it will be necessary to resort to techniques for automatic classification of documents. Classification is traditionally performed by extracting information for indexing a document from the document itself. The paper describes the technique of categorisation by context, which exploits the context perceivable from the structure of HTML documents to extract useful information for classifying the documents they refer to. We present the results of experiments with a preliminary implementation of the technique. 1. INTRODUCTION Most Web search engines (e.g. Altavista^TM [Altavista], HotBot^TM [HotBot], Excite^TM [Excite]) perform search based on the content of docume...\n\n2-Hop Neighbour:\nTopical Locality in the Web: Experiments and Observations Most web pages are linked to others with related content. This idea, combined with another that says that text in, and possibly around, HTML anchors describe the pages to which they point, is the foundation for a usable World-Wide Web. In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content to be high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and that anchor text may be a useful discriminator among unseen child pages. These results present the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents. 1 Introduction Most web pages are linked to others with related content...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'CBR for Dynamic Situation Assessment in an Agent-Oriented Setting . In this paper, we describe an approach of using case-based reasoning in an agent-oriented setting. CBR is used in a real-time environment to select actions of soccer players based on previously collected experiences encoded as cases. Keywords: Case-based reasoning, artificial soccer, agentoriented programming. 1 Introduction In recent years, agent-oriented technologies have caught much attention both in research and commercial areas. To provide a testbed for developing, evaluating, and testing various agent architectures, RoboCup Federation started the Robot World Cup Initiative (RoboCup), which is an attempt to foster AI and intelligent robotics research by providing a somewhat standardized problem where wide range of technologies from AI and robotics can be integrated and examined. In particular, during the annual RoboCup championships different teams utilizing different models and methods compete with each other in the domain of soccer playing. Key issues in this domain are that...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nBDI Agents: from Theory to Practice The study of computational agents capable of rational behaviour has received a great deal of attention in recent years. Theoretical formalizations of such agents and their implementations have proceeded in parallel with little or no connection between them. This paper explores a particular type of rational agent, a BeliefDesire -Intention (BDI) agent. The primary aim of this paper is to integrate (a) the theoretical foundations of BDI agents from both a quantitative decision-theoretic perspective and a symbolic reasoning perspective; (b) the implementations of BDI agents from an ideal theoretical perspective and a more practical perspective; and (c) the building of large-scale applications based on BDI agents. In particular, an air-traffic management application will be described from both a theoretical and an implementation perspective. Introduction The design of systems that are required to perform high-level management and control tasks in complex dynamic environments is becoming ...\n\n1-Hop Neighbour:\nAT Humboldt in RoboCup-99 The paper describes the architecture and the scientific goals of the virtual soccer team \"AT Humboldt 99\", which is the successor of vice champion \"AT Humboldt 98\" from RoboCup-98 in Paris and world champion \"AT Humboldt 97\" from RoboCup-97 in Nagoya. Scientific goals are the development of agent-oriented techniques and learning methods.\n\n2-Hop Neighbour:\nRapid Concurrent Software Engineering in Competitive Situations This article is an experience report on the evolutionary development process of AT Humboldt, a multi agent system which has become World Champion 1997 and Vice World Champion 1998 of RoboCup simulator league. It details why the artifical soccer initiative RoboCup is a tempting domain for rapid concurrent software engineering. Both the development processes in 1997 and 1998 are described, compared and evaluated. Lessons learned for development projects in distributed control conclude this report. 
1 Introduction In this article the project managers describe the evolutionary development process of the software project AT (AgentTeam) Humboldt, which has become World Champion 1997 and Vice World Champion 1998 in the simulator league of the artifical soccer contest RoboCup ([10]). The RoboCup initiative recently gets more and more popular among scientists in robotics, distributed systems and distributed artificial intelligence because of its strong competitive character and tight resource b...\n\n2-Hop Neighbour:\nSupporting Internet-Scale Multi-Agent Systems ts a model of AgentScape from the agent perspective, that is, the location comprising the middleware and the resources are represented by a location manager agent and resource objects. Calls from an agent to the middleware are modeled by requests to the location manager agent to, for example, create an agent or move an agent. Information about resources residing at the location can be retrieved by binding to the resource objects, which are local distributed objects. These objects can be accessed only within the location they reside, not from outside the location. For development of agent applications, an application programming interface (API) and a runtime system (RTS) are provided, see Fig. 1. The default API and RTS can be extended to provide a higher-level application programming interface with, for example, a model that offers more structure and semantics to the agent application developer. Within AgentScape, management of large-scale agent systems is an important issue, includi\n\n2-Hop Neighbour:\nHow to Avoid Knowing It All Beliefs have been formally modelled in the last decades using doxastic logics. The possible worlds model and its associated Kripke semantics provide an intuitive semantics for these logics, but they seem to commit us to model agents that are logically omniscient (they believe every classical tautology) and perfect reasoners (their beliefs are closed under classical deductive closure). Thus, this model would not be appropriate to model non-ideal agents, that have resource limitations that prevent them from attaining such levels of doxastic competence. This report contains a statement of these problems and a brief survey of some of the most interesting approaches that have been suggested to overcome them. Contents 1 Formal models of belief 3 1.1 Possible worlds and Kripke semantics . . . . . . . . . . . . . . . . . . . . . 3 1.2 Logical omniscience and perfect reasoning . . . . . . . . . . . . . . . . . . 5 2 Avoiding logical omniscience 7 2.1 Syntactic approaches . . . . . . . ...\n\n2-Hop Neighbour:\nBDI Design Principles and Cooperative Implementation - A Report on RoboCup Agents This report discusses two major views on BDI deliberation for autonomous agents. The first view is a rather conceptual one, presenting general BDI design principles, namely heuristic options, decomposed reasoning and layered planning, which enable BDI deliberation in realtime domains. The second view is focused on the practical application of the design principles in RoboCup Simulation League. This application not only evaluates the usefulness in deliberation but also the usefulness in rapid cooperative implementation. We compare this new approach, which has been used in the Vice World Champion team AT Humboldt 98, to the old approach of AT Humboldt 97, and we outline the extensions for AT Humboldt 99, which are still under work. 
Conditions faced by deliberation in multi agent contexts differ significantly from the basic assumption of classical AI search and planning. Traditional game playing methods for example assume a static well-known setting and a fixed round-based interaction of ...\n\n2-Hop Neighbour:\nA Knowledge-Based Approach for Designing Intelligent Team Training Systems This paper presents a knowledge approach to designing team training systems using intelligent agents. We envision a computer-based training system in which teams are trained by putting them through scenarios, which allow them to practice their team skills. There are two important roles that intelligent agents can play; these are virtual team members, and tutors. To carry out these functions, these agents must be equipped with an understanding of the task domain, the team structure, the selected decision-making process and their beliefs about other team members' mental states. Even though existing agent teamwork models incorporate many of the elements listed above, they have not focused on analyzing information needs of team members to support proactive agent interactions. To encode the team knowledge, we have developed a representation language, based on the BDI model, called MALLET. A Petri Net model of an individual agent's plans and information needs can be derived from the role des...", "role": "user" }, { "content": "<answer>Artificial Intelligence</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'A Web Odyssey: from Codd to XML INTRODUCTION The Web presents the database area with vast opportunities and commensurate challenges. Databases and the Web are organically connected at many levels. Web sites are increasingly powered by databases. Collections of linked Web pages distributed across the Internet are themselves tempting targets for a database. The emergence of XML as the lingua franca of the Web brings some much needed order and will greatly facilitate the use of database techniques to manage Web information. This paper will discuss some of the developments related to the Web from the viewpoint of database theory. As we shall see, the Web scenario requires revisiting some of the basic assumptions of the area. To be sure, database theory remains as valid as ever in the classical setting, and the database industry will continue to representamulti-billion dollar target of applicability for the foreseeable future. But the Web represents an opportunityofanentirely di#erent scale. We are th' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nInteraction between Path and Type Constraints XML [7], which is emerging as an important standard for data exchange on the World-Wide Web, highlights the importance of semistructured data. Although the XML standard itself does not require any schema or type system, a number of proposals [6, 17, 19] have been developed that roughly correspond to data definition languages. These allow one to constrain the structure of XML data by imposing a schema on it. These and other proposals also advocate the need for integrity constraints, another form of constraints that should, for example, be capable of expressing inclusion constraints and inverse relationships. The latter have recently been studied as path constraints in the context of semistructured data [4, 9]. It is likely that future XML proposals will involve both forms of constraints, and it is therefore appropriate to understand the interaction between them. This paper investigates that interaction. In particular it studies constraint implication problems, which are important both i...\n\n1-Hop Neighbour:\nKeys for XML this paper. The proposal extends the key speci cation of XML Data by allowing one to specify keys in terms of XPath [24] expressions. There are a number of technical problems in connection with XPath. XPath is a relatively complex language in which one can not only move down the document tree, but also sideways or upwards, not to mention that predicates and functions can be embedded as well. The problem with XPath is that questions about equivalence or inclusion of XPath expressions are, as far as the authors are aware, unresolved; and these issues are importantifwewant to reason about keys as wedo in relational databases. Yet until we know how to determine the equivalence of XPath expressions, there is no general method of saying whether two such speci cations are equivalent. Another technical issue is value equality. XML Schema restricts equality to text, but the authors have encountered cases in whichkeys are not so restricted. See Section 7.1 for a more detailed discussion. 
However, the main reason for writing this paper is that none of the existing key proposals address the issue of hierarchical keys, which appear to be ubiquitous in hierarchically structured databases, especially in scientific data formats. A top-level key may be used to identify components of a document, and within each component a secondary key is used to identify sub-components, and so on. Moreover, the authors believe that the use of keys for citing parts of a document is sufficiently important that it is appropriate to consider key specification independently of other proposals for constraining the structure of XML documents\n\n1-Hop Neighbour:\nTypechecking for XML Transformers We study the typechecking problem for XML transformers: given an XML transformation program and a DTD for the input XML documents, check whether every result of the program conforms to a specified output DTD. We model XML transformers using a novel device called a k-pebble transducer, that can express most queries without data-value joins in XML-QL, XSLT, and other XML query languages. Types are modeled by regular tree languages, a robust extension of DTDs. The main result of the paper is that typechecking for k-pebble transducers is decidable. Consequently, typechecking can be performed for a broad range of XML transformation languages, including XMLQL and a fragment of XSLT. 1. INTRODUCTION Traditionally, database query languages have focused on data retrieval, with complex data transformations left to applications. The new XML data exchange standard for the Web, and emerging applications requiring data wrapping and integration, have shifted the focus towards data transformations....\n\n2-Hop Neighbour:\nSubsumption for XML Types . XML data is often used (validated, stored, queried, etc) with respect to different types. Understanding the relationship between these types can provide important information for manipulating this data. We propose a notion of subsumption for XML to capture such relationships. Subsumption relies on a syntactic mapping between types, and can be used for facilitating validation and query processing. We study the properties of subsumption, in particular the notion of the greatest lower bound of two schemas, and show how this can be used as a guide for selecting a storage structure. While less powerful than inclusion, subsumption generalizes several other mechanisms for reusing types, notably extension and refinement from XML Schema, and subtyping. 1 Introduction XML [5] is a data format for Web applications. As opposed to e.g., relational databases, XML documents do not have to be created and used with respect to a fixed, existing schema. This is particularly useful in Web ap...\n\n2-Hop Neighbour:\nMonadic Queries over Tree-Structured Data Monadic query languages over trees currently receive considerable interest in the database community, as the problem of selecting nodes from a tree is the most basic and widespread database query problem in the context of XML. Partly a survey of recent work done by the authors and their group on logical query languages for this problem and their expressiveness, this paper provides a number of new results related to the complexity of such languages over so-called axis relations (such as \"child\" or \"descendant\") which are motivated by their presence in the XPath standard or by their utility for data extraction (wrapping).", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Boosting and Rocchio Applied to Text Filtering We discuss two learning algorithms for text filtering: modified Rocchio and a boosting algorithm called AdaBoost. We show how both algorithms can be adapted to maximize any general utility matrix that associates cost (or gain) for each pair of machine prediction and correct label. We first show that AdaBoost significantly outperforms another highly effective text filtering algorithm. We then compare AdaBoost and Rocchio over three large text filtering tasks. Overall both algorithms are comparable and are quite effective. AdaBoost produces better classifiers than Rocchio when the training collection contains a very large number of relevant documents. However, on these tasks, Rocchio runs much faster than AdaBoost. 1 Introduction With the explosion in the amount of information available electronically, information filtering systems that automatically send articles of potential interest to a user are becoming increasingly important. If users indicate their interests to a filtering system...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nMachine Learning and Natural Language Processing In this report, some collaborative work between the fields of Machine Learning (ML) and Natural Language Processing (NLP) is presented. The document is structured in two parts. The first part includes a superficial but comprehensive survey covering the state--of--the--art of machine learning techniques applied to natural language learning tasks. In the second part, a particular problem, namely Word Sense Disambiguation (WSD), is studied in more detail. In doing so, four algorithms for supervised learning, which belong to different families, are compared in a benchmark corpus for the WSD task. Both qualitative and quantitative conclusions are drawn. This document stands for the complementary documentation for the conference \"Aprendizaje autom 'atico aplicado al procesamiento del lenguaje natural\", given by the author within the course: \"Curso de Industrias de la Lengua: La Ingenier'ia Lingu'istica en la Sociedad de la Informaci'on\", Fundaci'on Duques de Soria. Soria. July 2000. 1 Con...\n\n1-Hop Neighbour:\nIntelligent Information Triage In many applications, large volumes of time-sensitive textual information require triage: rapid, approximate prioritization for subsequent action. In this paper, we explore the use of prospective indications of the importance of a time-sensitive document, for the purpose of producing better document filtering or ranking. By prospective, we mean importance that could be assessed by actions that occur in the future. For example, a news story may be assessed (retrospectively) as being important, based on events that occurred after the story appeared, such as a stock price plummeting or the issuance of many follow-up stories. If a system could anticipate (prospectively) such occurrences, it could provide a timely indication of importance. Clearly, perfect prescience is impossible. However, sometimes there is sufficient correlation between the content of an information item and the events that occur subsequently. We describe a process for creating and evaluating approximate information-triage procedures that are based on prospective indications. 
Unlike many informationretrieval applications for which document labeling is a laborious, manual process, for many prospective criteria it is possible to build very large, labeled, training corpora automatically. Such corpora can be used to train text classification procedures that will predict the (prospective) importance of each document. This paper illustrates the process with two case studies, demonstrating the ability to predict whether a news story will be followed by many, very similar news stories, and also whether the stock price of one or more companies associated with a news story will move significantly following the appearance of that story. We conclude by discussing how the comprehensibility of the learned classifiers can be critical to success. 1.\n\n1-Hop Neighbour:\nBoosting for Document Routing RankBoost is a recently proposed algorithm for learning ranking functions. It is simple to implement and has strong justifications from computational learning theory. We describe the algorithm and present initial experimental results on applying it to the document routing problem. The first set of results applies RankBoost to a text representation produced using modern term weighting methods. Performance of RankBoost is somewhat inferior to that of a state-of-the-art routing algorithm which is, however, more complex and less theoretically justified than RankBoost. RankBoost achieves comparable performance to the state-of-the-art algorithm when combined with feature or example selection heuristics. Our second set of results examines the behavior of RankBoost when it has to learn not only a ranking function but also all aspects of term weighting from raw data. Performance is usually, though not always, less good here, but the term weighting functions implicit in the resulting ranking fun...\n\n2-Hop Neighbour:\nProbability Based Clustering for Document and User Properties . Information Retrieval systems can be improved by exploiting context information such as user and document features. This article presents a model based on overlapping probabilistic or fuzzy clusters for such features. The model is applied within a fusion method which linearly combines several retrieval systems. The fusion is based on weights for the different retrieval systems which are learned by exploiting relevance feedback information. This calculation can be improved by maintaining a model for each document and user cluster. That way, the optimal retrieval system for each document or user type can be identified and applied. The extension presented in this article allows overlapping, probabilistic clusters of features to further refine the process. 1.\n\n2-Hop Neighbour:\nImproving Text Categorization Methods for Event Tracking Automated tracking of events from chronologically ordered document streams is a new challenge for statistical text classification. Existing learning techniques must be adapted or improved in order to effectively handle difficult situations where the number of positive training instances per event is extremely small, the majority of training documents are unlabelled, and most of the events have a short duration in time. We adapted several supervised text categorization methods, specifically several new variants of the k-Nearest Neighbor (kNN) algorithm and a Rocchio approach, to track events. 
All of these methods showed significant improvement (up to 71% reduction in weighted error rates) over the performance of the original kNN algorithm on TDT benchmark collections, making kNN among the top-performing systems in the recent TDT3 official evaluation. Furthermore, by combining these methods, we significantly reduced the variance in performance of our event tracking system over different ...\n\n2-Hop Neighbour:\nAn Evaluation of Statistical Approaches to Text Categorization Abstract. This paper focuses on a comparative evaluation of a wide-range of text categorization methods, including previously published results on the Reuters corpus and new results of additional experiments. A controlled study using three classifiers, kNN, LLSF and WORD, was conducted to examine the impact of configuration variations in five versions of Reuters on the observed performance of classifiers. Analysis and empirical evidence suggest that the evaluation results on some versions of Reuters were significantly affected by the inclusion of a large portion of unlabelled documents, making those results difficult to interpret and leading to considerable confusions in the literature. Using the results evaluated on the other versions of Reuters which exclude the unlabelled documents, the performance of twelve methods are compared directly or indirectly. For indirect comparisons, kNN, LLSF and WORD were used as baselines, since they were evaluated on all versions of Reuters that exclude the unlabelled documents. As a global observation, kNN, LLSF and a neural network method had the best performance; except for a Naive Bayes approach, the other learning algorithms also performed relatively well.\n\n2-Hop Neighbour:\nA Theory of Term Weighting Based on Exploratory Data Analysis Techniques of exploratory data analysis are used to study the weight of evidence that the occurrence of a query term provides in support of the hypothesis that a document is relevant to an information need. In particular, the relationship between the document frequency and the weight of evidence is investigated. A correlation between the document frequency normalized by collection size and the mutual information between relevance and term occurrence is uncovered. This correlation is found to be robust across a variety of query sets and document collections. Based on this relationship, a theoretical explanation of the efficacy of inverse document frequency for term weighting is developed which differs in both style and content from theories previously put forth. The theory predicts that a \"flattening\" of idf at both low and high frequency should result in improved retrieval performance. This altered idf formulation is tested on all TREC query sets. Retrieval results corroborate the predicti...\n\n2-Hop Neighbour:\nLanguage Models for Financial News Recommendation We present a unique approach to identifying news stories that influence the behavior of financial markets. Specifically, we describe the design and implementation of Ænalyst, a system that can recommend interesting news stories -- stories that are likely to affect market behavior. Ænalyst operates by correlating the content of news stories with trends in financial time series. We identify trends in time series using piecewise linear fitting and then assign labels to the trends according to an automated binning procedure. We use language models to represent patterns of language that are highly associated with particular labeled trends.
Ænalyst can then identify and recommend news stories that are highly indicative of future trends. We evaluate the system in terms of its ability to recommend the stories that will affect the behavior of the stock market. We demonstrate that stories recommended by Ænalyst could be used to profitably predict forthcoming trends in stock prices.", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'On using degrees of belief in BDI agents : The past few years have seen a rise in the popularity of the use of mentalistic attitudes such as beliefs, desires and intentions to describe intelligent agents. Many of the models which formalise such attitudes do not admit degrees of belief, desire and intention. We see this as an understandable simplification, but as a simplification which means that the resulting systems cannot take account of much of the useful information which helps to guide human reasoning about the world. This paper starts to develop a more sophisticated system based upon an existing formal model of these mental attributes. 1 Introduction In the past few years there has been a lot of attention given to building formal models of autonomous software agents; pieces of software which operate to some extent independently of human intervention and which therefore may be considered to have their own goals and the ability to determine how to achieve those goals. Many of these formal models are based on the use of ...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nArgumentation and Multi-Agent Decision Making This paper summarises our on-going work on mixedinitiative decision making which extends both classical decision theory and a symbolic theory of decision making based on argumentation to a multi-agent domain. Introduction One focus of our work at Queen Mary and Westfield College is the development of multi-agent systems which deal with real world problems, an example being the diagnosis of faults in electricity distribution networks (Jennings et al. 1996). These systems are mixed-initiative in the sense that they depend upon interactions between agents---no single agent has sufficient skills or resources to carry out the tasks which the multi-agent system as a whole is faced with. Because the systems are built to operate in the real world, the agents are forced to deal with the usual problems of incomplete and uncertain information, and increasingly we are turning to the use of techniques from decision theory, both classical and non-standard, in order to ensure that our agents make so...\n\n1-Hop Neighbour:\nA Hybrid Model For Sharing Information Between Fuzzy, Uncertain And Default Reasoning Models In Multi-Agent Systems This paper develops a hybrid model which provides a unified framework for the fol- lowing four kinds of reasoning: 1) Zadeh's fuzzy approximate reasoning; 2) truthqualification uncertain reasoning with respect to fuzzy propositions; 3) fuzzy default reasoning (proposed, in this paper, as an extension of Reiter's default reasoning); and 4) truth-qualification uncertain default reasoning associated with fuzzy statements (developed in this paper to enrich fuzzy default reasoning with uncertain information). Our hybrid model has the following characteristics: 1) basic uncertainty is estimated in terms of words or phrases in natural language and basic propositions are fuzzy; 2) uncertainty, linguistically expressed, can be handled in default reasoning; and 3) the four kinds of rea- soning models mentioned above and their combination models will be the special cases of our hybrid model. Moreover, our model allows the reasoning to be performed in the case in which the information is fuzzy, uncertain and partial. 
More importantly, the problems of sharing the information among heterogeneous fuzzy, uncertain and default reasoning models can be solved efficiently by using our model. Given this, our framework can be used as a basis for information sharing and exchange in knowledge-based multi-agent systems for practical applications such as automated group negotiations. Actually, to build such a foundation is the motivation of this paper\n\n2-Hop Neighbour:\nBilateral Negotiation with Incomplete and Uncertain Information: a Decision-Theoretic Approach using a Model of the Opponent . The application of software agents to e-commerce has made a radical change in the way businesses and consumer to consumer transactions take place. Agent negotiation is an important aspect of e-commerce to bring satisfactory agreement in business transactions. We approach e-commerce and negotiation in the context of a distributed multiagent peer help system, I-Help, supporting students in a university course. Personal agents keep models of student preferences and negotiate on their behalf to acquire resources (help) from other agents. We model negotiation among personal agents by means of influence diagram, a decision theoretic tool. To cope with the uncertainty inherent in a dynamic market with self-interested participants, the agents create models of their opponents during negotiation, which help them predict better their opponents' actions. We carried out experiments comparing the proposed negotiation mechanism with influence diagram, one using in addition a model of t...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Combining Labeled and Unlabeled Data with Co-Training We consider the problem of using a large unlabeled sample to boost performance of a learning algorithm when only a small set of labeled examples is available. In particular, we consider a setting in which the description of each example can be partitioned into two distinct views, motivated by the task of learning to classify web pages. For example, the description of a web page can be partitioned into the words occurring on that page, and the words occurring in hyperlinks that point to that page. We assume that either view of the example would be su cient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment amuch smaller set of labeled examples. Speci cally, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm's predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to signi cant improvement of hypotheses in practice. As part of our analysis, we provide new re-' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nApplying Co-Training methods to Statistical Parsing We propose a novel Co-Training method for statistical parsing. The algorithm takes as input a small corpus (9695 sentences) annotated with parse trees, a dictionary of possible lexicalized structures for each word in the training set and a large pool of unlabeled text. The algorithm iteratively labels the entire data set with parse trees. Using empirical results based on parsing the Wall Street Journal corpus we show that training a statistical parser on the combined labeled and unlabeled data strongly outperforms training only on the labeled data. 1\n\n1-Hop Neighbour:\nCombining Labeled and Unlabeled Data for Text Classification With a Large Number of Categories A major concern with supervised learning techniques for text classification is that they often require a large number of labeled examples to learn accurately. One way to reduce the amount of labeled data required is to develop algorithms that can learn effectively from a small number of labeled examples augmented with a large number of unlabeled examples. In this paper, we develop a framework to incorporate unlabeled data in the Error-Correcting Output Coding (ECOC) setup by decomposing multiclass problems into multiple binary problems and then use Co-Training to learn the individual binary classification problems. We show that our method is especially useful for classification tasks involving a large number of categories where Co-training doesn't perform very well by itself and when combined with ECOC, outperforms several other algorithms that combine labeled and unlabeled data for text classification. 
1

1-Hop Neighbour:
Using Web Structure for Classifying and Describing Web Pages The structure of the web is increasingly being used to improve organization, search, and analysis of information on the web. For example, Google uses the text in citing documents (documents that link to the target document) for search. We analyze the relative utility of document text, and the text in citing documents near the citation, for classification and description. Results show that the text in citing documents, when available, often has greater discriminative and descriptive power than the text in the target document itself. The combination of evidence from a document and citing documents can improve on either information source alone. Moreover, by ranking words and phrases in the citing documents according to expected entropy loss, we are able to accurately name clusters of web pages, even with very few positive examples. Our results confirm, quantify, and extend previous research using web structure in these areas, introducing new methods for classification and description of pages.

2-Hop Neighbour:
Error-Correcting Output Coding for Text Classification This paper applies error-correcting output coding (ECOC) to the task of document categorization. ECOC, of recent vintage in the AI literature, is a method for decomposing a multiway classification problem into many binary classification tasks, and then combining the results of the subtasks into a hypothesized solution to the original problem. There has been much recent interest in the machine learning community about algorithms which integrate \"advice\" from many subordinate predictors into a single classifier, and error-correcting output coding is one such technique. We provide experimental results on several real-world datasets, extracted from the Internet, which demonstrate that ECOC can offer significant improvements in accuracy over conventional classification algorithms. 1 Introduction Error-correcting output coding is a recipe for solving multi-way classification problems. It works in two stages: first, independently construct many subordinate classifiers, each responsible for r...

2-Hop Neighbour:
Efficient Identification of Web Communities We define a community on the web as a set of sites that have more links (in either direction) to members of the community than to non-members. Members of such a community can be efficiently identified in a maximum flow / minimum cut framework, where the source is composed of known members, and the sink consists of well-known non-members. A focused crawler that crawls to a fixed depth can approximate community membership by augmenting the graph induced by the crawl with links to a virtual sink node. The effectiveness of the approximation algorithm is demonstrated with several crawl results that identify hubs, authorities, web rings, and other link topologies that are useful but not easily categorized. Applications of our approach include focused crawlers and search engines, automatic population of portal categories, and improved filtering. Categories and Subject Descriptors H.2.8 [Database Management]: Database Applications| data mining ; H.3.3 [Information Storage and Retrieval]: Information...

2-Hop Neighbour:
Enhancing Supervised Learning with Unlabeled Data In many practical learning scenarios, there is a small amount of labeled data along with a large pool of unlabeled data. Many supervised learning algorithms have been developed and extensively studied.
We present a new \"co-training\" strategy for using unlabeled data to improve the performance of standard supervised learning algorithms. Unlike much of the prior work, such as the co-training procedure of Blum and Mitchell (1998), we do not assume there are two redundant views both of which are sufficient for perfect classification. The only requirement our co-training strategy places on each supervised learning algorithm is that its hypothesis partitions the example space into a set of equivalence classes (e.g. for a decision tree each leaf defines an equivalence class). We evaluate our co-training strategy via experiments using data from the UCI repository. 1. Introduction In many practical learning scenarios, there is a small amount of labeled data along with a lar...\n\n2-Hop Neighbour:\nFeature Selection in Web Applications Using ROC Inflections and Power Set Pruning A basic problem of information processing is selecting enough features to ensure that events are accurately represented for classification problems, while simultaneously minimizing storage and processing of irrelevant or marginally important features. To address this problem, feature selection procedures perform a search through the feature power set to find the smallest subset meeting performance requirements. Major restrictions of existing procedures are that they typically explicitly or implicitly assume a fixed operating point, and make limited use of the statistical structure of the feature power set. We present a method that combines the Neyman-Pearson design procedure on finite data, with the directed set structure of the Receiver Operating Curves on the feature subsets, to determine the maximal size of the feature subsets that can be ranked in a given problem. The search can then be restricted to the smaller subsets, resulting in significant reductions in computational...\n\n2-Hop Neighbour:\nText Classification from Labeled and Unlabeled Documents using EM This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Focused Crawls, Tunneling, and Digital Libraries Crawling the Web to build collections of documents related to pre-specified topics became an active area of research during the late 1990's after crawler technology was developed for the benefit of search engines. Now, Web crawling is being seriously considered as an important strategy for building large scale digital libraries. This paper considers some of the crawl technologies that might be exploited for collection building. For example, to make such collection-building crawls more effective, focused crawling was developed, in which the goal was to make a \"best-first\" crawl of the Web. We are using powerful crawler software to implement a focused crawl but use tunneling to overcome some of the limitations of a pure best-first approach. Tunneling has been described by others as not only prioritizing links from pages according to the page's relevance score, but also estimating the value of each link and prioritizing on that as well. We add to this mix by devising a tunneling focused crawling strategy which evaluates the current crawl direction on the fly to determine when to terminate a tunneling activity. Results indicate that a combination of focused crawling and tunneling could be an e#ective tool for building digital libraries.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nHigh-Performance Web Crawling SRC’s charter is to advance the state of the art in computer systems by doing basic and applied research in support of our company’s business objectives. 
Our interests and projects span scalable systems (including hardware, networking, distributed systems, and programming-language technology), the Internet (including the Web, e-commerce, and information retrieval), and human/computer interaction (including user-interface technology, computer-based appliances, and mobile computing). SRC was established in 1984 by Digital Equipment Corporation. We test the value of our ideas by building hardware and software prototypes and assessing their utility in realistic settings. Interesting systems are too complex to be evaluated solely in the abstract; practical use enables us to investigate their properties in depth. This experience is useful in the short term in refining our designs and invaluable in the long term in advancing our knowledge. Most of the major advances in information systems have come through this approach, including personal computing, distributed systems, and the Internet. We also perform complementary work of a more mathematical character. Some of\n\n1-Hop Neighbour:\nCentroid-Based Document Classification: Analysis & Experimental Results . In this paper we present a simple linear-time centroid-based document classification algorithm, that despite its simplicity and robust performance, has not been extensively studied and analyzed. Our experiments show that this centroid-based classifier consistently and substantially outperforms other algorithms such as Naive Bayesian, k-nearest-neighbors, and C4.5, on a wide range of datasets. Our analysis shows that the similarity measure used by the centroidbased scheme allows it to classify a new document based on how closely its behavior matches the behavior of the documents belonging to different classes. This matching allows it to dynamically adjust for classes with different densities and accounts for dependencies between the terms in the different classes. 1 Introduction We have seen a tremendous growth in the volume of online text documents available on the Internet, digital libraries, news sources, and company-wide intranets. It has been forecasted that these docu...\n\n2-Hop Neighbour:\nAn Autonomous Page Ranking Method for Metasearch Engines this paper, the topics are derived from the user's query; the reputation of each result page on the query topic is computed, and the value used to rank the result pages across all participating search engines, without biasing the ranking towards any of the sources\n\n2-Hop Neighbour:\nUser Modeling for Information Access Based on Implicit Feedback User modeling can be used in information filtering and retrieval systems to improve the representation of a user's information needs. User models can be constructed by hand, or learned automatically based on feedback provided by the user about the relevance of documents that they have examined. By observing user behavior, it is possible to infer implicit feedback without requiring explicit relevance judgments. Previous studies based on Internet discussion groups (USENET news) have shown reading time to be a useful source of implicit feedback for predicting a user's preferences. The study reported in this paper extends that work by providing framework for considering alternative sources of implicit feedback, examining whether reading time is useful for predicting a user's preferences for academic and professional journal articles, and exploring whether retention behavior can usefully augment the information that reading time provides. 
Two user studies were conducted in which undergradua...

2-Hop Neighbour:
Text-Based Content Search and Retrieval in ad hoc P2P Communities We consider the problem of content search and retrieval in peer-to-peer (P2P) communities. P2P computing is a potentially powerful model for information sharing between ad hoc groups of users because of its low cost of entry and natural model for resource scaling with community size. As P2P communities grow in size, however, locating information distributed across the large number of peers becomes problematic. We present a distributed text-based content search and retrieval algorithm to address this problem. Our algorithm is based on a state-of-the-art text-based document ranking algorithm: the vector-space model instantiated with the TFxIDF ranking rule. A naive application of TFxIDF would require each peer in a community to collect an inverted index of the entire community. This is costly both in terms of bandwidth and storage. Instead, we show how TFxIDF can be approximated given compact summaries of peers' local inverted indexes. We make three contributions: (a) we show how the TFxIDF rule can be adapted to use the index summaries, (b) we provide a heuristic for adaptively determining the set of peers that should be contacted for a query, and (c) we show that our algorithm tracks TFxIDF's performance very closely, regardless of how documents are distributed throughout the community. Furthermore, our algorithm preserves the main flavor of TFxIDF by retrieving close to the same set of documents for any given query.

2-Hop Neighbour:
Document Categorization and Query Generation on the World Wide Web Using WebACE We present WebACE, an agent for exploring and categorizing documents on the World Wide Web based on a user profile. The heart of the agent is an unsupervised categorization of a set of documents, combined with a process for generating new queries that is used to search for new related documents and for filtering the resulting documents to extract the ones most closely related to the starting set. The document categories are not given a priori. We present the overall architecture and describe two novel algorithms which provide significant improvement over Hierarchical Agglomeration Clustering and AutoClass algorithms and form the basis for the query generation and search component of the agent. We report on the results of our experiments comparing these new algorithms with more traditional clustering algorithms and we show that our algorithms are fast and scalable. Authors are listed alphabetically. 1 Introduction The World Wide Web is a vast resource of information and services t...

2-Hop Neighbour:
Chart of Darkness: Mapping a Large Intranet We introduce and define the concept of dark matter on the Web. Dark matter for a person or Web crawler consists of pages that they cannot reach and view, but which another observer can. Dark matter is important to our understanding of the Web in that the portion of the Web any of us can see depends on our viewpoint. Different observers see different overlapping sections of the Web. However, no one can see all of the Web, even if they want to. We categorise the various types of dark matter that exist and how they may be discovered. Formal definitions of what constitutes lightness and darkness on the Web are formulated in terms of reachability. Our case study of dark matter within the Australian National University's intranet is reported.
We estimate that 87% of the ANU intranet's information is dark to our local search service, and 37% is potentially loadable Web data unreachable to almost every Web user. Finally, we discuss some of the implications of dark matter for estimating the size of the Web and for general Web searching.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'LIME: Linda Meets Mobility Lime is a system designed to assist in the rapid development of dependable mobile applications over both wired and ad hoc networks. Mobile agents reside on mobile hosts and all communication takes place via transiently shared tuple spaces distributed across the mobile hosts. The decoupled style of computing characterizing the Linda model is extended to the mobile environment. At the application level, both agents and hosts perceive movement as a sudden change of context. The set of tuples accessible by a particular agent residing on a given host is altered transparently in response to changes in the connectivity pattern among the mobile hosts. In this paper we present the key design concepts behind the Lime system. 1 INTRODUCTION Today's users demand ubiquitous network access independent of their physical location. This style of computation, often referred to as mobile computing, is enabled by rapid advances in the wireless communication technology. The networking scenarios enabled ...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nAbstractions and Infrastructures for the Design and Development of Mobile Agent Organizations Internet applications can take advantage of a paradigm based on autonomous and mobile agents. However, suitable abstractions and infrastructures are required for the effective engineering of such applications. In this paper, we argue that a conceptual framework for context-dependent coordination, supported by an infrastructure based on programmable media, can promote a modular and easy to manage approach to the design and development of mobile agent applications in terms of computational organizations. The MARS coordination infrastructure is presented as an implementation of a coordination infrastructure promoting context-dependent coordination. A case study in the area of workflow management is introduced to clarify the concepts presented. Keywords: Mobile Agents, Agent Organizations, Coordination Infrastructures, Agent-oriented Software Engineering. 1\n\n1-Hop Neighbour:\nA Framework For Designing, Modeling and Analyzing Agent Based Software Systems The agent paradigm is gaining popularity because it brings intelligence, reasoning and autonomy to software systems. Agents are being used in an increasingly wide variety of applications from simple email filter programs to complex mission control and safety systems. However there appears to be very little work in defining practical software architecture, modeling and analysis tools that can be used by software engineers. This should be contrasted with object-oriented paradigm that is supported by models such as UML and CASE tools that aid during the analysis, design and implementation phases of object-oriented software systems. In our research we are developing a framework and extensions to UML to address this need. Our approach is rooted in the BDI formalism, but stresses the practical software design methods instead of reasoning about agents. In this paper we describe our preliminary ideas Index Terms: Agent-Oriented programming, ObjectOriented programming, BDI, UML 1.\n\n1-Hop Neighbour:\nEngineering Mobile-agent Applications via Context-dependent Coordination The design and development of Internet applications can take advantage of a paradigm based on autonomous and mobile agents. 
However, mobility introduces peculiar coordination problems in agent-based Internet applications. First, it suggests the exploitation of an infrastructure based on a multiplicity of local interaction spaces. Second, it may require coordination activities to be adapted both to the characteristics of the execution environment where they occur and to the needs of the application to which the coordinating agents belong. In this context, this paper introduces the concept of context-dependent coordination based on programmable interaction spaces. On the one hand, interaction spaces associated to different execution environments may be independently programmed so as to lead to differentiated, environment-dependent, behaviors. On the other hand, agents can program the interaction spaces of the visited execution environments to obtain an application-dependent behavior of the interaction spaces themselves. Several examples show how an infrastructure enforcing context-dependent coordination can be effectively exploited to simplify and make more modular the design of Internet applications based on mobile agents. In addition, the MARS coordination infrastructure is presented as an example of a system in which the concept of context-dependent coordination has found a clean and efficient implementation.\n\n2-Hop Neighbour:\nBDI Agents: from Theory to Practice The study of computational agents capable of rational behaviour has received a great deal of attention in recent years. Theoretical formalizations of such agents and their implementations have proceeded in parallel with little or no connection between them. This paper explores a particular type of rational agent, a BeliefDesire -Intention (BDI) agent. The primary aim of this paper is to integrate (a) the theoretical foundations of BDI agents from both a quantitative decision-theoretic perspective and a symbolic reasoning perspective; (b) the implementations of BDI agents from an ideal theoretical perspective and a more practical perspective; and (c) the building of large-scale applications based on BDI agents. In particular, an air-traffic management application will be described from both a theoretical and an implementation perspective. Introduction The design of systems that are required to perform high-level management and control tasks in complex dynamic environments is becoming ...\n\n2-Hop Neighbour:\nAgent-Oriented Software Engineering ion: The process of defining a simplified model of the system that emphasises some of the details or properties, while suppressing others. . Organisation 1 : The process of identifying and managing interrelationships between various problem solving components. Next, the characteristics of complex systems need to be enumerated [8]: . Complexity frequently takes the form of a hierarchy. That is, a system that is composed of inter-related sub-systems, each of which is in turn hierarchic in structure, until the lowest level of elementary sub-system is reached. The precise nature of these organisational relationships varies between sub-systems, however some generic forms (such as client-server, peer, team, etc.) can be identified. These relationships are not static: they often vary over time. . The choice of which components in the system are primitive is relatively arbitrary and is defined by the observer's aims and objectives. . Hierarchic systems evolve more quickly than non-hiera...\n\n2-Hop Neighbour:\nFrom Design to Intention: Signs of a Revolution ce such environment and be influenced by it. ii. 
openness: software systems will be subject to decentralized management and will dynamically change their structure: new components can be dynamically created or destroyed and, via mobility, will be able to roam in and out the permeable boundaries of different software systems. Thus, the problem of openness is currently much broader than being simply a problem of interoperability; iii. locality in control: the components of software systems will represent autonomous loci of control. In fact, most components of software systems will be active, and will have local control over their activities, although will be in need of coordinating these activities with other active components. iv. locality in interactions: despite living in a fully connected world, software components interact with each other according to local (geographical or logical) patterns. In other words, systems will have to be modeled around clusters of locally interactin

2-Hop Neighbour:
Automated Derivation of Complex Agent Architectures from Analysis Specifications . Multiagent systems have been touted as a way to meet the need for distributed software systems that must operate in dynamic and complex environments. However, in order for multiagent systems to be effective, they must be reliable and robust. Engineering multiagent systems is a non-trivial task, providing ample opportunity for even experts to make mistakes. Formal transformation systems can provide automated support for synthesizing multiagent systems, which can greatly improve their correctness and reliability. This paper describes a semi-automated transformation system that generates an agent's internal architecture from an analysis specification in the MaSE methodology. 1

2-Hop Neighbour:
Coordination Middleware for XML-centric Applications This paper focuses on coordination middleware for distributed applications based on active documents and XML technologies. It introduces the main concepts underlying active documents and XML. Then, the paper goes into details about the problem of defining a suitable middleware architecture to effectively support coordination activities in applications including active documents and mobile agents, by specifically focusing on the role played by XML technologies in that context. According to a simple taxonomy, the characteristics of several middleware systems are analyzed and evaluated. This analysis enables us to identify the advantages and the shortcoming of the different approaches, and to identify the basic requirements of a middleware for XML-centric applications. 1.", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'A Uniform Approach to Programming the World Wide Web We propose a uniform model for programming distributed web applications. The model is based on the concept of web computation places and provides mechanisms to coordinate distributed computations at these places, including peer-to-peer communication between places and a uniform mechanism to initiate computation in remote places. Computations can interact with the flow of http requests and responses, typically as clients, proxies or servers in the web architecture. We have implemented the model using the global pointers and remote service requests provided by the Nexus communication library. We present the model and its rationale, with some illustrative examples, and we describe the implementation. 1 Introduction Many web applications require a significant amount of computation which may be distributed and requires coordination; these applications use the web infrastructure to good advantage but are often constrained by the architecture, which is fundamentally client-server. A variety...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nMobile Objects in Java Mobile Objects in Java provides support for object mobility in Java. Similarly to the RMI technique, a notion of client-side stub, called startpoint, is used to communicate transparently with a server-side stub, called endpoint. Objects and associated endpoints are allowed to migrate. Our approach takes care of routing method calls using an algorithm that we studied in [22]. The purpose of this paper is to present and evaluate the implementation of this algorithm in Java. In particular, two different strategies for routing method invocations are investigated, namely call forwarding and referrals. The result of our experimentation shows that the latter can be more efficient by up to 19%. 1\n\n2-Hop Neighbour:\nScalability Issues for Query Routing Service Discovery In this paper, we discuss the relationship between mediatorbased systems for service discovery in multi-agent systems, and the technique of query routing used for resource discovery in distributed information systems. We then construct a model of the query routing task which we use to examine the complexity and scalability characteristics of a number of commonly encountered architectures for resource or service discovery.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'A Theory of Proximity Based Clustering: Structure Detection by Optimization In this paper, a systematic optimization approach for clustering proximity or similarity data is developed. Starting from fundamental invariance and robustness properties, a set of axioms is proposed and discussed to distinguish different cluster compactness and separation criteria. The approach covers the case of sparse proximity matrices, and is extended to nested partitionings for hierarchical data clustering. To solve the associated optimization problems, a rigorous mathematical framework for deterministic annealing and mean--field approximation is presented. Efficient optimization heuristics are derived in a canonical way, which also clarifies the relation to stochastic optimization by Gibbs sampling. Similarity-based clustering techniques have a broad range of possible applications in computer vision, pattern recognition, and data analysis. As a major practical application we present a novel approach to the problem of unsupervised texture segmentation, which relies on statistical...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nUnsupervised Learning from Dyadic Data Dyadic data refers to a domain with two finite sets of objects in which observations are made for dyads, i.e., pairs with one element from either set. This includes event co-occurrences, histogram data, and single stimulus preference data as special cases. Dyadic data arises naturally in many applications ranging from computational linguistics and information retrieval to preference analysis and computer vision. In this paper, we present a systematic, domain-independent framework for unsupervised learning from dyadic data by statistical mixture models. Our approach covers different models with flat and hierarchical latent class structures and unifies probabilistic modeling and structure discovery. Mixture models provide both, a parsimonious yet flexible parameterization of probability distributions with good generalization performance on sparse data, as well as structural information about data-inherent grouping structure. We propose an annealed version of the standard Expectation Maximization algorithm for model fitting which is empirically evaluated on a variety of data sets from different domains.\n\n2-Hop Neighbour:\nKernel Expansions With Unlabeled Examples Modern classification applications necessitate supplementing the few available labeled examples with unlabeled examples to improve classification performance. We present a new tractable algorithm for exploiting unlabeled examples in discriminative classification. This is achieved essentially by expanding the input vectors into longer feature vectors via both labeled and unlabeled examples. The resulting classification method can be interpreted as a discriminative kernel density estimate and is readily trained via the EM algorithm, which in this case is both discriminative and achieves the optimal solution. We provide, in addition, a purely discriminative formulation of the estimation problem by appealing to the maximum entropy framework. We demonstrate that the proposed approach requires very few labeled examples for high classification accuracy. 
1 Introduction In many modern classification problems such as text categorization, very few labeled examples are available but a...\n\n2-Hop Neighbour:\nAn Introduction to Variational Methods for Graphical Methods . This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models (Bayesian networks and Markov random fields). We present a number of examples of graphical models, including the QMR-DT database, the sigmoid belief network, the Boltzmann machine, and several variants of hidden Markov models, in which it is infeasible to run exact inference algorithms. We then introduce variational methods, which exploit laws of large numbers to transform the original graphical model into a simplified graphical model in which inference is efficient. Inference in the simpified model provides bounds on probabilities of interest in the original model. We describe a general framework for generating variational transformations based on convex duality. Finally we return to the examples and demonstrate how variational algorithms can be formulated in each case.\n\n2-Hop Neighbour:\nProbabilistic Latent Semantic Analysis Probabilistic Latent Semantic Analysis is a novel statistical technique for the analysis of two--mode and co-occurrence data, which has applications in information retrieval and filtering, natural language processing, machine learning from text, and in related areas. Compared to standard Latent Semantic Analysis which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed method is based on a mixture decomposition derived from a latent class model. This results in a more principled approach which has a solid foundation in statistics. In order to avoid overfitting, we propose a widely applicable generalization of maximum likelihood model fitting by tempered EM. Our approach yields substantial and consistent improvements over Latent Semantic Analysis in a number of experiments.\n\n2-Hop Neighbour:\nEstimating Dependency Structure as a Hidden Variable This paper introduces a probability model, the mixture of trees that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms that use EM and the Minimum Spanning Tree algorithm to find the ML and MAP mixture of trees for a variety of priors, including the Dirichlet and the MDL priors. 1 INTRODUCTION A fundamental feature of a good model is the ability to uncover and exploit independencies in the data it is presented with. For many commonly used models, such as neural nets and belief networks, the dependency structure encoded in the model is fixed, in the sense that it is not allowed to vary depending on actual values of the variables or with the current case. However, dependency structures that are conditional on values of variables abound in the world around us. Consider for example bitmaps of handwritten digits. They obviously contain many dependencies between pixels; however, the pattern of these dependencies will vary acr...\n\n2-Hop Neighbour:\nLearning to Order Things wcohen,schapire,singer¡ There are many applications in which it is desirable to order rather than classify instances. Here we consider the problem of learning how to order, given feedback in the form of preference judgments, i.e., statements to the effect that one instance should be ranked ahead of another. 
We outline a two-stage approach in which one first learns by conventional means a preference function, of the form PREF(u, v), which indicates whether it is advisable to rank u before v. We show that the problem of finding the ordering that agrees best with a preference function is NP-complete, even under very restrictive assumptions. Nevertheless, we describe a simple greedy algorithm that is guaranteed to find a good approximation. We then discuss an on-line learning algorithm, based on the “Hedge” algorithm, for finding a good linear combination of ranking “experts.” We use the ordering algorithm combined with the on-line learning algorithm to find a combination of “search experts,” each of which is a domain-specific query expansion strategy for a WWW search engine, and present experimental results that demonstrate the merits of our approach. 1", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Implementing Teams of Agents Playing Simulated Robot Soccer . This article is intended to present an overview of the issues related to implementing teams of cooperating agents playing simulated robot soccer. First we introduce the concept of robot soccer and the simulated environment. Then we discuss why the (simulated) robot soccer is an interesting application area from the point of view of robotics and articial intelligence. The main part of the paper contains a discussion of agent architectures, both from theoretical and practical point of view. Later we discuss how to combine individual agents into teams having common strategies and goals. We also discuss learning, both on individual and team levels. 1 Introduction Robot soccer is a growing area of interest for the robotics and articial intelligence communities. There are many reasons for that. The main one is the complexity of the domain together with a set of well-dened rules governing the behaviour of the agents in this domain. The domain is suited for experiments both in ...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe RoboCup Synthetic Agent Challenge 97 RoboCup Challenge offers a set of challenges for intelligent agent researchers using a friendly competition in a dynamic, real-time, multiagent domain. While RoboCup in general envisions longer range challenges over the next few decades, RoboCup Challenge presents three specific challenges for the next two years: (i) learning of individual agents and teams; (ii) multi-agent team planning and plan-execution in service of teamwork; and (iii) opponent modeling. RoboCup Challenge provides a novel opportunity for machine learning, planning, and multi-agent researchers --- it not only supplies a concrete domain to evalute their techniques, but also challenges researchers to evolve these techniques to face key constraints fundamental to this domain: real-time, uncertainty, and teamwork. 1 Introduction RoboCup (The World Cup Robot Soccer) is an attempt to promote AI and robotics research by providing a common task, Soccer, for evaluation of various theories, algorithms, and agent architectur...\n\n2-Hop Neighbour:\nCMUnited-97: RoboCup-97 Small-Robot World Champion Team Robotic soccer is a challenging research domain which involves multiple agents that need to collaborate in an adversarial environment to achieve specificobjectives. In this paper, we describe CMUnited, the team of small robotic agents that we developed to enter the RoboCup-97 competition. We designed and built the robotic agents, devised the appropriate vision algorithm, and developed and implemented algorithms for strategic collaboration between the robots in an uncertain and dynamic environment. The robots can organize themselves in formations, hold specific roles, and pursue their goals. In game situations, they have demonstrated their collaborative behaviors on multiple occasions. We present an overview of the vision processing algorithm which successfully tracks multiple moving objects and predicts trajectories. The paper then focusses on the agent behaviors ranging from low-level individual behaviors to coordinated, strategic team behaviors. 
CMUnited won the RoboCup-97 small-robot competition at IJCAI-97 in Nagoya, Japan.

2-Hop Neighbour:
Controlling Cooperative Problem Solving in Industrial Multi-Agent Systems using Joint Intentions One reason why Distributed AI (DAI) technology has been deployed in relatively few real-size applications is that it lacks a clear and implementable model of cooperative problem solving which specifies how agents should operate and interact in complex, dynamic and unpredictable environments. As a consequence of the experience gained whilst building a number of DAI systems for industrial applications, a new principled model of cooperation has been developed. This model, called Joint Responsibility, has the notion of joint intentions at its core. It specifies pre-conditions which must be attained before collaboration can commence and prescribes how individuals should behave both when joint activity is progressing satisfactorily and also when it runs into difficulty. The theoretical model has been used to guide the implementation of a general-purpose cooperation framework and the qualitative and quantitative benefits of this implementation have been assessed through a series of comparativ...

2-Hop Neighbour:
The CMUnited-97 Small Robot Team Abstract. Robotic soccer is a challenging research domain which involves multiple agents that need to collaborate in an adversarial environment to achieve specific objectives. In this paper, we describe CMUnited, the team of small robotic agents that we developed to enter the RoboCup-97 competition. We designed and built the robotic agents, devised the appropriate vision algorithm, and developed and implemented algorithms for strategic collaboration between the robots in an uncertain and dynamic environment. The robots can organize themselves in formations, hold specific roles, and pursue their goals. In game situations, they have demonstrated their collaborative behaviors on multiple occasions. The robots can also switch roles to maximize the overall performance of the team. We present an overview of the vision processing algorithm which successfully tracks multiple moving objects and predicts trajectories. The paper then focusses on the agent behaviors ranging from low-level individual behaviors to coordinated, strategic team behaviors. CMUnited won the RoboCup-97 small-robot competition at IJCAI-97 in Nagoya, Japan. 1

2-Hop Neighbour:
Cooperation Mechanisms in a Multi-Agent Distributed Environment In the paper we present our work on design and analysis of agent cooperation in distributed systems. The work is not completed yet, so that some parts of it, especially the formal framework, should be viewed as a preliminary version. Multi-agent systems are represented by BDI-automata, i.e., asynchronous automata composed of non-deterministic agents equipped with mental attitudes like belief, desire, and intentions. These attitudes are acquired by the agents by executing so called mental actions. Behaviours of multi-agent systems are represented by prime event structures. The prime event structure when augmented with utility functions defined on the terminal nodes of the structure may be viewed as games in extensive form defined on local states rather than on global states as in the classical definition. The definition of knowledge, in our framework, captures the change of state due to action executions. A notion of local knowledge-based protocols is defined.
A game theory method of backwards induction is applied in order to obtain a very natural definition of rationality in agent's behaviours. All the notions are exemplified using the running example. For this example we construct several cooperation mechanisms for the agents. A team formation mechanism is one of them.\n\n2-Hop Neighbour:\nGenetic Programming and Multi-Agent Layered Learning by Reinforcements We present an adaptation of the standard genetic program (GP) to hierarchically decomposable, multi-agent learning problems. To break down a problem that requires cooperation of multiple agents, we use the team objective function to derive a simpler, intermediate objective function for pairs of cooperating agents. We apply GP to optimize first for the intermediate, then for the team objective function, using the final population from the earlier GP as the initial seed population for the next. This layered learning approach facilitates the discovery of primitive behaviors that can be reused and adapted towards complex objectives based on a shared team goal.", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'A Survey of Factory Control Algorithms which Can be Implemented in a Multi-Agent Heterarchy: Dispatching, Scheduling, and Pull This paper has not seriously addressed the question of what new algorithms are inspired by the multi-agent heterarchical paradigm, except in two cases. The Market-Driven Contract Net uses a new form of forward / backward scheduling called `cost-based forward / backwards continuum scheduling' which was inspired by the common agent concept of bidding. Also, Duffie's current work on developing agent-based deterministic simulation would be a new form of deterministic simulation as it attempts to automate the human interaction normally required for successful implementation of such systems. But in general we still have an open question of what new algorithms heterarchical agent architectures imply and what would be the performance of these new algorithms. Because a multi-agent heterarchy is a distributed computing paradigm, it would be worthwhile to investigate the communications overhead of these algorithms. Such evaluations are not generally reported. Such research would assure that not only can these algorithms be implemented in a multi-agent heterarchy, but that their implementation does not require excessive communications overhead. 6. REFERENCES [1] H. Hayashi, \"The IMS International Collaborative Program,\" in 24th ISIR, 1993, Japan Industrial Robot Association [2] National Center for Manufacturing Sciences, FOCUS: Exceeding Partner Expectations. Ann Arbor, MI: September, 1994. [3] S. Goldman and K. Preiss, Ed., 21st Century Manufacturing Enterprise Strategy: An Industry-Led View. Bethlehem, PA: Iacocca Institute, Lehigh University, 1991. [4] S. L. Goldman, R. N. Nagel and K. Preiss, Agile Competitors and Virtual Organizations: Strategies for Enriching the Customer. New York, NY: Van Nostrand Reinhold, 1995. [5] P. T. Kidd, Agile Manufacturing: Forging New Frontiers. W...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nAgent-based Distributed Planning and Scheduling in Global Manufacturing Scheduling and resource allocation problems are pervasive and important in the management of industrial and government organizations. With advent of new technology and fast evolvement in industry, the enterprise is gradually moving toward global manufacturing for efficient operational management and competent strategic decision making. In the past two decades, researchers and practitioners have been applying various techniques (such as artificial intelligence, optimization methodologies, information system design, human factors, etc.) to design and develop planning and scheduling methodologies and systems for different applications. However, the inherent complexity of problems, short life-cycle of planner/scheduler, unrealistic investment of generous-purpose systems, and profligacy of scattering computing resource, accessibility, integration and re-configurability become the essential factors for new planners/schedulers in global manufacturing. The specific objectives of this research ...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Genetic Programming and Multi-Agent Layered Learning by Reinforcements We present an adaptation of the standard genetic program (GP) to hierarchically decomposable, multi-agent learning problems. To break down a problem that requires cooperation of multiple agents, we use the team objective function to derive a simpler, intermediate objective function for pairs of cooperating agents. We apply GP to optimize first for the intermediate, then for the team objective function, using the final population from the earlier GP as the initial seed population for the next. This layered learning approach facilitates the discovery of primitive behaviors that can be reused and adapted towards complex objectives based on a shared team goal.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nLayered Learning This paper presents layered learning, a hierarchical machine learning paradigm. Layered learning applies to tasks for which learning a direct mapping from inputs to outputs is intractable with existing learning algorithms. Given a hierarchical task decomposition into subtasks, layered learning seamlessly integrates separate learning at each subtask layer. The learning of each subtask directly facilitates the learning of the next higher subtask layer by determining at least one of three of its components: (i) the set of training examples; (ii) the input representation; and/or (iii) the output representation. We introduce layered learning in its domainindependent general form. We then present a full implementation in a complex domain, namely simulated robotic soccer. 1. Introduction Machine learning (ML) algorithms select a hypothesis from a hypothesis space based on a set of training examples such that the chosen hypothesis is predicted to characterize unseen examples...\n\n1-Hop Neighbour:\nLayered Learning in Genetic Programming for a Cooperative Robot Soccer Problem We present an alternative to standard genetic programming (GP) that applies layered learning techniques to decompose a problem. GP is applied to subproblems sequentially, where the population in the last generation of a subproblem is used as the initial generation of the next subproblem. This method is used to evolve agents to play keep-away soccer, a subproblem of robotic soccer that requires cooperation among multiple agents in a dynamic environment. The layered learning paradigm allows GP to evolve better solutions faster than standard GP. Results show that the layered learning GP outperforms standard GP by evolving a lower fitness faster and an overall better fitness. Results indicate a wide area of future research with layered learning in GP.\n\n1-Hop Neighbour:\nA Layered Approach to Learning Client Behaviors in the RoboCup Soccer Server In the past few years, Multiagent Systems (MAS) has emerged as an active subfield of Artificial Intelligence (AI). Because of the inherent complexity of MAS, there is much interest in using Machine Learning (ML) techniques to help build multiagent systems. Robotic soccer is a particularly good domain for studying MAS and Multiagent Learning. Our approach to using ML as a tool for building Soccer Server clients involves layering increasingly complex learned behaviors. In this article, we describe two levels of learned behaviors. 
First, the clients learn a low-level individual skill that allows them to control the ball effectively. Then, using this learned skill, they learn a higher-level skill that involves multiple players. For both skills, we describe the learning method in detail and report on our extensive empirical testing. We also verify empirically that the learned skills are applicable to game situations. 1 Introduction In the past few years, Multiagent Systems (MAS) has emerge...\n\n2-Hop Neighbour:\nRapid Concurrent Software Engineering in Competitive Situations This article is an experience report on the evolutionary development process of AT Humboldt, a multi agent system which has become World Champion 1997 and Vice World Champion 1998 of RoboCup simulator league. It details why the artifical soccer initiative RoboCup is a tempting domain for rapid concurrent software engineering. Both the development processes in 1997 and 1998 are described, compared and evaluated. Lessons learned for development projects in distributed control conclude this report. 1 Introduction In this article the project managers describe the evolutionary development process of the software project AT (AgentTeam) Humboldt, which has become World Champion 1997 and Vice World Champion 1998 in the simulator league of the artifical soccer contest RoboCup ([10]). The RoboCup initiative recently gets more and more popular among scientists in robotics, distributed systems and distributed artificial intelligence because of its strong competitive character and tight resource b...\n\n2-Hop Neighbour:\nCMUnited-97: RoboCup-97 Small-Robot World Champion Team Robotic soccer is a challenging research domain which involves multiple agents that need to collaborate in an adversarial environment to achieve specificobjectives. In this paper, we describe CMUnited, the team of small robotic agents that we developed to enter the RoboCup-97 competition. We designed and built the robotic agents, devised the appropriate vision algorithm, and developed and implemented algorithms for strategic collaboration between the robots in an uncertain and dynamic environment. The robots can organize themselves in formations, hold specific roles, and pursue their goals. In game situations, they have demonstrated their collaborative behaviors on multiple occasions. We present an overview of the vision processing algorithm which successfully tracks multiple moving objects and predicts trajectories. The paper then focusses on the agent behaviors ranging from low-level individual behaviors to coordinated, strategic team behaviors. CMUnited won the RoboCup-97 small-robot competition at IJCAI-97 in Nagoya, Japan.\n\n2-Hop Neighbour:\nThe CMUnited-97 Robotic Soccer Team: Perception and Multiagent Control Robotic soccer is a challenging research domain which involves multiple agents that need to collaborate in an adversarial environment to achieve specificobjectives. In this paper, we describe CMUnited, the team of small robotic agents that we developed to enter the RoboCup-97 competition. We designed and built the robotic agents, devised the appropriate vision algorithm, and developed and implemented algorithms for strategic collaboration between the robots in an uncertain and dynamic environment. The robots can organize themselves in formations, hold specificroles, and pursue their goals. In game situations, they have demonstrated their collaborative behaviors on multiple occasions. The robots can also switch roles to maximize the overall performance of the team. 
We present an overview of the vision processing algorithm which successfully tracks multiple moving objects and predicts trajectories. The paper then focusses on the agent behaviors ranging from low-level individual behaviors to coordinated, strategic team behaviors.", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Task Models, Intentions, and Agent Conversation Policies . It is possible to define conversation policies, such as communication or dialogue protocols, that are based strictly on what messages and, respectively, what performatives may follow each other. While such an approach has many practical applications, such protocols support only \"local coherence\" in a conversation. Lengthy message exchanges require some infrastructure to lend them \"global coherence.\" Recognition of agent intentions about the joint task is essential for this global coherence, but there are further mechanisms needed to ensure that both local and global coherence are jointly maintained. This paper presents a general yet practical approach to designing, managing, and engineering agents that can do simple run-time intention recognition without creating complex multi-state protocols. In this approach we promote developing abstract task models and designing conversation policies in terms of such models. An implemented agent assistant based on these ideas is brie...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nBDI Agents: from Theory to Practice The study of computational agents capable of rational behaviour has received a great deal of attention in recent years. Theoretical formalizations of such agents and their implementations have proceeded in parallel with little or no connection between them. This paper explores a particular type of rational agent, a BeliefDesire -Intention (BDI) agent. The primary aim of this paper is to integrate (a) the theoretical foundations of BDI agents from both a quantitative decision-theoretic perspective and a symbolic reasoning perspective; (b) the implementations of BDI agents from an ideal theoretical perspective and a more practical perspective; and (c) the building of large-scale applications based on BDI agents. In particular, an air-traffic management application will be described from both a theoretical and an implementation perspective. Introduction The design of systems that are required to perform high-level management and control tasks in complex dynamic environments is becoming ...\n\n2-Hop Neighbour:\nMulti-User and Security Support for Multi-Agent Systems This paper discusses the requirements an agent system needs to be secure. In particular, the paper introduces a classification of modern distributed systems, and examines the delegation concept from a security point of view. After discussing the peculiar security and delegation issues present in distributed object systems, mobile agent systems and in multi agent systems, a case study is presented, describing the multi-user and security support that is being built into the JADE platform.\n\n2-Hop Neighbour:\nTowards socially sophisticated BDI agents We present an approach to social reasoning that integrates prior work on norms and obligations with the BDI approach to agent architectures. Norms and obligations can be used to increase the eficiency of agent reasoning, and their explicit representation supports reasoning about a wide range of behaviour types in a single framework. 
We propose a modified BDI interpreter loop that takes norms and obligations into account in an agent's deliberation.\n\n2-Hop Neighbour:\nTowards Active Logic Programming In this paper we present the new logic programming language DALI, aimed at defining agents and agent systems. A main design objective for DALI has been that of introducing in a declarative fashion all the essential features, while keeping the language as close as possible to the syntax and semantics of the plain Horn--clause language. Special atoms and rules have been introduced, for representing: external events, to which the agent is able to respond (reactivity); actions (reactivity and proactivity); internal events (previous conclusions which can trigger further activity); past and present events (to be aware of what has happened). An extended resolution is provided, so that a DALI agent is able to answer queries like in the plain Horn--clause language, but is also able to cope with the different kinds of events, and exhibit a (rational) reactive and proactive behaviour. 1 Introduction In this paper we address the issue of defining a logic programming language for reac...\n\n2-Hop Neighbour:\nA Logic of BDI Agents with Procedural Knowledge In this paper, we present a new logic for specifying the behaviour of multi-agent systems. In this logic, agents are viewed as BDI systems, in that their state is characterised in terms of beliefs, desires, and intentions: the semantics of the BDI component of the logic are based on the wellknown system of Rao and Georgeff. In addition, agents have available to them a library of plans, representing their `know-how': procedural knowledge about how to achieve their intentions. These plans are, in effect, programs, that specify how a group of agents can work in parallel to achieve certain ends. The logic provides a rich set of constructs for describing the structure and execution of plans. Some properties of the logic are investigated, (in particular, those relating to plans), and some comments on future work are presented. 1 Introduction There is currently much international interest in computer systems that go under the banner of intelligent agents [17]. Crudely, an intelligent agent i...\n\n2-Hop Neighbour:\nUpdating Mental States from Communication . In order to perform effective communication agents must be able to foresee the effects of their utterances on the addressee's mental state. In this paper we investigate on the update of the mental state of an hearer agent as a consequence of the utterance performed by a speaker agent. Given an agent communication language with a STRIPSlike semantics, we propose a set of criteria that allow to bind the speaker's mental state to its uttering of a certain sentence. On the basis of these criteria, we give an abductive procedure that the hearer can adopt to partially recognize the speaker's mental state that led to a specific utterance. This procedure can be adopted by the hearer to update its own mental state and its image of the speaker's mental state. 1 Introduction In multi-agent systems, communication is necessary for the agents to cooperate and coordinate their activities or simply to avoid interfering with one another. If agents are not designed with embedded pre-compiled...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Feature Selection in Web Applications Using ROC Inflections and Power Set Pruning A basic problem of information processing is selecting enough features to ensure that events are accurately represented for classification problems, while simultaneously minimizing storage and processing of irrelevant or marginally important features. To address this problem, feature selection procedures perform a search through the feature power set to find the smallest subset meeting performance requirements. Major restrictions of existing procedures are that they typically explicitly or implicitly assume a fixed operating point, and make limited use of the statistical structure of the feature power set. We present a method that combines the Neyman-Pearson design procedure on finite data, with the directed set structure of the Receiver Operating Curves on the feature subsets, to determine the maximal size of the feature subsets that can be ranked in a given problem. The search can then be restricted to the smaller subsets, resulting in significant reductions in computational...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nContext in Web Search Web search engines generally treat search requests in isolation. The results for a given query are identical, independent of the user, or the context in which the user made the request. Nextgeneration search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts. Greater use of context in web search may help increase competition and diversity on the web.\n\n1-Hop Neighbour:\nUsing Web Structure for Classifying and Describing Web Pages The structure of the web is increasingly being used to improve organization, search, and analysis of information on the web. For example, Google uses the text in citing documents (documents that link to the target document) for search. We analyze the relative utility of document text, and the text in citing documents near the citation, for classification and description. Results show that the text in citing documents, when available, often has greater discriminative and descriptive power than the text in the target document itself. The combination of evidence from a document and citing documents can improve on either information source alone. Moreover, by ranking words and phrases in the citing documents according to expected entropy loss, we are able to accurately name clusters of web pages, even with very few positive examples. Our results confirm, quantify, and extend previous research using web structure in these areas, introducing new methods for classification and description of pages.\n\n2-Hop Neighbour:\nAutomated Text Categorization Using Support Vector Machine In this paper, we study the use of support vector machine in text categorization. Unlike other machine learning techniques, it allows easy incorporation of new documents into an existing trained system. Moreover, dimension reduction, which is usually imperative, now becomes optional. Thus, SVM adapts efficiently in dynamic environments that require frequent additions to the document collection. Empirical results on the Reuters-22173 collection are also discussed. 1. 
Introduction The increasingly widespread use of information services made possible by the Internet and World Wide Web (WWW) has led to the so-called information overloading problem. Today, millions of online documents on every topic are easily accessible via the Internet. As the available information increases, the inability of people to assimilate and profitably utilize such large amounts of information becomes more and more apparent. Developing user-friendly, automatic tools for efficient as well as effective retrieval ...\n\n2-Hop Neighbour:\nWrapper Induction: Efficiency and Expressiveness (Extended Abstract) Recently, many systems have been built that automatically interact with Internet information resources. However, these resources are usually formatted for use by people; e.g., the relevant content is embedded in HTML pages. Wrappers are often used to extract a resource's content, but hand-coding wrappers is tedious and error-prone. We advocate wrapper induction, a technique for automatically constructing wrappers. We have identified several wrapper classes that can be learned quickly (most sites require only a handful of examples, consuming a few CPU seconds of processing), yet which are useful for handling numerous Internet resources (70% of surveyed sites can be handled by our techniques). Introduction The Internet presents a stunning variety of on-line information resources: telephone directories, retail product catalogs, weather forecasts, and many more. Recently, there has been much interest in systems (such as software agents (Etzioni & Weld 1994; Kwok & Weld 1996) or informati...\n\n2-Hop Neighbour:\nDigital Libraries and Autonomous Citation Indexing The World Wide Web is revolutionizing the way that researchers access scientific information. Articles are increasingly being made available on the homepages of authors or institutions, at journal Web sites, or in online archives. However, scientific information on the Web is largely disorganized. This article introduces the creation of digital libraries incorporating Autonomous Citation Indexing (ACI). ACI autonomously creates citation indices similar to the Science Citation Index R . An ACI system autonomously locates articles, extracts citations, identifies identical citations that occur in different formats, and identifies the context of citations in the body of articles. ACI can organize the literature and provide most of the advantages of traditional citation indices, such as literature search using citation links, and the evaluation of articles based on citation statistics. Furthermore, ACI can provide significant advantages over traditional citation indices. No manual effort is required for indexing, which should result in a reduction in cost and an increase in the availability of citation indices. An ACI system can also provide more comprehensive and up-to-date indices of the literature by indexing articles on the Web, technical reports, conference papers, etc. Furthermore, ACI makes it easy to browse the context of citations to given articles, allowing researchers to quickly and easily see what subsequent researchers have said about a given article. digital libraries incorporating ACI may significantly improve scientific dissemination and feedback.\n\n2-Hop Neighbour:\nDiscoWeb: Applying Link Analysis to Web Search How often does the search engine of your choice produce results that are less than satisfying, generating endless links to irrelevant pages even though those pages may contain the query keywords? 
How often are you given pages that tell you things you already know? While the search engines and related tools continue to make improvements in their information retrieval algorithms, for the most part they continue to ignore an essential part of the web – the links. We have found that link analysis can have significant contributions to web page retrieval from search engines, to web community discovery, and to the measurement of web page influence. It can help to rank results and find high-quality index/hub/link pages that contain links to the best sites on the topic of interest. Our work is based on research from IBM’s CLEVER project [7, 4, 6], Stanford’s Google [3], and the Web Archaeology research [2, 1] at Compaq’s Systems Research Center. These research teams have demonstrated some of the contributions that link analysis can make in the web. In our work, we have attempted to generalize and improve upon these approaches. Just as in citation analysis of published works, the most influential documents on the web will have many other documents recommending (pointing to) them. This idea underlies all link analysis efforts, from the straightforward technique of counting the number of incoming edges to a page, to the deeper eigenvector analysis used in our work and in those projects mentioned above. It turns out that the identification of “high-quality ” web pages reduces to a sparse eigenvalue of the adjacency matrix\n\n2-Hop Neighbour:\nAutomatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. We also provide examples of ARC resource lists for the reader to examine.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Semantics-Based Information Retrieval : In this paper we investigate the use of conceptual descriptions based on description logics for contentbased information retrieval and present several innovative contributions. We provide a query-byexamples retrieval framework which avoids the drawback of a sophisticated query language. We extend an existing DL to deal with spatial concepts. We provide a content-based similarity measure based on the least common subsumer which extracts conceptual similarities of examples. 1 Introduction As more and more information of various kinds becomes available for an increasing number of users, one major challenge for Computer Science is to provide e#cient access and retrieval mechanisms. This is not only true for Web-based information which by its nature tends to be highly unorganized and heterogeneous, but also for dedicated databases which are designed to provide a particular service. The guiding example of this paper is a \"TV-Assistant\" with a database containing TV-program information. Its...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nQUEST - Querying specialized collections on the Web Ensuring access to specialized web-collections in a fast evolving web environment requires flexible techniques for orientation and querying. The adoption of meta search techniques for web-collections is hindered by the enormous heterogeneity of the resources. In this paper we introduce QUEST --- a system for querying specialized collections on the web.\n\n1-Hop Neighbour:\nFoundations of Spatioterminological Reasoning with Description Logics This paper presents a method for reasoning about spatial objects and their qualitative spatial relationships. In contrast to existing work, which mainly focusses on reasoning about qualitative spatial relations alone, we integrate quantitative and qualitative information with terminological reasoning. For spatioterminological reasoning we present the description logic ALCRP(D) and define an appropriate concrete domain D for polygons. The theory is motivated as a basis for knowledge representation and query processing in the domain of deductive geographic information systems. 1 Introduction Qualitative relations play an important role in formal reasoning systems that can be part of, for instance, geographic information systems (GIS). In this context, inferences about spatial relations should not be considered in isolation but should be integrated with formal inferences about structural descriptions of domain objects (e.g. automatic consistency checking and classification) and infer...\n\n2-Hop Neighbour:\nQualitative Spatial Representation and Reasoning: An Overview . The paper is a overview of the major qualitative spatial representation and reasoning techniques. We survey the main aspects of the representation of qualitative knowledge including ontological aspects, topology, distance, orientation and shape. We also consider qualitative spatial reasoning including reasoning about spatial change. Finally there is a discussion of theoretical results and a glimpse of future work. The paper is a revised and condensed version of [33, 34]. Keywords: Qualitative Spatial Reasoning, Ontology. The text is in a slightly di erent format from the FI format. 
Cohn\n\n2-Hop Neighbour:\nEffective and Efficient Automatic Database Selection We examine a class of database selection algorithms that require only document frequency information. The CORI algorithm is an instance of this class of algorithms. In previous work, we showed that CORI is more effective than gGlOSS when evaluated against a relevance-based standard. In this paper, we introduce a family of other algorithms in this class and examine components of these algorithms and of the CORI algorithm to begin identifying the factors responsible for their performance. We establish that the class of algorithms studied here is more effective and efficient than gGlOSS and is applicable to a wider variety of operational environments. In particular, this methodology is completely decoupled from the database indexing technology so is as useful in heterogeneous environments as in homogeneous environments. 1 Introduction Database or collection selection [2, 6, 10, 8, 13, 14, 15, 9] is a fundamental problem in distributed searching. Given an environment containing many dat...\n\n2-Hop Neighbour:\nA Description Logic with Concrete Domains and a Role-forming Predicate Operator Description Logics (DLs) are a family of logic-based knowledge representation formalisms designed to represent and reason about conceptual knowledge. Due to a nice compromise between expressivity and the complexity of reasoning, DLs have found applications in many areas such as, e.g., modelling database schemas and the semantic web. However, description logics represent knowledge in an abstract way and lack the power to describe more concrete (quantitative) qualities like size, duration, or amounts. The standard solution is to equip DLs with concrete domains, e.g., natural numbers with predicates =, <, + or strings with a string concatenation predicate. Moreover, recently it has been suggested that the expressive power of DLs with concrete domains can be further enhanced by providing them with database-like key constraints. Key constraints can be a source of additional inconsistencies in database schemas, and DLs applied in reasoning about database schemas are thus wanted to be able to capture such constraints. Up to now, only the integration of uniqueness key constraints into DLs with concrete", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'CaseLP, A Rapid Prototyping Environment For Agent Based Software Intelligent agents and multi-agent systems are increasingly recognized as an innovative approach for analyzing, designing and implementing complex, heterogeneous and distributed software applications. The agent-based view offers a powerful and high level conceptualization that software engineers can exploit to considerably improve the way in which software is realized. Agent-based software engineering is a recent and very interesting research area. Due to its novelty, there is still no evidence of well-established practices for the development of agent-based applications and thus experimentation in this direction is very important. This dissertation' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nA Multi-Agent Approach to Vehicle Monitoring in Motorway . This paper describes CaseLP, a prototyping environment for MultiAgent Systems (MAS), and its adoption for the development of a distributed industrial application. CaseLP employs architecture definition, communication, logic and procedural languages to model a MAS from the top-level architecture down to procedural behavior of each agent's instance. The executable specification which is obtained can be employed as a rapid prototype which helps in taking quick decisions on the best possible implementation solutions. Such capabilities have been applied to a distributed application of Elsag company, in order to assess the best policies for data communication and database allocation before the concrete implementation. The application consists in remote traffic control and surveillance over service areas on an Italian motorway, employing automatic detection and car plate reading at monitored gates. CaseLP allowed to predict data communication performance statistics under differe...\n\n1-Hop Neighbour:\nHEMASL: A Flexible Language to Specify Heterogeneous Agents In the realization of agent-based applications the developer generally needs to use heterogeneous agent architectures, so that each application component can optimally perform its task. Languages that easily model the heterogeneity of agents' architectures are very useful in the early stages of the application development. This paper presents HEMASL, a simple meta-language used to specify heterogeneous agent architectures, and sketches how HEMASL should be implemented in an object-oriented commercial programming language as Java. Moreover, the paper briefly discusses the benefits of adding HEMASL to CaseLP, a LP-based specification and prototyping environment for multi-agent systems, in order to enhance its flexibility and usability. 1. Introduction Intelligent agents and multi-agent systems (MASs) are more and more recognized as the \"new\" modeling techniques to be used to engineer complex and distributed software applications [12]. Agent-based software engineering is concerned with ...\n\n1-Hop Neighbour:\nA Survey of Agent-Oriented Methodologies . This article introduces the current agent-oriented methodologies. It discusseswhat approacheshave been followed (mainly extending existing objectoriented and knowledge engineering methodologies), the suitability of these approaches for agent modelling, and some conclusions drawn from the survey. 
1 Introduction Agent technology has received a great deal of attention in the last few years and, as a result, the industry is beginning to get interested in using this technology to develop its own products. In spite of the different developed agent theories, languages, architectures and the successful agent-based applications, very little work for specifying (and applying) techniques to develop applications using agent technology has been done. The role of agent-oriented methodologies is to assist in all the phases of the life cycle of an agent-based application, including its management. This article reviews the current approaches to the development of an agent-oriented (AO) methodology. ...\n\n2-Hop Neighbour:\nAn Overview of the Multiagent Systems Engineering Methodology . To solve complex problems, agents work cooperatively with other agents in heterogeneous environments. We are interested in coordinating the local behavior of individual agents to provide an appropriate system-level behavior. The use of intelligent agents provides an even greater amount of flexibility to the ability and configuration of the system itself. With these new intricacies, software development is becoming increasingly difficult. Therefore, it is critical that our processes for building the inherently complex distributed software that must run in this environment be adequate for the task. This paper introduces a methodology for designing these systems of interacting agents. 1.\n\n2-Hop Neighbour:\nMultiagent Systems Engineering: A Methodology And Language for Designing Agent Systems This paper overviews MaSE and provides a high-level introduction to one critical component used within MaSE, the Agent Modeling Language. Details on the Agent Definition Language and detailed agent design are left for a future paper.\n\n2-Hop Neighbour:\nArchitecture-Centric Object-Oriented Design Method for Multi-Agent Systems This paper introduces an architecture-centric object-oriented design method for MAS (Multi-Agent Systems) using the extended UML (Unified Modeling Language). The UML extension is based on design principles that are derived from characteristics of MAS and concept of software architecture which helps to design reusable and wellstructured multi-agent architecture. The extension allows one to use original objectoriented method without syntactic or semantic changes which implies the preservation of OO productivity, i.e., the availability of developers and tools, the utilization of past experiences and knowledge, and the seamless integration with other systems. Keywords: multi-agent systems, architecture, object-oriented development methods 1. Introduction Software agents provide a new way of analyzing, designing, and implementing complex software systems. Currently, agent technology is used in wide variety of applications with range from comparatively small systems such as\n\n2-Hop Neighbour:\nAnalysis and Design using MaSE and agentTool This paper provides an overview of the work being done at the Air Force Institute of Technology on the Multiagent Systems Engineering methodology and the associated agentTool environment. Our research is focused on discovering methods and techniques for engineering practical multiagent systems. It uses the abstraction provided by multiagent systems for developing intelligent, distributed software systems.\n\n2-Hop Neighbour:\nSpecification of Heterogeneous Agent Architectures . 
Agent-based software applications need to incorporate agents having heterogeneous architectures in order for each agent to optimally perform its task. HEMASL is a simple meta-language used to specify intelligent agents and multi-agent systems when different and heterogeneous agent architectures must be used. HEMASL specifications are based on an agent model that abstracts several existing agent architectures. The paper describes some of the features of the language, presents examples of its use and outlines its operational semantics. We argue that adding HEMASL to CaseLP, a specification and prototyping environment for MAS, can enhance its flexibility and usability. 1 Introduction Intelligent agents and multi-agent systems (MAS) are increasingly being acknowledged as the \"new\" modelling techniques to be used to engineer complex and distributed software applications [17, 9]. Agent-based software development is concerned with the realization of software applications modelled ...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Methods for Sampling Pages Uniformly from the World Wide Web We present two new algorithms for generating uniformly random samples of pages from the World Wide Web, building upon recent work by Henzinger et al. (Henzinger et al. 2000) and Bar-Yossef et al. (Bar-Yossef et al. 2000). Both algorithms are based on a weighted random-walk methodology. The first algorithm (DIRECTED-SAMPLE) operates on arbitrary directed graphs, and so is naturally applicable to the web. We show that, in the limit, this algorithm generates samples that are uniformly random. The second algorithm (UNDIRECTED-SAMPLE) operates on undirected graphs, thus requiring a mechanism for obtaining inbound links to web pages (e.g., access to a search engine). With this additional knowledge of inbound links, the algorithm can arrive at a uniform distribution faster than DIRECTEDSAMPLE, and we derive explicit bounds on the time to convergence. In addition, we evaluate the two algorithms on simulated web data, showing that both yield reliably uniform samples of pages. We also compare our results with those of previous algorithms, and discuss the theoretical relationships among the various proposed methods.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nSearching the world wide web The coverage and recency of the major World Wide Web search engines was analyzed, yielding some surprising results. 
The coverage of any one engine is significantly limited: No single engine indexes more than about one-third of the “indexable Web, ” the coverage of the six engines investigated varies by an order of magnitude, and combining the results of the six engines yields about 3.5 times as many documents on average as compared with the results from only one engine. Analysis of the overlap between pairs of engines gives an estimated lower bound on the size of the indexable Web of 320 million pages. The Internet has grown rapidly since its inception in December 1969 (1) and is anticipated to expand 1000 % over the next few years (2). The amount of scientific information and the number of electronic journals on the Internet continue to increase [about 1000 journals as of 1996 (2, 3)]. The Internet and the World Wide Web (the Web) represent significant advancements for the retrieval and dissemination of scientific and other literature and for the advancement of education (2, 4). With the introduction of full-text search engines such as AltaVista (www.\n\n2-Hop Neighbour:\nCreating a Semantic web Interface with Virtual Reality Novel initiatives amongst the Internet community such as Internet2 [1] and Qbone [2] are based on the use of high bandwidth and powerful computers. However the experience amongst the majority of Internet users is light-years from these emerging technologies. We describe the construction of a distributed high performance search engine, utilizing advanced threading techniques on a diskless Linux cluster. The resulting Virtual Reality scene is passed to a standard client machine for viewing. This search engine bridges the gap between the Internet of today, and the Internet of the future. Keywords: Internet Searching, High Performance VRML, Visualization. 1.\n\n2-Hop Neighbour:\nTopical Locality in the Web: Experiments and Observations Most web pages are linked to others with related content. This idea, combined with another that says that text in, and possibly around, HTML anchors describe the pages to which they point, is the foundation for a usable World-Wide Web. In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content to be high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and that anchor text may be a useful discriminator among unseen child pages. These results present the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents. 1 Introduction Most web pages are linked to others with related content...\n\n2-Hop Neighbour:\nToward Learning Based Web Query Processing In this paper, we describe a novel Web query processing approach with learning capabilities. Under this approach, user queries are in the form of keywords and search engines are employed to find URLs of Web sites that might contain the required information. The first few URLs are presented to the user for browsing. Meanwhile, the query processor learns both the information required by the user and the way that the user navigates through hyperlinks to locate such information. 
With the learned knowledge, it processes the rest URLs and produces precise query results in the form of segments of Web pages without user involvement. The preliminary experimental results indicate that the approach can process a range of Web queries with satisfactory performance. The architecture of such a query processor, techniques of modeling HTML pages, and knowledge for query processing are discussed. Experiments on the effectiveness of the approach, the required knowledge, and the training strategies are presented.\n\n2-Hop Neighbour:\nInCommonSense - Rethinking Web Search Results The World Wide Web is a rich annotation system which allows people to relate to documents and sites from different perspectives. People describe, comment, relate or mock other Web pages in the context of their document. This richness is currently not reflected in snippets presented by Web search engines, where a search result is represented by the text found in the Web document alone. This paper proposes a new method for representing documents in Web search engines' results. This method is based on recent trends in search engine technology and provides descriptions of the retrieved documents, assembled from people's commentary and annotations on the Web. This paper suggests a new way for automatically retrieving and reusing people's annotations on the Web, incorporating these annotations into a search engine for creating a hybrid directory-search-engine, allowing for both automatic retrieval and on-the-fly human authored summaries. I. INTRODUCTION It is common knowledge that many use...\n\n2-Hop Neighbour:\nAn Approach to Relate the Web Communities Through Bipartite Graphs The Web harbors a large number of community structures. Early detection of community structures has many purposes such as reliable searching and selective advertising. In this paper we investigate the problem of extracting and relating the web community structures from a large collection of Web-pages by performing hyper-link analysis. The proposed algorithm extracts the potential community signatures by extracting the corresponding dense bipartite graph (DBG) structures from the given data set of web pages. Further, the proposed algorithm can also be used to relate the extracted community signatures. We report the experimental results conducted on 10 GB TREC (Text REtrieval Conference) data collection that contains 1.7 million pages and 21.5 million links. The results demonstrate that the proposed approach extracts meaningful community signatures and relates them.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'An Overview of World Wide Web Search Technologies With over 800 million pages covering most areas of human endeavor, the World Wide Web is fertile ground for information retrieval. Numerous search technologies have been applied to Web searches, and the dominant search method has yet to be identified. This chapter provides an overview of existing Web search technologies and classifies them into six categories: (i) hyperlink exploration, (ii) information retrieval, (iii) metasearches, (iv) SQL approaches, (v) content-based multimedia searches, and (vi) others. A comparative study of some major commercial and experimental search services is presented, and some future research directions for Web searches are suggested. Keywords: Survey, World Wide Web, Searches, Search Engines, and Information Retrieval. 1.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nContext and Page Analysis for Improved Web Search NEC Research Institute has developed a metasearch engine that improves the efficiency of Web searches by downloading and analyzing each document and then displaying results that show the query terms in context. Several popular and useful search engines such as AltaVista, Excite, HotBot, Infoseek, Lycos, and Northern Light attempt to maintain full-text indexes of the World Wide Web. However, relying on a single standard search engine has limitations. The standard search engines have limited coverage, 1,2 outdated databases, and are sometimes unavailable due to problems with the network or the engine itself. The precision of standard engine results can also vary because they generally focus on handling queries quickly and use relatively simple ranking schemes. 3 Rankings can be further muddled by keyword spamming to increase a page's rank order. Often, the relevance of a particular page is obvious only after loading it and finding the query terms. Metasearch engines, such as MetaCrawler and SavvySearch, attempt to contend with the problem of limited coverage by submitting queries to several standard search engines at once. 4,5 The primary advantages of metasearch engines are that they combine the results of several search engines and present a consistent user interface. 5 However, most metasearch engines rely on the documents and summaries returned by standard search engines and so inherit their limited precision and vulnerability to keyword spamming. We developed the NEC Research Institute (NECI) metasearch engine [now called Inquirus] to improve the efficiency and precision of Web search by downloading and analyzing each document and then displaying results that show the query terms in\n\n2-Hop Neighbour:\nLearning Search Engine Specific Query Transformations for Question Answering We introduce a method for learning query transformations that improves the ability to retrieve answers to questions from an information retrieval system. During the training stage the method involves automatically learning phrase features for classifying questions into different types, automatically generating candidate query transformations from a training set of question/answer pairs, and automatically evaluating the candidate transforms on target information retrieval systems such as real-world general purpose search engines. 
At run time, questions are transformed into a set of queries, and re-ranking is performed on the documents retrieved. We present a prototype search engine, Tritus, that applies the method to web search engines. Blind evaluation on a set of real queries from a web search engine log shows that the method significantly outperforms the underlying web search engines as well as a commercial search engine specializing in question answering. Keywords Web search, quer...\n\n2-Hop Neighbour:\nWeb Search -- Your Way We describe a metasearch engine architecture, in use at NEC Research Institute, that allows users to provide preferences in the form of an information need category. This extra information is used to direct the search process, providing more valuable results than by considering only the query. Using our architecture, identical keyword queries may be sent to different search engines, and results may be scored differently for different users.\n\n2-Hop Neighbour:\nMySpiders : Evolve your own intelligent Web crawlers Abstract. The dynamic nature of the World Wide Web makes it a challenge to find information that is both relevant and recent. Intelligent agents can complement the power of search engines to meet this challenge. We present a Web tool called MySpiders, which implements an evolutionary algorithm managing a population of adaptive crawlers who browse the Web autonomously. Each agent acts as an intelligent client on behalf of the user, driven by a user query and by textual and linkage clues in the crawled pages. Agents autonomously decide which links to follow, which clues to internalize, when to spawn offspring to focus the search near a relevant source, and when to starve. The tool is available to the public as a threaded Java applet. We discuss the development and deployment of such a system. Keywords: web informational retrieval, topic-driven crawlers, online search, InfoSpiders, MySpiders, applet\n\n2-Hop Neighbour:\nGenerating a Topically Focused VirtualReality Internet Surveys highlight that Internet users are frequently frustrated by failing to locate useful information, and by difficulty in browsing anarchically linked web-structures. We present a new Internet browsing application (called VR-net) that addresses these problems. It first identifies semantic domains consisting of tightly interconnected web-page groupings. The second part populates a 3D virtual world with these information sources, representing all relevant pages plus appropriate structural relations. Users can then easily browse through around a semantically focused virtual library. 1 Introduction The Internet is probably the most significant global information resource ever created, allowing access to an almost unlimited amount of information. In this paper we describe two inter-related difficulties suffered by Internet users, and their combined influence on web use. We then introduce an integrated \"search and browse\" solution tool that directly tackles both issues. We also examin...\n\n2-Hop Neighbour:\nContext in Web Search Web search engines generally treat search requests in isolation. The results for a given query are identical, independent of the user, or the context in which the user made the request. Next-generation search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts. 
Greater use of context in web search may help increase competition and diversity on the web.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Multi-Agent Reinforcement Learning: Weighting and Partitioning This paper addresses weighting and partitioning in complex reinforcement learning tasks, with the aim of facilitating learning. The paper presents some ideas regarding weighting of multiple agents and extends them into partitioning an input/state space into multiple regions with differential weighting in these regions, to exploit differential characteristics of regions and differential characteristics of agents to reduce the learning complexity of agents (and their function approximators) and thus to facilitate the learning overall. It analyzes, in reinforcement learning tasks, different ways of partitioning a task and using agents selectively based on partitioning. Based on the analysis, some heuristic methods are described and experimentally tested. We find that some off-line heuristic methods performed the best, significantly better than single-agent models. Keywords: weighting, averaging, neural networks, partitioning, gating, reinforcement learning, 1 Introduction Multiple ag...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nSome Experiments with a Hybrid Model for Learning Sequential Decision Making To deal with sequential decision tasks, we present a learning model Clarion, which is a hybrid connectionist model consisting of both localist and distributed representations, based on the two-level approach proposed in Sun (1995). The model learns and utilizes procedural and declarative knowledge, tapping into the synergy of the two types of processes. It unifies neural, reinforcement, and symbolic methods to perform on-line, bottom-up learning. Experiments in various situations are reported that shed light on the working of the model. 1 Introduction This paper presents a hybrid model that unifies neural, symbolic, and reinforcement learning into an integrated architecture. It addresses the following three issues: (1) It deals with concurrent on-line learning: It allows a situated agent to learn continuously from on-going experience in the world, without the use of preconstructed data sets or preconceived concepts. (2) The model learns not only low-level procedural skills but also hi...\n\n2-Hop Neighbour:\nFrom Implicit Skills to Explicit Knowledge: A Bottom-Up Model of Skill Learning This paper presents a skill learning model CLARION. Different from existing models of mostly high-level skill learning that use a top-down approach (that is, turning declarative knowledge into procedural knowledge through practice), we adopt a bottom-up approach toward low-level skill learning, where procedural knowledge develops first and declarative knowledge develops later. Our model is formed by integrating connectionist, reinforcement, and symbolic learning methods to perform on-line reactive learning. It adopts a two-level dual-representation framework (Sun, 1995), with a combination of localist and distributed representation. We compare the model with human data in a minefield navigation task, demonstrating some match between the model and human data in several respects.\n\n2-Hop Neighbour:\nSymbol Grounding: A New Look At An Old Idea Symbols should be grounded, as has been argued before. But we insist that they should be grounded not only in subsymbolic activities, but also in the interaction between the agent and the world. 
The point is that concepts are not formed in isolation (from the world), in abstraction, or \"objectively\". They are formed in relation to the experience of agents, through their perceptual/motor apparatuses, in their world and linked to their goals and actions. In this paper, we will take a detailed look at this relatively old issue, using a new perspective, aided by our new work of computational cognitive model development. To further our understanding, we also go back in time to link up with earlier philosophical theories related to this issue. The result is an account that extends from computational mechanisms to philosophical abstractions. 1 Symbol Grounding: A New Look At An Old Idea Symbols should be grounded, as has been argued before. But we insist that they should be ground...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Employing EM and Pool-Based Active Learning for Text Classification This paper shows how a text classifier's need for labeled training data can be reduced by a combination of active learning and Expectation Maximization (EM) on a pool of unlabeled data. Query-by-Committee is used to actively select documents for labeling, then EM with a naive Bayes model further improves classification accuracy by concurrently estimating probabilistic labels for the remaining unlabeled documents and using them to improve the model. We also present a metric for better measuring disagreement among committee members; it accounts for the strength of their disagreement and for the distribution of the documents. Experimental results show that our method of combining EM and active learning requires only half as many labeled training examples to achieve the same accuracy as either EM or active learning alone. Keywords: text classification active learning unsupervised learning information retrieval 1 Introduction In many settings for learning text classifiers, obtaining lab...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nLearning to Extract Symbolic Knowledge from the World Wide Web The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more e ective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach istodevelop a trainable information extraction system that takes two inputs. The rst is an ontology that de nes the classes (e.g., Company, Person, Employee, Product) and relations (e.g., Employed.By, Produced.By) ofinterest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This paper describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system that has created a knowledge base describing university people, courses, and research projects.\n\n1-Hop Neighbour:\nToward Optimal Active Learning through Sampling Estimation of Error Reduction This paper presents an active learning method that directly optimizes expected future error. This is in contrast to many other popular techniques that instead aim to reduce version space size. These other methods are popular because for many learning models, closed form calculation of the expected future error is intractable. Our approach is made feasible by taking a sampling approach to estimating the expected reduction in error due to the labeling of a query. In experimental results on two real-world data sets we reach high accuracy very quickly, sometimes with four times fewer labeled examples than competing methods. 
1.\n\n1-Hop Neighbour:\nAutomatically Labeling Web Pages Based on Normal User Actions For agents attempting to learn a user's interests, the cost of obtaining labeled training instances is prohibitive because the user must directly label each training instance, and few users are willing to do so. We present an approach that circumvents the need for human-labeled pages. Instead, we learn `surrogate' tasks where the desired output is easily measured, such as the number of hyperlinks clicked on a page or the amount of scrolling performed. Our assumption is that these outputs will highly correlate with the user's interests. In other words, by unobtrusively `observing' the user's behavior we are able to learn functions of value. For example, an agent could silently observe the user's browser behavior during the day, then use these training examples to learn such functions and gather, during the middle of the night, pages that are likely to be of interest to the user. Previous work has focused on learning a user profile by passively observing the hyperlinks clicked on and tho...\n\n2-Hop Neighbour:\nAn Integrated Ontology for the WWW . Knowledge-intensive processing of WWW information should be founded on clear and uniform conceptualisation. An integrated ontology covering different aspects of the WWW (documents, sites, network addressing, HTML code) has been laid down, upon which a knowledge base of the WWW domain is being built. This knowledge base should support \"intelligent\" metasearch of the Web, in particular, postprocessing of hit-lists returned by external search engines. 1 Introduction During the last few years, the World-Wide Web has become one of the most widespread technologies of information presentation. It is thus not surprising that many Knowledge Engineering (KE) projects focus on it: some use HTML as a cheap, ready-made user-interface, other thrive to mine valuable information hidden inside existing WWW pages. A necessary prerequisite of mutual comprehensibility and knowledge reuse among different KE communities and projects dealing with the Web is a clear and unified conceptualisation, wh...\n\n2-Hop Neighbour:\nSupport Vector Machine Active Learning with Applications to Text Classification . Support vector machines have met with significant success in numerous real-world learning tasks. However, like most machine learning algorithms, they are generally applied using a randomly selected training set classified in advance. In many settings, we also have the option of using pool-based active learning. Instead of using a randomly selected training set, the learner has access to a pool of unlabeled instances and can request the labels for some number of them. We introduce a new algorithm for performing active learning with support vector machines, i.e., an algorithm for choosing which instances to request next. We provide a theoretical motivation for the algorithm using the notion of a version space. We present experimental results showing that employing our active learning method can significantly reduce the need for labeled training instances in both the standard inductive and transductive settings. Keywords: Active Learning, Selective Sampling, Support Vector Machines, Classification, Relevance Feedback Abbreviations: SVM -- Support Vector Machine; TSVM -- Transductive Support Vector Machine 1.\n\n2-Hop Neighbour:\nInducing Conceptual User Models Usually, performance is the primary objective in systems that make use of user modeling (Um) techniques. 
But since machine learning (Ml) in user modeling addresses several issues in the context of human computer interaction (Hci), the requirements on \"performance\" are manifold. Thus, machine learning for user modeling (Ml4Um) has to meet several demands in order to satisfy the aims of involved disciplines. In this article we describe the application of a rather unusual Ml method to Um, namely inductive logic programming (Ilp). Though not primarily associated with efficient learning methods, we motivate the use of Ilp by showing translucency of derived user models and the explanatory potential of such models during a user adapted filtering process. In course of the OySTER project our goal is to induce conceptual user models that allow for a transparent query refinement and information filtering in the domain of Www meta--search. 1 Introduction: Why Conceptual User Models?...\n\n2-Hop Neighbour:\nToward Learning Based Web Query Processing In this paper, we describe a novel Web query processing approach with learning capabilities. Under this approach, user queries are in the form of keywords and search engines are employed to find URLs of Web sites that might contain the required information. The first few URLs are presented to the user for browsing. Meanwhile, the query processor learns both the information required by the user and the way that the user navigates through hyperlinks to locate such information. With the learned knowledge, it processes the rest URLs and produces precise query results in the form of segments of Web pages without user involvement. The preliminary experimental results indicate that the approach can process a range of Web queries with satisfactory performance. The architecture of such a query processor, techniques of modeling HTML pages, and knowledge for query processing are discussed. Experiments on the effectiveness of the approach, the required knowledge, and the training strategies are presented.\n\n2-Hop Neighbour:\nKeyword Spices: A New Method for Building Domain-Specific Web Search Engines This paper presents a new method for building domain-specific web search engines. Previous methods eliminate irrelevant documents from the pages accessed using heuristics based on human knowledge about the domain in question. Accordingly, they are hard to build and can not be applied to other domains. The keyword spice method, in contrast, improves search performance by adding", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
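The record above turns on a concrete procedure: train a naive Bayes text classifier on the few labeled documents, let EM assign probabilistic labels to the unlabeled pool, refit, and repeat, with Query-by-Committee choosing which documents to have labeled next. As a rough sketch of the EM half of that procedure only — this is not code from the paper, and every function and variable name here is invented for illustration — the loop below alternates E and M steps of a multinomial naive Bayes model over document-term count matrices:

```python
import numpy as np

def nb_em(X_lab, y_lab, X_unl, n_classes, n_iter=10, alpha=1.0):
    """Semi-supervised multinomial naive Bayes trained with EM (illustrative sketch).

    X_lab, X_unl: document-term count matrices; y_lab: integer class labels."""
    def m_step(X, resp):
        # resp holds soft class memberships, one row per document
        priors = resp.sum(axis=0) + alpha
        priors /= priors.sum()
        counts = resp.T @ X + alpha                      # expected term counts per class
        cond = counts / counts.sum(axis=1, keepdims=True)
        return np.log(priors), np.log(cond)

    def e_step(X, log_prior, log_cond):
        log_post = X @ log_cond.T + log_prior            # unnormalised log P(class | doc)
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        return post / post.sum(axis=1, keepdims=True)

    resp_lab = np.eye(n_classes)[y_lab]                  # hard labels for labeled docs
    log_prior, log_cond = m_step(X_lab, resp_lab)        # initialise from labeled data only
    for _ in range(n_iter):
        resp_unl = e_step(X_unl, log_prior, log_cond)    # E: probabilistic labels for the pool
        log_prior, log_cond = m_step(                    # M: refit on all documents
            np.vstack([X_lab, X_unl]), np.vstack([resp_lab, resp_unl]))
    return log_prior, log_cond
```

An active-learning wrapper around this would, between runs, request labels for the pool documents on which a committee of such models disagrees most, which is the selection rule the record describes.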
[ { "content": "Classify the node 'Knowledge-based Wrapper Generation by Using XML Information extraction is the process of recognizing the particular fragments of a document that constitute its core semantic content. However, most previous information extraction systems were not effective for real-world information sources due to difficulties in acquiring and representing useful domain knowledge and in dealing with structural heterogeneity inherent in different sources. In order to resolve these problems, this paper proposes a scheme of knowledge-based wrapper generation for semi-structured and labeled documents. The implementation of an agent-oriented information extraction system, XTROS, is described. In contrast with previous wrapper learning agents, XTROS represents both the domain knowledge and the wrappers by XML documents to increase modularity, flexibility, and interoperability among multiple parties. XTROS also facilitates simpler implementation of the wrapper generator by exploiting XML parsers and interpreters. XTROS shows good performance on several Web sites in the domain of real estates, and it is expected to be easily adaptable to different domains by plugging in appropriate XML-based domain knowledge. 1' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nA Hierarchical Approach to Wrapper Induction With the tremendous amount of information that becomes available on the Web on a daily basis, the ability to quickly develop information agents has become a crucial problem. A vital component of any Web-based information agent is a set of wrappers that can extract the relevant data from semistructured information sources. Our novel approach to wrapper induction is based on the idea of hierarchical information extraction, which turns the hard problem of extracting data from an arbitrarily complex document into a series of easier extraction tasks. We introduce an inductive algorithm, stalker, that generates high accuracy extraction rules based on user-labeled training examples. Labeling the training data represents the major bottleneck in using wrapper induction techniques, and our experimental results show that stalker does significantly better then other approaches; on one hand, stalker requires up to two orders of magnitude fewer examples than other algorithms, while on the other hand...\n\n1-Hop Neighbour:\nA Shopping Agent That Automatically Constructs Wrappers for Semi-Structured Online Vendors . This paper proposes a shopping agent with a robust inductive learning method that automatically constructs wrappers for semistructured online stores. Strong biases assumed in many existing systems are weakened so that the real stores with reasonably complex document structures can be handled. Our method treats a logical line as a basic unit, and recognizes the position and the structure of product descriptions by finding the most frequent pattern from the sequence of logical line information in output HTML pages. This method is capable of analyzing product descriptions that comprise multiple logical lines, and even those with extra or missing attributes. Experimental tests on over 60 sites show that it successfully constructs correct wrappers for most real stores. 1 Introduction A shopping agent is a mediator system that extracts the product descriptions from several online stores on a user's behalf. 
Since the stores are heterogeneous, a procedure for extracting the cont...\n\n1-Hop Neighbour:\nLearning Information Extraction Rules for Semi-structured and Free Text . A wealth of on-line text information can be made available to automatic processing by information extraction (IE) systems. Each IE application needs a separate set of rules tuned to the domain and writing style. WHISK helps to overcome this knowledge-engineering bottleneck by learning text extraction rules automatically. WHISK is designed to handle text styles ranging from highly structured to free text, including text that is neither rigidly formatted nor composed of grammatical sentences. Such semistructured text has largely been beyond the scope of previous systems. When used in conjunction with a syntactic analyzer and semantic tagging, WHISK can also handle extraction from free text such as news stories. Keywords: natural language processing, information extraction, rule learning 1. Information extraction As more and more text becomes available on-line, there is a growing need for systems that extract information automatically from text data. An information extraction (IE) sys...\n\n2-Hop Neighbour:\nContext in Web Search Web search engines generally treat search requests in isolation. The results for a given query are identical, independent of the user, or the context in which the user made the request. Nextgeneration search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts. Greater use of context in web search may help increase competition and diversity on the web.\n\n2-Hop Neighbour:\nWeb Mining Research: A Survey With the huge amount of information available online, the World Wide Web is a fertile area for data mining research. The Web mining research is at the cross road of research from several research communities, such as database, information retrieval, and within AI, especially the sub-areas of machine learning and natural language processing. However, there is a lot of confusions when comparing research efforts from different point of views. In this paper, we survey the research in the area of Web mining, point out some confusions regarded the usage of the term Web mining and suggest three Web mining categories. Then we situate some of the research with respect to these three categories. We also explore the connection between the Web mining categories and the related agent paradigm. For the survey, we focus on representation issues, on the process, on the learning algorithm, and on the application of the recent works as the criteria. We conclude the paper with some research issues.\n\n2-Hop Neighbour:\n(LP)2, an Adaptive Algorithm for Information Extraction from Web-related Texts (LP) 2 is an algorithm for adaptive Information Extraction from Web-related text that induces symbolic rules by learning from a corpus tagged with SGML tags. Induction is performed by bottom-up generalisation of examples in a training corpus. Training is performed in two steps: initially a set of tagging rules is learned; then additional rules are induced to correct mistakes and imprecision in tagging. Shallow NLP is used to generalise rules beyond the flat word structure. Generalization allows a better coverage on unseen texts, as it limits data sparseness and overfitting in the training phase. 
In experiments on publicly available corpora the algorithm outperforms any other algorithm presented in literature and tested on the same corpora. Experiments also show a significant gain in using NLP in terms of (1) effectiveness (2) reduction of training time and (3) training corpus size. In this paper we present the machine learning algorithm for rule induction. In particular we focus on the NLP-based generalisation and the strategy for pruning both the search space and the final rule set. 1.\n\n2-Hop Neighbour:\nWeb Mining in Soft Computing Framework: Relevance, State of the Art and Future Directions This paper summarizes the different characteristics of web data, the basic components of web mining and its different types, and their current states of the art. The reason for considering web mining, a separate field from data mining, is explained. The limitations of some of the existing web mining methods and tools are enunciated, and the significance of soft computing (comprising fuzzy logic (FL), artificial neural networks (ANNs), genetic algorithms (GAs), and rough sets (RSs) highlighted. A survey of the existing literature on \"soft web mining\" is provided along with the commercially available systems. The prospective areas of web mining where the application of soft computing needs immediate attention are outlined with justification. Scope for future research in developing \"soft web mining\" systems is explained. An extensive bibliography is also provided.\n\n2-Hop Neighbour:\nRelational Learning via Propositional Algorithms: An Information Extraction Case Study This paper develops a new paradigm for relational learning which allows for the representation and learning of relational information using propositional means. This paradigm suggests different tradeoffs than those in the traditional approach to this problem -- the ILP approach -- and as a result it enjoys several significant advantages over it. In particular, the new paradigm is more flexible and allows the use of any propositional algorithm, including probabilistic algorithms, within it. We evaluate the new approach on an important and relation-intensive task - Information Extraction - and show that it outperforms existing methods while being orders of magnitude more efficient. 1", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
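The record above (XTROS) represents both the domain knowledge and the learned wrappers as XML documents, so extraction is driven by declarative rules that standard XML parsers and interpreters can load. The toy sketch below shows only the general shape of that idea: the element names, attributes and regular expressions are invented for illustration and are not XTROS's actual wrapper format.

```python
import re
import xml.etree.ElementTree as ET

# A toy XML wrapper specification: each <field> pairs an output slot with a
# regular expression to apply to the page text (illustrative format only).
WRAPPER_XML = """
<wrapper domain="real-estate">
  <field name="price"    pattern="\\$([0-9,]+)"/>
  <field name="bedrooms" pattern="([0-9]+)\\s*BR"/>
</wrapper>
"""

def load_wrapper(xml_text):
    """Parse the rule document into {field name: compiled regex}."""
    root = ET.fromstring(xml_text)
    return {f.get("name"): re.compile(f.get("pattern")) for f in root.iter("field")}

def apply_wrapper(rules, page):
    """Apply every rule to a page and collect one flat record."""
    record = {}
    for name, regex in rules.items():
        m = regex.search(page)
        record[name] = m.group(1) if m else None
    return record

if __name__ == "__main__":
    rules = load_wrapper(WRAPPER_XML)
    print(apply_wrapper(rules, "Charming 3 BR bungalow, asking $249,900"))
    # -> {'price': '249,900', 'bedrooms': '3'}
```

Because the rules live in a declarative file rather than in code, supporting a new site or domain amounts, at least in principle, to swapping in a different rule document.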
[ { "content": "Classify the node 'Context in Web Search Web search engines generally treat search requests in isolation. The results for a given query are identical, independent of the user, or the context in which the user made the request. Nextgeneration search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts. Greater use of context in web search may help increase competition and diversity on the web.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nSTARTS: Stanford Proposal for Internet Meta-Searching Document sources are available everywhere, both within the internal networks of organizations and on the Internet. Even individual organizations use search engines from different vendors to index their internal document collections. These search engines are typically incompatible in that they support different query models and interfaces, they do not return enough information with the query results for adequate merging of the results, and finally, in that they do not export metadata about the collections that they index (e.g., to assist in resource discovery). This paper describes STARTS, an emerging protocol for Internet retrieval and search that facilitates the task of querying multiple document sources. STARTS has been developed in a unique way. It is not a standard, but a group effort coordinated by Stanford's Digital Library project, and involving over 11 companies and organizations. The objective of this paper is not only to give an overview of the STARTS protocol proposal, but...\n\n1-Hop Neighbour:\nComputing Geographical Scopes of Web Resources Many information resources on the web are relevant primarily to limited geographical communities. For instance, web sites containing information on restaurants, theaters, and apartment rentals are relevant primarily to web users in geographical proximity to these locations. In contrast, other information resources are relevant to a broader geographical community. For instance, an on-line newspaper may be relevant to users across the United States. Unfortunately, most current web search engines largely ignore the geographical scope of web resources. In this paper, we introduce techniques for automatically computing the geographical scope of web resources, based on the textual content of the resources, as well as on the geographical distribution of hyperlinks to them. We report an extensive experimental evaluation of our strategies using real web data. Finally, we describe a geographically-aware search engine that we have built using our techniques for determining the geographical scope of web resources. 1\n\n1-Hop Neighbour:\nSearching the Web: General and Scientific Information Access he World Wide Web is revolutionizing the way people access information, and has opened up new possibilities in areas such as digital libraries, general and scientific information dissemination and retrieval, education, commerce, entertainment, government, and health care. The amount of publicly available information on the Web is increasing rapidly [1]. The Web is a gigantic digital library, a searchable 15 billion word encyclopedia [2]. 
It has stimulated research and development in information retrieval and dissemination, and fostered search engines such as AltaVista. These new developments are not limited to the Web, and can enhance access to virtually all forms of digital libraries. The revolution the Web has brought to information access is not so much due to the availability of information (huge amounts of information has long been available in libraries and elsewhere), but rather the increased efficiency of accessing information, which can make previously impractical tasks practical. There are many avenues for improvement in the efficiency of accessing information on the Web, for example, in the areas of locating and organizing information. This article discusses general and scientific information access on the Web, and many of our comments are applicable to digital libraries in general. The effectiveness of Web search engines is discussed, including results that show that the major search engines cover only a fraction of the “publicly indexable Web ” (the part of the Web which is considered for indexing by the major engines, which excludes pages hidden behind search forms, pages with authorization requirements, etc.). Current research into improved searching of the Web is discussed, including new techniques for ranking the relevance of results, and new techniques in metasearch that can improve the efficiency and effectiveness of Web search. The amount of scientific information and the number of electronic journals on the Internet continues to increase. Researchers are increasingly making their work available online. This article also discusses the creation of digital libraries of the scientific literature, incorporating autonomous citation indexing. The autonomous creation of citation indices\n\n2-Hop Neighbour:\nThe Overview of Web Search Engines The World Wide Web allows people to share information globally. The amount of information grows without bound. In order to extract information that we are interested in, we need a tool to search the Web. The tool is called a search engine. This survey covers different components of the search engine and how the search engine really works. It provides a background understanding of information retrieval. It discusses different search engines that are commercially available. It investigates how the search engines find information in the Web and how they rank its pages to the give n query. Also, the paper provides guidelines for users on how to use search engines.\n\n2-Hop Neighbour:\nTheme-Based Retrieval of Web News Efficient information retrieval of highly dynamic information, such as Web news, is a complex task. As a result, search and retrieval environments for continuously updated news from other sources than the largest media conglomerates are almost absent on the Internet. Global search engines do not index or classify news information from smaller network communities. To address this problem, I developed NewsSearch, a news information management environment designed to improve retrieval efficiency of online news for the smaller networked communities. NewsSearch search achieves its goal through a combination of techniques: . Multiple indexing queues, defining multiple gathering schedules, to deal with different publication periodicities. . Information Retrieval techniques to news, in order to classify them into a pre-defined set of themes. . Support Vector Machines, which proved to be a fast and reliable classification technique. 
NewsSearch proved to be a scalable solution with acceptable storage needs even while managing a fairly large collection of daily publications. A combination of fine tuning of training strategies, noise filtering of Web news documents and multiple classifications, enable NewsSearch to achieve a classification accuracy of 95%. ACKNOWLEDGEMENTS This work was supported in part by the PRAXIS project ARIADNE (Pblico Digital -- Praxis XXI, Medida 3.1b) and project SAGRES (Praxis/P/TIT/1676/95). TABLE OF CONTENTS CHAPTER I\n\n2-Hop Neighbour:\nEvaluating Database Selection Techniques: A Testbed and Experiment We describe a testbed for database selection techniques and an experiment conducted using this testbed. The testbed is a decomposition of the TREC/TIPSTER data that allows analysis of the data along multiple dimensions, including collection-based and temporal-based analysis. We characterize the subcollections in this testbed in terms of number of documents, queries against which the documents have been evaluated for relevance, and distribution of relevant documents. We then present initial results from a study conducted using this testbed that examines the effectiveness of the gGlOSS approach to database selection. The databases from our testbed were ranked using the gGlOSS techniques and compared to the gGlOSS Ideal(l) baseline and a baseline derived from TREC relevance judgements. We have examined the degree to which several gGlOSS estimate functions approximate these baselines. Our initial results confirm that the gGlOSS estimators are excellent predictors of the Ideal(l) ranks but...\n\n2-Hop Neighbour:\nSDLIP + STARTS = SDARTS A Protocol and Toolkit for Metasearching In this paper we describe how we combined SDLIP and STARTS, two complementary protocols for searching over distributed document collections. The resulting protocol, which we call SDARTS, is simple yet expressible enough to enable building sophisticated metasearch engines. SDARTS can be viewed as an instantiation of SDLIP with metasearchspecific elements from STARTS. We also report on our experience building three SDARTS-compliant wrappers: for locally available plain-text document collections, for locally available XML document collections, and for external webaccessible collections. These wrappers were developed to be easily customizable for new collections. Our work was developed as part of Columbia University's Digital Libraries Initiative--Phase 2 (DLI2) project, which involves the departments of Computer Science, Medical Informatics, and Electrical Engineering, the Columbia University libraries, and a large number of industrial partners. The main goal of the project is to provide personalized access to a distributed patient-care digital library.\n\n2-Hop Neighbour:\nAutomatic Discovery of Language Models for Text Databases The proliferation of text databases within large organizations and on the Internet makes it difficult for a person to know which databases to search. Given language models that describe the contents of each database, a database selection algorithm such as GlOSS can provide assistance by automatically selecting appropriate databases for an information need. Current practice is that each database provides its language model upon request, but this cooperative approach has important limitations. This paper demonstrates that cooperation is not required. 
Instead, the database selection service can construct its own language models by sampling database contents via the normal process of running queries and retrieving documents. Although random sampling is not possible, it can be approximated with carefully selected queries. This sampling approach avoids the limitations that characterize the cooperative approach, and also enables additional capabilities. Experimental results demonstrate th...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
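The last neighbour in the record above argues that a database selection service need not rely on cooperative export of collection statistics: it can approximate a database's language model itself by running queries and counting the terms in the documents that come back. The sketch below is a deliberately simplified rendering of that loop under stated assumptions — `search_fn` stands in for whatever query interface the database actually exposes, and the scoring rule is a crude placeholder rather than a real selection formula such as CORI or gGlOSS.

```python
import random
from collections import Counter

def sample_language_model(search_fn, seed_terms, n_queries=50, docs_per_query=4):
    """Approximate a database's term statistics by query-based sampling.

    search_fn(term, k) is assumed to return up to k result documents (strings)
    for a single-term query; it stands in for the database's search interface."""
    model, seen = Counter(), set()
    terms = list(seed_terms)
    for _ in range(n_queries):
        term = random.choice(terms)
        for doc in search_fn(term, docs_per_query):
            if doc in seen:          # do not count the same document twice
                continue
            seen.add(doc)
            words = doc.lower().split()
            model.update(words)
            terms.extend(words)      # newly observed words feed later queries
    return model

def score_database(model, query):
    """Crude selection score: fraction of the sampled term mass matching the query."""
    total = sum(model.values()) or 1
    return sum(model[w] for w in query.lower().split()) / total
```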
[ { "content": "Classify the node 'Fuzzy Concepts and Formal Methods: A Fuzzy Logic Toolkit for Z It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However some system problems, particularly those drawn from the IS problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper suggests fuzzy set theory as a possible representation scheme for this imprecision or approximation. We provide a summary of a toolkit that defines the operators, measures and modifiers necessary for the manipulation of fuzzy sets and relations. We also provide some examples of the laws which establish an isomorphism between the extended notation presented here and conventional Z when applied to boolean sets and relations.' into one of the following categories:

Agents
Machine Learning
Information Retrieval
Databases
Human-Computer Interaction
Artificial Intelligence.

Refer to the neighbour nodes for context.

1-Hop Neighbour:
Fuzzy Concepts and Formal Methods: Some Illustrative Examples It has been recognised that formal methods are useful as a modelling tool in requirements engineering. Specification languages such as Z permit the precise and unambiguous modelling of system properties and behaviour. However some system problems, particularly those drawn from the IS problem domain, may be difficult to model in crisp or precise terms. It may also be desirable that formal modelling should commence as early as possible, even when our understanding of parts of the problem domain is only approximate. This paper identifies the problem types of interest and argues that they are characterised by uncertainty and imprecision. It suggests fuzzy set theory as a useful formalism for modelling aspects of this imprecision. The paper illustrates how a fuzzy logic toolkit for Z can be applied to such problem domains. Several examples are presented illustrating the representation of imprecise concepts as fuzzy sets and relations, soft pre-conditions and system requirements as a series of linguistically quantified propositions. 1", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
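Both abstracts in the record above revolve around a toolkit of operators, measures and modifiers for manipulating fuzzy sets, and around the point that the extended notation collapses to conventional crisp sets when memberships are restricted to 0 and 1. The fragment below is a minimal sketch of such a toolkit using the standard max/min operators and a 'very' concentration modifier; it illustrates ordinary fuzzy set theory only and is not the paper's Z notation.

```python
def fuzzy_union(a, b):
    """Membership of the union of two fuzzy sets (standard max s-norm)."""
    return {x: max(a.get(x, 0.0), b.get(x, 0.0)) for x in set(a) | set(b)}

def fuzzy_intersection(a, b):
    """Membership of the intersection (standard min t-norm)."""
    return {x: min(a.get(x, 0.0), b.get(x, 0.0)) for x in set(a) | set(b)}

def fuzzy_complement(a):
    return {x: 1.0 - mu for x, mu in a.items()}

def very(a):
    """A linguistic modifier (concentration): squaring sharpens memberships."""
    return {x: mu ** 2 for x, mu in a.items()}

# With memberships restricted to {0.0, 1.0} these operators behave exactly like
# crisp set union, intersection and complement -- the boolean special case that
# the abstract's isomorphism laws refer to.
tall = {"ann": 0.9, "bob": 0.4, "cal": 1.0}
fast = {"ann": 0.3, "bob": 0.8}
print(fuzzy_intersection(tall, fast))  # e.g. {'ann': 0.3, 'bob': 0.4, 'cal': 0.0} (key order may vary)
```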
[ { "content": "Classify the node 'Uso Di Piani Di Problem-Solving Nel Riconoscimento Di Piani E Obiettivi This article discusses the role of problem-solving plans in the interpretation of natural language dialogues. A "problem-solving plan" is understood as a declarative description of the steps in the process of planning and executing linguistic and domain-level actions. The article shows that an appropriate representation of these plans is the basis for modelling the cooperative behaviour of the agents taking part in a dialogue. Problem-solving plans are part of an agent architecture capable of cooperating with other agents. 1 Introduction The analysis of Natural Language plays an important role in the development of intelligent interfaces: indeed, to make a system friendly towards its users, it is important to enrich it with a theory of language that describes the communicative strategies people adopt in order to interact. Moreover, the basic concepts of rationality and cooperativity must be made explicit in the system. The idea is ...' into one of the following categories:

Agents
Machine Learning
Information Retrieval
Databases
Human-Computer Interaction
Artificial Intelligence.

Refer to the neighbour nodes for context.

1-Hop Neighbour:
The Repair of Speech Act Misunderstandings by Abductive Inference In this paper, we have concentrated on the repair of mis-understanding. Our colleagues Heeman and Edmonds have looked at the repair of non-understanding. The difference between the two situations is that in the former, the agent derives exactly one interpretation of an utterance and hence is initially unaware of any problem; in the latter, the agent derives either more than one interpretation, with no way to choose between them, or no interpretation at all, and so the problem is immediately apparent. Heeman and Edmonds looked in particular at cases in which a referring expression uttered by one conversant was not understood by the other (Heeman and Hirst 1995; Edmonds 1994; Hirst et al. 1994). Clark and his colleagues (Clark and Wilkes-Gibbs 1986; Clark 1993) have shown that in such situations, conversants will collaborate on repairing the problem by, in effect, negotiating a reconstruction or elaboration of the referring expression. Heeman and Edmonds model this with a plan recognition and generation system that can recognize faulty plans and try to repair them. Thus (as in our own model) two copies of the system can converse with each other, negotiating referents of referring expressions that are not understood by trying to recognize the referring plans of the other, repairing them where necessary, and presenting the new referring plan to the other for approval.

2-Hop Neighbour:
Achieving Robust Human-Computer Communication This paper describes a computational approach to robust human-computer interaction. The approach relies on an explicit, declarative representation of the content and structure of the interaction that a computer system builds over the course of the interaction. In this paper, we will show how this representation allows the system to recognize and repair misunderstandings between the human and the computer. We demonstrate the utility of the representations by showing how they facilitate the repair process. 1 Introduction In dialogs between people or between people and machines, understanding is an uncertain process.
If the goals or beliefs of two discourse 1 participants differ, one of them might interpret an event in the dialog in a way that she believes is complete and correct, although her interpretation is not the one that the other one had intended. When this happens, we as analysts (or observers) would say that a misunderstanding has occurred. The participants themselves might...\n\n2-Hop Neighbour:\nSpeech Acts for Dialogue Agents this paper by the U.S. Army Research Office under contract/grant number DAAH 04 95 10628 and the U.S. National Science Foundation under grant IRI9311988. Some of the work described above was developed in collaboration with James Allen and supported by ONR/DARPA under grant number N00014-92J -1512, by ONR under research grant number N00014-90-J-1811, and by NSF under grant number IRI-9003841.\n\n2-Hop Neighbour:\nA Plan Based Agent Architecture for Interpreting Natural Language Dialogue This paper describes a plan-based agent architecture for modeling NL cooperative dialogue; in particular, the paper focuses on the interpretation dialogue and explanation of its coherence by means of the recognition of the speakers' underlying intentions. The approach we propose makes it possible to analyze and explain in a uniform way several apparently unrelated linguistic phenomena, which have been often studied separately and treated via ad-hoc methods in the models of dialogue presented in the literature. Our model of linguistic interaction is based on the idea that dialogue can be seen as any other interaction among agents: therefore, domain-level and linguistic actions are treated in a similar way. Our agent architecture is based on a two-level representation of the knowledge about acting: at the metalevel, the Agent Modeling plans describe the recipes for plan formation and execution (they are a declarative representation of a reactive planner); at the object level, the domain ...\n\n2-Hop Neighbour:\nMixed Depth Representations for Dialog Processing We describe our work on developing a general purpose tutoring system that will allow students to practice their decision-making skills in a number of domains. The tutoring system, B2, supports mixed-initiative natural language interaction. The natural language processing and knowledge representation components are also general purpose|which leads to a tradeo between the limitations of super cial processing and syntactic representations and the di culty of deeper methods and conceptual representations. Students ' utterances may be short and ambiguous, requiring extensive reasoning about the domain or the discourse model to fully resolve. However, full disambiguation is rarely necessary. Our solution is to use a mixed-depth representation, one that encodes syntactic and conceptual information in the same structure. As a result, we can use the same representation framework to produce a detailed representation of requests (which tend to be well-speci ed) and to produce a partial representation of questions (which tend to require more inference about the context). Moreover, the representations use the same knowledge representation framework that is used to reason about discourse processing and domain information|so that the system can reason with (and about) the utterances, if necessary.", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Efficient Learning of Semi-structured Data from Queries This paper studies the polynomial-time learnability of the classes of ordered gapped tree patterns (OGT) and ordered gapped forests (OGF) under the into-matching semantics in the query learning model of Angluin. The class OGT is a model of semi-structured database query languages, and a generalization of both the class of ordered/unordered tree pattern languages and the class of non-erasing regular pattern languages.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nWrapper Induction: Efficiency and Expressiveness The Internet presents numerous sources of useful information---telephone directories, product catalogs, stock quotes, event listings, etc. Recently, many systems have been built that automatically gather and manipulate such information on a user's behalf. However, these resources are usually formatted for use by people (e.g., the relevant content is embedded in HTML pages), so extracting their content is difficult. Most systems use customized wrapper procedures to perform this extraction task. Unfortunately, writing wrappers is tedious and error-prone. As an alternative, we advocate wrapper induction, a technique for automatically constructing wrappers. In this article, we describe six wrapper classes, and use a combination of empirical and analytical techniques to evaluate the computational tradeoffs among them. We first consider expressiveness: how well the classes can handle actual Internet resources, and the extent to which wrappers in one class can mimic those in another. We then...\n\n2-Hop Neighbour:\nIntegration of Heterogeneous Databases Without Common Domains Using Queries Based on Textual Similarity Most databases contain \"name constants\" like course numbers, personal names, and place names that correspond to entities in the real world. Previous work in integration of heterogeneous databases has assumed that local name constants can be mapped into an appropriate global domain by normalization. However, in many cases, this assumption does not hold; determining if two name constants should be considered identical can require detailed knowledge of the world, the purpose of the user's query, or both. In this paper, we reject the assumption that global domains can be easily constructed, and assume instead that the names are given in natural language text. We then propose a logic called WHIRL which reasons explicitly about the similarity of local names, as measured using the vector-space model commonly adopted in statistical information retrieval. We describe an efficient implementation of WHIRL and evaluate it experimentally on data extracted from the World Wide Web. We show that WHIR...\n\n2-Hop Neighbour:\nRelevance Feedback and Query Expansion for Searching the Web: A Model for Searching a Digital Library : A fully operational large scale digital library is likely to be based on a distributed architecture and because of this it is likely that a number of independent search engines may be used to index different overlapping portions of the entire contents of the library. 
In any case, different media, text, audio, image, etc., will be indexed for retrieval by different search engines so techniques which provide a coherent and unified search over a suite of underlying independent search engines are thus likely to be an important part of navigating in a digital library. In this paper we present an architecture and a system for searching the world's largest DL, the world wide web. What makes our system novel is that we use a suite of underlying web search engines to do the bulk of the work while our system orchestrates them in a parallel fashion to provide a higher level of information retrieval functionality. Thus it is our meta search engine and not the underlying direct search engines th...\n\n2-Hop Neighbour:\nAuthoring and Annotation of Web Pages in CREAM Richly interlinked, machine-understandable data constitute the basis for the Semantic Web. We provide a framework, CREAM, that allows for creation of metadata. While the annotation mode of CREAM allows to create metadata for existing web pages, the authoring mode lets authors create metadata --- almost for free --- while putting together the content of a page. As a particularity of our framework, CREAM allows to create relational metadata, i.e. metadata that instantiate interrelated definitions of classes in a domain ontology rather than a comparatively rigid template-like schema as Dublin Core. We discuss some of the requirements one has to meet when developing such an ontology-based framework, e.g. the integration of a metadata crawler, inference services, document management and a meta-ontology, and describe its implementation, viz. Ont-O-Mat a component-based, ontology-driven Web page authoring and annotation tool.\n\n2-Hop Neighbour:\nA Hierarchical Approach to Wrapper Induction With the tremendous amount of information that becomes available on the Web on a daily basis, the ability to quickly develop information agents has become a crucial problem. A vital component of any Web-based information agent is a set of wrappers that can extract the relevant data from semistructured information sources. Our novel approach to wrapper induction is based on the idea of hierarchical information extraction, which turns the hard problem of extracting data from an arbitrarily complex document into a series of easier extraction tasks. We introduce an inductive algorithm, stalker, that generates high accuracy extraction rules based on user-labeled training examples. Labeling the training data represents the major bottleneck in using wrapper induction techniques, and our experimental results show that stalker does significantly better then other approaches; on one hand, stalker requires up to two orders of magnitude fewer examples than other algorithms, while on the other hand...\n\n2-Hop Neighbour:\nAccurately and Reliably Extracting Data from the Web: A Machine Learning Approach A critical problem in developing information agents for the Web is accessing data that is formatted for human use. We have developed a set of tools for extracting data from web sites and transforming it into a structured data format, such as XML. The resulting data can then be used to build new applications without having to deal with unstructured data. 
The advantages of our wrapping technology over previous work are the ability to learn highly accurate extraction rules, to verify the wrapper to ensure that the correct data continues to be extracted, and to automatically adapt to changes in the sites from which the data is being extracted. 1 Introduction There is a tremendous amount of information available on the Web, but much of this information is not in a form that can be easily used by other applications. There are hopes that XML will solve this problem, but XML is not yet in widespread use and even in the best case it will only address the problem within application domains...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
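Several abstracts in the record above describe wrapper induction: learning, from a handful of labeled pages, the delimiters that locate the data to be extracted. As a concrete illustration of the simplest single-slot, left-right flavour of the idea — a toy reconstruction, not the algorithm of any particular paper cited above — the following learns a left and a right delimiter from example pages and applies them to a new page:

```python
import os

def common_suffix(strings):
    """Longest common suffix, computed as the common prefix of the reversals."""
    return os.path.commonprefix([s[::-1] for s in strings])[::-1]

def learn_lr_delimiters(pages, targets):
    """Induce a single-slot left/right wrapper from pages and the values
    to be extracted from them (a toy version of LR wrapper induction)."""
    lefts, rights = [], []
    for page, value in zip(pages, targets):
        i = page.index(value)
        lefts.append(page[:i])
        rights.append(page[i + len(value):])
    left = common_suffix(lefts)             # longest common tail of the prefixes
    right = os.path.commonprefix(rights)    # longest common head of the suffixes
    return left, right

def extract(page, left, right):
    start = page.index(left) + len(left)
    end = page.index(right, start)
    return page[start:end]

pages = ["<b>Price:</b> 12.50 <i>USD</i>", "<b>Price:</b> 7.99 <i>USD</i>"]
values = ["12.50", "7.99"]
l, r = learn_lr_delimiters(pages, values)
print(repr(l), repr(r))                              # '<b>Price:</b> ' and ' <i>USD</i>'
print(extract("<b>Price:</b> 3.10 <i>USD</i>", l, r))  # 3.10
```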
[ { "content": "Classify the node 'Routing Documents According to Style Most research on automated text categorization has focused on determining the topic of a given text. While topic is generally the main characteristic of an information need, there are other characteristics that are useful for information retrieval. In this paper we consider the problem of text categorization according to style. For example, in searching the web, we may wish to automatically determine if a given page is promotional or informative, was written by a native English speaker or not, and so on. Learning to determine the style of a document is a dual to that of determining its topic, in that those document features which capture the style of a document are precisely those which are independent of its topic. We here define the features of a document to be the frequencies of each of a set of function words and parts-of-speech triples. We then use machine learning techniques to classify documents. We test our methods on four collections of downloaded newspaper and magazine articl...' into one of the following categories:

Agents
Machine Learning
Information Retrieval
Databases
Human-Computer Interaction
Artificial Intelligence.

Refer to the neighbour nodes for context.

1-Hop Neighbour:
Genre Classification and Domain Transfer for Information Filtering The World Wide Web is a vast repository of information, but the sheer volume makes it difficult to identify useful documents. We identify document genre as an important factor in retrieving useful documents and focus on the novel document genre dimension of subjectivity.

2-Hop Neighbour:
Integrating Automatic Genre Analysis into Digital Libraries With the number and types of documents in digital library systems increasing, tools for automatically organizing and presenting the content have to be found. While many approaches focus on topic-based organization and structuring, hardly any system incorporates automatic structural analysis and representation. Yet, genre information (unconsciously) forms one of the most distinguishing features in conventional libraries and in information searches. In this paper we present an approach to automatically analyze the structure of documents and to integrate this information into an automatically created content-based organization. In the resulting visualization, documents on similar topics, yet representing different genres, are depicted as books in differing colors. This representation supports users intuitively in locating relevant information presented in a relevant form. Keywords Genre Analysis, Self-Organizing Map (SOM), SOMLib, Document Clustering, Visualization, Metaphor Graphics 1.

2-Hop Neighbour:
Iterative Information Retrieval Using Fast Clustering and Usage-Specific Genres This paper describes how collection specific empirically defined stylistics based genre prediction can be brought together with rapid topical clustering to build an interactive information retrieval interface with multi-dimensional presentation of search results. The prototype presented addresses two specific problems of information retrieval: how to enrich the information seeking dialog by encouraging and supporting iterative refinement of queries, and how to enrich the document representation past the shallow semantics allowed by term frequencies. Searching For More Than Words Today's tools for searching information in a document database are based on term occurrence in texts.
The searcher enters a number of terms and a number of documents where those terms or closely related terms appear comparatively frequently are retrieved and presented by the system in list form. This method works well up to a point. It is intuitively understandable, and for competent users and well e...

2-Hop Neighbour:
Gleaning Answers From the Web Introduction This position paper summarizes my recent and ongoing research on Web information extraction and retrieval. I describe wrapper induction and verification techniques for extracting data from structured sources; boosted wrapper induction, an extension of these techniques to handle natural text; ELIXIR, our efficient and expressive language for XML information retrieval; techniques and applications for text genre classification; and stochastic models for XML schema alignment. The unifying theme of these various research projects is to develop enabling technologies that facilitate the rapid development of large Web services for data access and integration. 2 Wrapper induction and verification A wide variety of valuable textual information resides on the Web, but very little is in a machine-understandable form such as XML. Instead, the content

2-Hop Neighbour:
Stylistic Experiments For Information Retrieval . A discussion on various experiments to utilize stylistic variation among texts for information retrieval purposes. 1. Stylistics Texts vary in many ways. Authors make choices when they write a text: they decide how to organize the material they have planned to introduce; they make choices between synonyms and syntactic constructions; they choose an intended audience for the text. Authors will make these choices in various ways and for various reasons: based on personal preferences, on their view of the reader, and on what they know and like about other similar texts. A style is a consistent and distinguishable tendency to make some of these linguistic choices. Style is, on a surface level, very obviously detectable as the choice between items in a vocabulary, between types of syntactical constructions, between the various ways a text can be woven from the material it is made of. It is the information carried in a text when compared to other texts, or in a sense compared to language...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
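The record above classifies documents by style using the frequencies of function words, chosen precisely because they are largely independent of topic. The sketch below shows that feature representation with a nearest-centroid classifier attached; the word list and the classifier are illustrative assumptions, whereas the paper itself uses function-word and part-of-speech-triple frequencies with standard machine learning methods.

```python
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it",
                  "is", "was", "i", "for", "on", "you", "he", "be"]

def style_vector(text):
    """Relative frequencies of a fixed function-word list: features that are
    largely independent of what the document is about."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    n = max(len(tokens), 1)
    return [counts[w] / n for w in FUNCTION_WORDS]

def nearest_centroid(train_vectors, labels, doc):
    """Assign doc to the class whose mean style vector is closest (squared L2)."""
    by_class = {}
    for vec, lab in zip(train_vectors, labels):
        by_class.setdefault(lab, []).append(vec)
    means = {lab: [sum(col) / len(vs) for col in zip(*vs)]
             for lab, vs in by_class.items()}
    x = style_vector(doc)
    return min(means, key=lambda lab: sum((a - b) ** 2 for a, b in zip(means[lab], x)))
```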
[ { "content": "Classify the node 'Probabilistic Hierarchical Clustering with Labeled and Unlabeled Data . This paper presents hierarchical probabilistic clustering methods for unsupervised and supervised learning in datamining applications, where supervised learning is performed using both labeled and unlabeled examples. The probabilistic clustering is based on the previously suggested Generalizable Gaussian Mixture model and is extended using a modified Expectation Maximization procedure for learning with both unlabeled and labeled examples. The proposed hierarchical scheme is agglomerative and based on probabilistic similarity measures. Here, we compare a L 2 dissimilarity measure, error confusion similarity, and accumulated posterior cluster probability measure. The unsupervised and supervised schemes are successfully tested on artificially data and for e-mails segmentation. 1' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nText Classification from Labeled and Unlabeled Documents using EM This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.\n\n1-Hop Neighbour:\nWebmining: Learning from the World Wide Web : Automated analysis of the world wide web is a new challenging area relevant in many applications, e.g., retrieval, navigation and organization of information, automated information assistants, and e-commerce. This paper discusses the use of unsupervised and supervised learning methods for user behavior modeling and content-based segmentation and classification of web pages. The modeling is based on independent component analysis and hierarchical probabilistic clustering techniques. Keywords: Webmining, unsupervised learning, hierarchical probabilistic clustering 1.\n\n1-Hop Neighbour:\nHierarchical Clustering for Datamining . This paper presents hierarchical probabilistic clustering methods for unsupervised and supervised learning in datamining applications. The probabilistic clustering is based on the previously suggested Generalizable Gaussian Mixture model. A soft version of the Generalizable Gaussian Mixture model is also discussed. 
The proposed hierarchical scheme is agglomerative and based on a L 2 distance metric. Unsupervised and supervised schemes are successfully tested on artificially data and for segmention of e-mails. 1\n\n2-Hop Neighbour:\nLearning to Construct Knowledge Bases from the World Wide Web The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more effective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach is to develop a trainable information extraction system that takes two inputs. The first is an ontology that defines the classes (e.g., company, person, employee, product) andrelations (e.g., employed by, produced by) of interest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This article describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system that has created a knowledge base describing university people, courses, and research projects.\n\n2-Hop Neighbour:\nAutomatic Multi-Lingual Information Extraction Information Extraction (IE) is a burgeoning technique because of the explosion of internet. So far, most of the IE systems are focusing on English text; and most of them are in the supervised learning framework, which requires large amount of human labor; and most of them can only work in narrow domain, which is domain dependent. These systems are difficult to be ported to other languages, other domains because of these inherent shortcomings. Currently, besides western languages like English, there are many other Asian languages which are much di erent from English. In English, words are delimited by white-spaces so computer can easily tokenize the input text string. In many languages like Chinese, Japanese, Thai and Korea, they do not have word boundaries between words. This poses a difficult problem for the information extraction for those languages. In this thesis, we intend to implement a self-contained, language independent automatic IE system. The system is automatic because we are using a unsupervised learning framework in which no labeled data is required for training or a semi-supervised learning framework in which small amount of labeled data and large amount of unlabeled data are used. Specifically, we deal with Chinese and English languages name entity recognition and entity relation extraction, but the system can be easily extended to any other languages and other tasks. We implement an unsupervised Chinese word segmenter, a Chinese POS tagger, and we extend maximum entropy models to incorporate unlabeled data for general information extraction.\n\n2-Hop Neighbour:\nApplying Co-Training methods to Statistical Parsing We propose a novel Co-Training method for statistical parsing. The algorithm takes as input a small corpus (9695 sentences) annotated with parse trees, a dictionary of possible lexicalized structures for each word in the training set and a large pool of unlabeled text. 
The algorithm iteratively labels the entire data set with parse trees. Using empirical results based on parsing the Wall Street Journal corpus we show that training a statistical parser on the combined labeled and unlabeled data strongly outperforms training only on the labeled data. 1\n\n2-Hop Neighbour:\nLearning to Extract Symbolic Knowledge from the World Wide Web The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more e ective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach istodevelop a trainable information extraction system that takes two inputs. The rst is an ontology that de nes the classes (e.g., Company, Person, Employee, Product) and relations (e.g., Employed.By, Produced.By) ofinterest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This paper describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system that has created a knowledge base describing university people, courses, and research projects.\n\n2-Hop Neighbour:\nKernel Expansions With Unlabeled Examples Modern classification applications necessitate supplementing the few available labeled examples with unlabeled examples to improve classification performance. We present a new tractable algorithm for exploiting unlabeled examples in discriminative classification. This is achieved essentially by expanding the input vectors into longer feature vectors via both labeled and unlabeled examples. The resulting classification method can be interpreted as a discriminative kernel density estimate and is readily trained via the EM algorithm, which in this case is both discriminative and achieves the optimal solution. We provide, in addition, a purely discriminative formulation of the estimation problem by appealing to the maximum entropy framework. We demonstrate that the proposed approach requires very few labeled examples for high classification accuracy. 1 Introduction In many modern classification problems such as text categorization, very few labeled examples are available but a...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
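The record above builds an agglomerative hierarchy over the components of a Gaussian mixture and compares several dissimilarity measures, among them a plain L2 measure. The sketch below shows only the simplest variant of such a scheme — greedy pairwise merging of component means under squared L2 distance — with all names invented for illustration.

```python
import numpy as np

def agglomerate(means, weights, n_final):
    """Greedily merge mixture components: at each step join the two clusters
    whose means are closest in squared L2 distance, until n_final remain.

    means:   (k, d) array of component means
    weights: length-k sequence of component weights (e.g. mixing proportions)"""
    clusters = [(np.asarray(m, dtype=float), float(w)) for m, w in zip(means, weights)]
    merges = []                           # (i, j, distance) over the current working list
    while len(clusters) > n_final:
        best, pair = None, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = float(np.sum((clusters[i][0] - clusters[j][0]) ** 2))
                if best is None or d < best:
                    best, pair = d, (i, j)
        i, j = pair
        (mi, wi), (mj, wj) = clusters[i], clusters[j]
        merged = ((wi * mi + wj * mj) / (wi + wj), wi + wj)   # weight-averaged mean
        merges.append((i, j, best))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters, merges
```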
[ { "content": "Classify the node 'The Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nCollection Synthesis The invention of the hyperlink and the HTTP transmission protocol caused an amazing new structure to appear on the Internet -- the World Wide Web. With the Web, there came spiders, robots, and Web crawlers, which go from one link to the next checking Web health, ferreting out information and resources, and imposing organization on the huge collection of information (and dross) residing on the net. This paper reports on the use of one such crawler to synthesize document collections on various topics in science, mathematics, engineering and technology. Such collections could be part of a digital library.\n\n1-Hop Neighbour:\nMining the Link Structure of the World Wide Web The World Wide Web contains an enormous amount of information, but it can be exceedingly difficult for users to locate resources that are both high in quality and relevant to their information needs. We develop algorithms that exploit the hyperlink structure of the WWW for information discovery and categorization, the construction of high-quality resource lists, and the analysis of on-line hyperlinked communities. 1 Introduction The World Wide Web contains an enormous amount of information, but it can be exceedingly difficult for users to locate resources that are both high in quality and relevant to their information needs. There are a number of fundamental reasons for this. The Web is a hypertext corpus of enormous size --- approximately three hundred million Web pages as of this writing --- and it continues to grow at a phenomenal rate. 
But the variation in pages is even worse than the raw scale of the data: the set of Web pages taken as a whole has almost no unifying structure, wi...\n\n1-Hop Neighbour:\nInternet search for Indian languages With the Internet growing at an exponential rate, no single search engine can index all of the web. It is therefore necessary to build specialized search engines that fulfill particular needs of a community of people. An example is Citeseer, which indexes research papers on the web. Also, as the web is increasingly hosting web pages in different languages, it is essential to be able to search for information stored in a specific language. For a search engine aimed at information in a particular language, an easy to use user interface is as essential as good response time and relevance of results. We introduce Shodh, a search engine for an Indian language. A prototype has been developed for selected set of pages and results are satisfactory. The user interface of the search engine includes both querying facilities as well as display of query results in the same language in which the information is stored. Contents 1\n\n2-Hop Neighbour:\nA Query Paradigm to Discover the Relation between Text and Images This paper studies the relation between images and text in image databases. An analysis of this relation results in the definition of three distinct query modalities: (1) linguistic scenario: images are part of a whole including a self-contained linguistic discourse, and their meaning derives from their interaction with the linguistic discourse. A typical case of this scenario is constituted by images on the World Wide Web; (2) closed world scenario: images are defined in a limited domain, and their meaning is anchored by conventions and norms in that domain. (3) user scenario: the linguistic discourse is provided by the user. This is the case of highly interactive systems with relevance feedback. This paper deals with image databases of the first type. It shows how the relation between images (or parts of images) and text can be inferred, and exploited for search. The paper develops a similarity model in which the similarity between two images is given by both their visual similarity...\n\n2-Hop Neighbour:\nGreenstone: A Comprehensive Open-Source Digital Library Software System This paper describes the Greenstone digital library software, a comprehensive, open-source system for the construction and presentation of information collections. Collections built with Greenstone offer effective full-text searching and metadata-based browsing facilities that are attractive and easy to use. Moreover, they are easily maintainable and can be augmented and rebuilt entirely automatically. The system is extensible: software “plugins ” accommodate different document and metadata types.\n\n2-Hop Neighbour:\nAutomatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. 
This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. We also provide examples of ARC resource lists for the reader to examine.\n\n2-Hop Neighbour:\nPower to the people: End-user building of digital library collections Naturally, digital library systems focus principally on the reader: the consumer of the material that constitutes the library. In contrast, this paper describes an interface that makes it easy for people to build their own library collections. Collections may be built and served locally from the user's own web server, or (given appropriate permissions) remotely on a shared digital library host. End users can easily build new collections styled after existing ones from material on the Web or from their local files---or both, and collections can be updated and new ones brought on-line at any time. The interface, which is intended for non-professional end users, is modeled after widely used commercial software installation packages. Lest one quail at the prospect of end users building their own collections on a shared system, we also describe an interface for the administrative user who is responsible for maintaining a digital library installation.\n\n2-Hop Neighbour:\nCentroid-Based Document Classification: Analysis & Experimental Results . In this paper we present a simple linear-time centroid-based document classification algorithm, that despite its simplicity and robust performance, has not been extensively studied and analyzed. Our experiments show that this centroid-based classifier consistently and substantially outperforms other algorithms such as Naive Bayesian, k-nearest-neighbors, and C4.5, on a wide range of datasets. Our analysis shows that the similarity measure used by the centroidbased scheme allows it to classify a new document based on how closely its behavior matches the behavior of the documents belonging to different classes. This matching allows it to dynamically adjust for classes with different densities and accounts for dependencies between the terms in the different classes. 1 Introduction We have seen a tremendous growth in the volume of online text documents available on the Internet, digital libraries, news sources, and company-wide intranets. It has been forecasted that these docu...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Partial-Order Planning with Concurrent Interacting Actions In order to generate plans for agents with multiple actuators, agent teams, or distributed controllers, we must be able to represent and plan using concurrent actions with interacting effects. This has historically been considered a challenging task requiring a temporal planner with the ability to reason explicitly about time. We show that with simple modifications, the STRIPS action representation language can be used to represent interacting actions. Moreover, algorithms for partial-order planning require only small modifications in order to be applied in such multiagent domains. We demonstrate this fact by developing a sound and complete partial-order planner for planning with concurrent interacting actions, POMP, that extends existing partial-order planners in a straightforward way. These results open the way to the use of partial-order planners for the centralized control of cooperative multiagent systems. 1. Introduction In order to construct plans for agents with mul...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nOBDD-based Universal Planning for Synchronized Agents in Non-Deterministic Domains Recently model checking representation and search techniques were shown to be efficiently applicable to planning, in particular to non-deterministic planning. Such planning approaches use Ordered Binary Decision Diagrams (obdds) to encode a planning domain as a non-deterministic finite automaton and then apply fast algorithms from model checking to search for a solution. obdds can effectively scale and can provide universal plans for complex planning domains. We are particularly interested in addressing the complexities arising in non-deterministic, multi-agent domains. In this article, we present umop, a new universal obdd-based planning framework for non-deterministic, multi-agent domains. We introduce a new planning domain description language, NADL, to specify non-deterministic, multi-agent domains. The language contributes the explicit definition of controllable agents and uncontrollable environment agents. We describe the syntax and semantics of NADL and show how to bu...\n\n1-Hop Neighbour:\nOn the Knowledge Requirements of Tasks In order to successfully perform a task, a situated system requires some information about its domain. If we can understand what information the system requires, we may be able to equip it with more suitable sensors or make better use of the information available to it. These considerations have motivated roboticists to examine the issue of sensor design, and in particular, the minimal information required to perform a task. We show here that reasoning in terms of what the robot knows and needs to know to perform a task is a useful approach for analyzing these issues. We extend the formal framework for reasoning about knowledge, already used in AI and distributed computing, by developing a set of basic concepts and tools for modeling and analyzing the knowledge requirements of tasks. We investigate properties of the resulting framework, and show how it can be applied to robotics tasks. 
1 Introduction The notion of computational complexity has had a profound effect on the development o...\n\n2-Hop Neighbour:\nFrom the Specification of Multiagent Systems by Statecharts to Their Formal Analysis by Model Checking: Towards Safety-Critical Applications A formalism for the specification of multiagent systems should be expressive and illustrative enough to model not only the behavior of one single agent, but also the collaboration among several agents and the influences caused by external events from the environment. For this, state machines [25] seem to provide an adequate means. Furthermore, it should be easily possible to obtain an implementation for each agent automatically from this specification. Last but not least, it is desirable to be able to check whether the multiagent system satisfies some interesting properties. Therefore, the formalism should also allow for the verification or formal analysis of multiagent systems, e.g. by model checking [6]. In this paper, a framework is introduced, which allows us to express declarative aspects of multiagent systems by means of (classical) propositional logic and procedural aspects of these systems by means of state machines (statecharts). Nowadays statecharts are a well accepted means to specify dynamic behavior of software systems. They are a part of the Unified Modeling Language (UML). We describe in a rigorously formal manner, how the specification of spatial knowledge and robot interaction and its verification by model checking can be done, integrating different methods from the field of artificial intelligence such as qualitative (spatial) reasoning and the situation calculus. As example application domain, we will consider robotic soccer, see also [24, 31], which present predecessor work towards a formal logic-based approach for agents engineering.", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Fact or fiction: Content classification for digital libraries The World-Wide Web (WWW) is a vast repository of information, much of which is valuable but very often hidden to the user. The anarchic nature of the WWW presents unique challenges when it comes to information extraction and categorization. We view the WWW as a valuable resource for the gathering of information for Digital Libraries. In this paper we will describe the process of extracting and classifying information from the WWW for the purpose of integrating it into digital libraries. Our efforts focus on ways to automatically classify news articles according to whether they present opinions or reported facts. We describe and evaluate a system in development that automatically classifies and recommends Web news articles from sports and politics domains. 1' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nLearning to Extract Symbolic Knowledge from the World Wide Web The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more effective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach is to develop a trainable information extraction system that takes two inputs. The first is an ontology that defines the classes (e.g., Company, Person, Employee, Product) and relations (e.g., Employed.By, Produced.By) of interest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This paper describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system that has created a knowledge base describing university people, courses, and research projects.\n\n2-Hop Neighbour:\nInducing Content Based User Models with Inductive Logic Programming Techniques . In this paper we describe an approach for conceptual user\n\n2-Hop Neighbour:\nIntelligent Information Access in the Web: ML based User Modeling for high precision Meta-Search It is a well known fact that high precision search for documents concerning a certain topic in the World Wide Web (Www) is a tough problem. Index based search engines vary in recall (with a coverage of at most 30% of the web) and offer a very bad precision by (simple) keyword search. Meta search engines provide a specialised monolithic architecture for information extraction, and integration of heterogeneous information resources which ensures a bigger recall and may yield a better precision. Few search engines (as, e. g., HuskySearch) employ intelligent techniques in order to increase precision. On the other hand, many personalized agents for web browsing are currently developed. It is a straightforward idea to incorporate the idea of user modeling with machine learning (Ml) methods into web search services. 
We propose an abstract prototype, OySTER, which makes use of machine learning based user modeling and which for reasons of robustness, performance and quality is realised as a mult...\n\n2-Hop Neighbour:\nOZONE: A Zoomable Interface for Navigating Ontology Information We present OZONE (Zoomable Ontology Navigator), for searching and browsing ontological information. OZONE visualizes query conditions and provides interactive, guided browsing for DAML (DARPA Agent Markup Language) ontologies. To visually represent objects in DAML, we define a visual model for its classes, properties and relationships between them. Properties can be expanded into classes for query refinement. The visual query can be formulated incrementally as users explore class and property structures interactively. Zoomable interface techniques are employed for effective navigation and usability. Keywords: Ontology, DAML, Browsing, Zoomable User Interface (ZUI), Jazz, WWW.\n\n2-Hop Neighbour:\nText Classification from Labeled and Unlabeled Documents using EM This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.\n\n2-Hop Neighbour:\nEmploying EM and Pool-Based Active Learning for Text Classification This paper shows how a text classifier's need for labeled training data can be reduced by a combination of active learning and Expectation Maximization (EM) on a pool of unlabeled data. Query-by-Committee is used to actively select documents for labeling, then EM with a naive Bayes model further improves classification accuracy by concurrently estimating probabilistic labels for the remaining unlabeled documents and using them to improve the model. We also present a metric for better measuring disagreement among committee members; it accounts for the strength of their disagreement and for the distribution of the documents. Experimental results show that our method of combining EM and active learning requires only half as many labeled training examples to achieve the same accuracy as either EM or active learning alone. 
Keywords: text classification, active learning, unsupervised learning, information retrieval. 1 Introduction In many settings for learning text classifiers, obtaining lab...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Rational and Convergent Learning in Stochastic Games This paper investigates the problem of policy learning in multiagent environments using the stochastic game framework, which we briefly overview. We introduce two properties as desirable for a learning agent when in the presence of other learning agents, namely rationality and convergence. We examine existing reinforcement learning algorithms according to these two properties and notice that they fail to simultaneously meet both criteria. We then contribute a new learning algorithm, WoLF policy hillclimbing, that is based on a simple principle: \"learn quickly while losing, slowly while winning.\" The algorithm is proven to be rational and we present empirical results for a number of stochastic games showing the algorithm converges. 1' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nMultiagent Learning Using a Variable Learning Rate Learning to act in a multiagent environment is a difficult problem since the normal definition of an optimal policy no longer applies. The optimal policy at any moment depends on the policies of the other agents. This creates a situation of learning a moving target. Previous learning algorithms have one of two shortcomings depending on their approach. They either converge to a policy that may not be optimal against the specific opponents ’ policies, or they may not converge at all. In this article we examine this learning problem in the framework of stochastic games. We look at a number of previous learning algorithms showing how they fail at one of the above criteria. We then contribute a new reinforcement learning technique using a variable learning rate to overcome these shortcomings. Specifically, we introduce the WoLF principle, “Win or Learn Fast, ” for varying the learning rate. We examine this technique theoretically, proving convergence in self-play on a restricted class of iterated matrix games. We also present empirical results on a variety of more general stochastic games, in situations of self-play and otherwise, demonstrating the wide applicability of this method. Key words: Multiagent learning, reinforcement learning, game theory 1\n\n1-Hop Neighbour:\nConvergence of Gradient Dynamics with a Variable Learning Rate As multiagent environments become more prevalent we need to understand how this changes the agent-based paradigm. One aspect that is heavily affected by the presence of multiple agents is learning. Traditional learning algorithms have core assumptions, such as Markovian transitions, which are violated in these environments.", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Searching the World Wide Web in Low-Connectivity Communities The Internet has the potential to deliver information to communities around the world that have no other information resources. High telephone and ISP fees, in combination with low-bandwidth connections, make it unaffordable for many people to browse the Web online. We are developing the TEK system to enable users to search the Web using only email. TEK stands for \"Time Equals Knowledge,\" since the user exchanges time (waiting for email) for knowledge. The system contains three components: 1) the client, which provides a graphical interface for the end user, 2) the server, which performs the searches from MIT, and 3) a reliable email-based communication protocol between the client and the server. The TEK search engine differs from others in that it is designed to return low-bandwidth results, which are achieved by special filtering, analysis, and compression on the server side. We believe that TEK will bring Web resources to people who otherwise would not be able to afford them.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nWeb Mining in Soft Computing Framework: Relevance, State of the Art and Future Directions This paper summarizes the different characteristics of web data, the basic components of web mining and its different types, and their current states of the art. The reason for considering web mining, a separate field from data mining, is explained. The limitations of some of the existing web mining methods and tools are enunciated, and the significance of soft computing (comprising fuzzy logic (FL), artificial neural networks (ANNs), genetic algorithms (GAs), and rough sets (RSs)) highlighted. 
A survey of the existing literature on \"soft web mining\" is provided along with the commercially available systems. The prospective areas of web mining where the application of soft computing needs immediate attention are outlined with justification. Scope for future research in developing \"soft web mining\" systems is explained. An extensive bibliography is also provided.\n\n2-Hop Neighbour:\nSearching the Web: General and Scientific Information Access The World Wide Web is revolutionizing the way people access information, and has opened up new possibilities in areas such as digital libraries, general and scientific information dissemination and retrieval, education, commerce, entertainment, government, and health care. The amount of publicly available information on the Web is increasing rapidly [1]. The Web is a gigantic digital library, a searchable 15 billion word encyclopedia [2]. It has stimulated research and development in information retrieval and dissemination, and fostered search engines such as AltaVista. These new developments are not limited to the Web, and can enhance access to virtually all forms of digital libraries. The revolution the Web has brought to information access is not so much due to the availability of information (huge amounts of information have long been available in libraries and elsewhere), but rather the increased efficiency of accessing information, which can make previously impractical tasks practical. There are many avenues for improvement in the efficiency of accessing information on the Web, for example, in the areas of locating and organizing information. This article discusses general and scientific information access on the Web, and many of our comments are applicable to digital libraries in general. The effectiveness of Web search engines is discussed, including results that show that the major search engines cover only a fraction of the “publicly indexable Web” (the part of the Web which is considered for indexing by the major engines, which excludes pages hidden behind search forms, pages with authorization requirements, etc.). Current research into improved searching of the Web is discussed, including new techniques for ranking the relevance of results, and new techniques in metasearch that can improve the efficiency and effectiveness of Web search. The amount of scientific information and the number of electronic journals on the Internet continue to increase. Researchers are increasingly making their work available online. This article also discusses the creation of digital libraries of the scientific literature, incorporating autonomous citation indexing. The autonomous creation of citation indices\n\n2-Hop Neighbour:\nInCommonSense - Rethinking Web Search Results The World Wide Web is a rich annotation system which allows people to relate to documents and sites from different perspectives. People describe, comment, relate or mock other Web pages in the context of their document. This richness is currently not reflected in snippets presented by Web search engines, where a search result is represented by the text found in the Web document alone. This paper proposes a new method for representing documents in Web search engines' results. This method is based on recent trends in search engine technology and provides descriptions of the retrieved documents, assembled from people's commentary and annotations on the Web. 
This paper suggests a new way for automatically retrieving and reusing people's annotations on the Web, incorporating these annotations into a search engine for creating a hybrid directory-search-engine, allowing for both automatic retrieval and on-the-fly human authored summaries. I. INTRODUCTION It is common knowledge that many use...\n\n2-Hop Neighbour:\nVisual Ranking of Link Structures (Extended Abstract) Methods for ranking World Wide Web resources according to their position in the link structure of the Web are receiving considerable attention, because they provide the first effective means for search engines to cope with the explosive growth and diversification of the Web.\n\n2-Hop Neighbour:\nSynchronizing a database to Improve Freshness In this paper we study how to refresh a local copy of an autonomous data source to maintain the copy up-to-date. As the size of the data grows, it becomes more difficult to maintain the copy \"fresh,\" making it crucial to synchronize the copy effectively. We define two freshness metrics, change models of the underlying data, and synchronization policies. We analytically study how effective the various policies are. We also experimentally verify our analysis, based on data collected from 270 web sites for more than 4 months, and we show that our new policy improves the \"freshness\" very significantly compared to current policies in use.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Web Metasearch as Belief Aggregation Web metasearch requires a mechanism for combining rank-ordered lists of ratings returned by multiple search engines in response to a given user query. We view this as being analogous to the need for combining degrees of belief in probabilistic and uncertain reasoning in artificial intelligence. This paper describes a practical method for performing web metasearch based on a novel transformation-based theory of belief aggregation. The consensus ratings produced by this method take into account the item ratings/rankings output by individual search engines as well as the user's preferences. Copyright © 2000, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. Introduction Web search engines (WSE) use tools ranging from simple text-based search to more sophisticated methods that attempt to understand the intended meanings of both queries and data items. There has been much work in this area in recent years. The link structure of the web has...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nSearching the world wide web The coverage and recency of the major World Wide Web search engines was analyzed, yielding some surprising results. The coverage of any one engine is significantly limited: No single engine indexes more than about one-third of the “indexable Web,” the coverage of the six engines investigated varies by an order of magnitude, and combining the results of the six engines yields about 3.5 times as many documents on average as compared with the results from only one engine. Analysis of the overlap between pairs of engines gives an estimated lower bound on the size of the indexable Web of 320 million pages. The Internet has grown rapidly since its inception in December 1969 (1) and is anticipated to expand 1000% over the next few years (2). The amount of scientific information and the number of electronic journals on the Internet continue to increase [about 1000 journals as of 1996 (2, 3)]. The Internet and the World Wide Web (the Web) represent significant advancements for the retrieval and dissemination of scientific and other literature and for the advancement of education (2, 4). With the introduction of full-text search engines such as AltaVista (www.\n\n1-Hop Neighbour:\nArchitecture of a Metasearch Engine that Supports User Information Needs When a query is submitted to a metasearch engine, decisions are made with respect to the underlying search engines to be used, what modifications will be made to the query, and how to score the results. These decisions are typically made by considering only the user's keyword query, neglecting the larger information need. Users with specific needs, such as \"research papers\" or \"homepages,\" are not able to express these needs in a way that affects the decisions made by the metasearch engine. In this paper, we describe a metasearch engine architecture that considers the user's information need for each decision. Users with different needs, but the same keyword query, may search different sub-search engines, have different modifications made to their query, and have results ordered differently. Our architecture combines several powerful approaches together in a single general purpose metasearch engine. 
1 Introduction Current metasearch engines make several decisions on behalf of the use...\n\n1-Hop Neighbour:\nAn Efficient Boosting Algorithm for Combining Preferences We study the problem of learning to accurately rank a set of objects by combining a given collection of ranking or preference functions. This problem of combining preferences arises in several applications, such as that of combining the results of different search engines, or the \"collaborativefiltering \" problem of ranking movies for a user based on the movie rankings provided by other users. In this work, we begin by presenting a formal framework for this general problem. We then describe and analyze an efficient algorithm called RankBoost for combining preferences based on the boosting approach to machine learning. We give theoretical results describing the algorithm's behavior both on the training data, and on new test data not seen during training. We also describe an efficient implementation of the algorithm for a particular restricted but common case. We next discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different web search strategies, each of which is a query expansion for a given domain. The second experiment is a collaborative-filtering task for making movie recommendations.\n\n2-Hop Neighbour:\nContext and Page Analysis for Improved Web Search NEC Research Institute has developed a metasearch engine that improves the efficiency of Web searches by downloading and analyzing each document and then displaying results that show the query terms in context. Several popular and useful search engines such as AltaVista, Excite, HotBot, Infoseek, Lycos, and Northern Light attempt to maintain full-text indexes of the World Wide Web. However, relying on a single standard search engine has limitations. The standard search engines have limited coverage, 1,2 outdated databases, and are sometimes unavailable due to problems with the network or the engine itself. The precision of standard engine results can also vary because they generally focus on handling queries quickly and use relatively simple ranking schemes. 3 Rankings can be further muddled by keyword spamming to increase a page's rank order. Often, the relevance of a particular page is obvious only after loading it and finding the query terms. Metasearch engines, such as MetaCrawler and SavvySearch, attempt to contend with the problem of limited coverage by submitting queries to several standard search engines at once. 4,5 The primary advantages of metasearch engines are that they combine the results of several search engines and present a consistent user interface. 5 However, most metasearch engines rely on the documents and summaries returned by standard search engines and so inherit their limited precision and vulnerability to keyword spamming. We developed the NEC Research Institute (NECI) metasearch engine [now called Inquirus] to improve the efficiency and precision of Web search by downloading and analyzing each document and then displaying results that show the query terms in\n\n2-Hop Neighbour:\nEstimating the Usefulness of Search Engines In this paper, we present a statistical method to estimate the usefulness of a search engine for any given query. The estimates can be used by a metasearch engine to choose local search engines to invoke. 
For a given query, the usefulness of a search engine in this paper is defined to be a combination of the number of documents in the search engine that are sufficiently similar to the query and the average similarity of these documents. Experimental results indicate that the proposed estimation method is quite accurate. 1 Introduction Many search engines have been created on the Internet to help ordinary users find desired data. Each search engine has a corresponding database that defines the set of documents that can be searched by the search engine. Usually, an index for all documents in the database is created and stored in the search engine to speed up query processing. The amount of data in the Internet is huge (it is believed that by the end of 1997, there were more than 300 mil...\n\n2-Hop Neighbour:\nDigital Libraries and Autonomous Citation Indexing The World Wide Web is revolutionizing the way that researchers access scientific information. Articles are increasingly being made available on the homepages of authors or institutions, at journal Web sites, or in online archives. However, scientific information on the Web is largely disorganized. This article introduces the creation of digital libraries incorporating Autonomous Citation Indexing (ACI). ACI autonomously creates citation indices similar to the Science Citation Index®. An ACI system autonomously locates articles, extracts citations, identifies identical citations that occur in different formats, and identifies the context of citations in the body of articles. ACI can organize the literature and provide most of the advantages of traditional citation indices, such as literature search using citation links, and the evaluation of articles based on citation statistics. Furthermore, ACI can provide significant advantages over traditional citation indices. No manual effort is required for indexing, which should result in a reduction in cost and an increase in the availability of citation indices. An ACI system can also provide more comprehensive and up-to-date indices of the literature by indexing articles on the Web, technical reports, conference papers, etc. Furthermore, ACI makes it easy to browse the context of citations to given articles, allowing researchers to quickly and easily see what subsequent researchers have said about a given article. Digital libraries incorporating ACI may significantly improve scientific dissemination and feedback.\n\n2-Hop Neighbour:\nFeature Selection in Web Applications By ROC Inflections and Powerset Pruning A basic problem of information processing is selecting enough features to ensure that events are accurately represented for classification problems, while simultaneously minimizing storage and processing of irrelevant or marginally important features. To address this problem, feature selection procedures perform a search through the feature power set to find the smallest subset meeting performance requirements. Major restrictions of existing procedures are that they typically explicitly or implicitly assume a fixed operating point, and make limited use of the statistical structure of the feature power set. We present a method that combines the Neyman-Pearson design procedure on finite data, with the directed set structure of the Receiver Operating Curves on the feature subsets, to determine the maximal size of the feature subsets that can be ranked in a given problem. 
The search can then be restricted to the smaller subsets, resulting in significant reductions in computational complexity. Optimizing the overall Receiver Operating Curve also allows for end users to select different operating points and cost functions to optimize. The algorithm also produces a natural method of Boolean representation of the minimal feature combinations that best describe the data near a given operating point. These representations are especially appropriate when describing data using common text-related features useful on the web, such as thresholded TFIDF data. We show how to use these results to perform automatic Boolean query modification generation for distributed databases, such as niche metasearch engines. 1\n\n2-Hop Neighbour:\nBackground Readings for Collection Synthesis", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'The Legion System: A Novel Approach to Evolving Heterogeneity for Collective Problem Solving. We investigate the dynamics of agent groups evolved to perform a collective task, and in which the behavioural heterogeneity of the group is under evolutionary control. Two task domains are studied: solutions are evolved for the two tasks using an evolutionary algorithm called the Legion system. A new metric of heterogeneity is also introduced, which measures the heterogeneity of any evolved group behaviour. It was found that the amount of heterogeneity evolved in an agent group is dependent on the given problem domain: for the first task, the Legion system evolved heterogeneous groups; for the second task, primarily homogeneous groups evolved. We conclude that the proposed system, in conjunction with the introduced heterogeneity measure, can be used as a tool for investigating various issues concerning redundancy, robustness and division of labour in the context of evolutionary approaches to collective problem solving. 1 Introduction Investigations into heterogeneous a...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nHeterogeneity in the Coevolved Behaviors of Mobile Robots: The Emergence of Specialists Many mobile robot tasks can be most efficiently solved when a group of robots is utilized. The type of organization, and the level of coordination and communication within a team of robots affects the type of tasks that can be solved. This paper examines the tradeoff of homogeneity versus heterogeneity in the control systems by allowing a team of robots to coevolve their high-level controllers given different levels of difficulty of the task. Our hypothesis is that simply increasing the difficulty of a task is not enough to induce a team of robots to create specialists. The key factor is not difficulty per se, but the number of skill sets necessary to successfully solve the task. As the number of skills needed increases, the more beneficial and necessary heterogeneity becomes. We demonstrate this in the task domain of herding, where one or more robots must herd another robot into a confined space. 1\n\n2-Hop Neighbour:\nAdaptive Heterogeneous Multi-Robot Teams This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. We describe a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control in robot missions involving loosely coupled tasks. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. 
After presenting ALLIANCE, we describe in detail our experimental results of an implementation of this architecture on a team of physical mobile robots performing a cooperative box pushing demonstration. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes in the capabilities of the robot team.\n\n2-Hop Neighbour:\nTrends In Evolutionary Robotics A review is given on the use of evolutionary techniques for the automatic design of adaptive robots. The focus is on methods which use neural networks and have been tested on actual physical robots. The chapter also examines the role of simulation and the use of domain knowledge in the evolutionary process. It concludes with some predictions about future directions in robotics. Appeared in Soft Computing for Intelligent Robotic Systems, edited by L.C. Jain and T. Fukuda, Physica-Verlag, New York, NY, 215--233, 1998. 1 Introduction To be truly useful, robots must be adaptive. They should have a collection of basic abilities that can be brought to bear in tackling a variety of tasks in a wide range of environments. These fundamental abilities might include navigation to a goal location, obstacle avoidance, object recognition, and object manipulation. However, to date, this desired level of adaptability has not been realized. Instead, robots have primarily been successful when deploye...", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Adaptive Retrieval Agents: Internalizing Local Context and Scaling up to the Web . This paper focuses on two machine learning abstractions springing from ecological models: (i) evolutionary adaptation by local selection, and (ii) selective query expansion by internalization of environmental signals. We first outline a number of experiments pointing to the feasibility and performance of these methods on a general class of graph environments. We then describe how these methods have been applied to the intelligent retrieval of information distributed across networked environments. In particular, the paper discusses a novel distributed evolutionary algorithm and representation used to construct populations of adaptive Web agents. These InfoSpiders search on-line for information relevant to the user, by traversing hyperlinks in an autonomous and intelligent fashion. They can adapt to the spatial and temporal regularities of their local context. Our results suggest that InfoSpiders could complement current search engine technology by starting up where search engines stop...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nBackground Readings for Collection Synthesis\n\n1-Hop Neighbour:\nExploration versus Exploitation in Topic Driven Crawlers Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers. The context available to a topic driven crawler allows for informed decisions about how to prioritize the links to be explored, given time and bandwidth constraints. We have developed a framework and a number of methods to evaluate the performance of topic driven crawler algorithms in a fair way, under limited memory resources. Quality metrics are derived from lexical features, link analysis, and a hybrid combination of the two. In this paper we focus on the issue of how greedy a crawler should be. Given noisy quality estimates of links in a frontier, we investigate what is an appropriate balance between a crawler's need to exploit this information to focus on the most promising links, and the need to explore links that appear suboptimal but might lead to more relevant pages. We show that exploration is essential to locate the most relevant pages under a number of quality measures, in spite of a penalty in the early stage of the crawl.\n\n1-Hop Neighbour:\nTopic-Driven Crawlers: Machine Learning Issues Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers.\n\n2-Hop Neighbour:\nWeb Document Clustering: A Feasibility Demonstration Abstract Users of Web search engines are often forced to sift through the long ordered list of document “snippets” returned by the engines. The IR community has explored document clustering as an alternative method of organizing retrieval results, but clustering has yet to be deployed on the major search engines. The paper articulates the unique requirements of Web document clustering and reports on the first evaluation of clustering methods in this domain. A key requirement is that the methods create their clusters based on the short snippets returned by Web search engines. 
Surprisingly, we find that clusters based on snippets are almost as good as clusters created using the full text of Web documents. To satisfy the stringent requirements of the Web domain, we introduce an incremental, linear time (in the document collection size) algorithm called Suffix Tree Clustering (STC). which creates clusters based on phrases shared between documents. We show that STC is faster than standard clustering methods in this domain, and argue that Web document clustering via STC is both feasible and potentially beneficial. 1\n\n2-Hop Neighbour:\nPower to the people: End-user building of digital library collections Naturally, digital library systems focus principally on the reader: the consumer of the material that constitutes the library. In contrast, this paper describes an interface that makes it easy for people to build their own library collections. Collections may be built and served locally from the user's own web server, or (given appropriate permissions) remotely on a shared digital library host. End users can easily build new collections styled after existing ones from material on the Web or from their local files---or both, and collections can be updated and new ones brought on-line at any time. The interface, which is intended for non-professional end users, is modeled after widely used commercial software installation packages. Lest one quail at the prospect of end users building their own collections on a shared system, we also describe an interface for the administrative user who is responsible for maintaining a digital library installation.\n\n2-Hop Neighbour:\nBreadth-First Search Crawling Yields High-Quality Pages This paper examines the average page quality over time of pages downloaded during a web crawl of 328 million unique pages. We use the connectivity-based metric PageRank to measure the quality of a page. We show that traversing the web graph in breadth-first search order is a good crawling strategy, as it tends to discover high-quality pages early on in the crawl.\n\n2-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. 
Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nImproved Algorithms for Topic Distillation in a Hyperlinked Environment Abstract This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity analysis based algorithm with content analysis. We identify three problems with the existing approach and devise algorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45% over pure connectivity analysis. 1", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'What can you do with a Web in your Pocket? The amount of information available online has grown enormously over the past decade. Fortunately, computing power, disk capacity, and network bandwidth have also increased dramatically. It is currently possible for a university research project to store and process the entire World Wide Web. Since there is a limit on how much text humans can generate, it is plausible that within a few decades one will be able to store and process all the human-generated text on the Web in a shirt pocket. The Web is a very rich and interesting data source. In this paper, we describe the Stanford WebBase, a local repository of a significant portion of the Web. Furthermore, we describe a number of recent experiments that leverage the size and the diversity of the WebBase. First, we have largely automated the process of extracting a sizable relation of books (title, author pairs) from hundreds of data sources spread across the World Wide Web using a technique we call Dual Iterative Pattern Relation Extraction. Second, we have developed a global ranking of Web pages called PageRank based on the link structure of the Web that has properties that are useful for search and navigation. Third, we have used PageRank to develop a novel search engine called Google, which also makes heavy use of anchor text. All of these experiments rely significantly on the size and diversity of the WebBase. 1' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe PageRank Citation Ranking: Bringing Order to the Web The importance of a Web page is an inherently subjective matter, which depends on the readers interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. 
Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nVisual Ranking of Link Structures (Extended Abstract) Methods for ranking World Wide Web resources according to their position in the link structure of the Web are receiving considerable attention, because they provide the first effective means for search engines to cope with the explosive growth and diversification of the Web.\n\n2-Hop Neighbour:\nText Database Selection for Longer Queries A metasearch engine is a system that supports unified access to multiple local search engines. One of the main challenges in building a large-scale metasearch engine is to solve the database (search engine) selection problem, which is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. For the database of each search engine, a representative which indicates approximately the contents of the database is created in advance to enable database selection. The representatives of all databases can be integrated into a single representative to make the selection process more scalable. While an integrated representative with high scalability has just been proposed and has been found to be effective for short queries, its effectiveness for longer queries is significantly lower. In the Internet environment, most queries initially submitted by users are short queries. However, it has been found that better search effectiveness can often be achieved when additional terms are added to the initial queries through query expansion or relevance feedback. The resulting queries are usually longer than the initial queries. In this paper, we propose a new method to construct database representatives and to decide which databases to select for longer queries. Experimental results are given to compare the performance of the new method with that of a previous method.\n\n2-Hop Neighbour:\nCreating a Semantic web Interface with Virtual Reality Novel initiatives amongst the Internet community such as Internet2 [1] and Qbone [2] are based on the use of high bandwidth and powerful computers. However the experience amongst the majority of Internet users is light-years from these emerging technologies. We describe the construction of a distributed high performance search engine, utilizing advanced threading techniques on a diskless Linux cluster. The resulting Virtual Reality scene is passed to a standard client machine for viewing. This search engine bridges the gap between the Internet of today, and the Internet of the future. Keywords: Internet Searching, High Performance VRML, Visualization. 1.\n\n2-Hop Neighbour:\nDesign and Implementation of a High-Performance Distributed Web Crawler Broad web search engines as well as many more specialized search tools rely on web crawlers to acquire large collections of pages for indexing and analysis. Such a web crawler may interact with millions of hosts over a period of weeks or months, and thus issues of robustness, flexibility, and manageability are of major importance. 
In addition, I/O performance, network resources, and OS limits must be taken into account in order to achieve high performance at a reasonable cost. In this paper, we describe the design and implementation of a distributed web crawler that runs on a network of workstations. The crawler scales to (at least) several hundred pages per second, is resilient against system crashes and other events, and can be adapted to various crawling applications. We present the software architecture of the system, discuss the performance bottlenecks, and describe efficient techniques for achieving high performance. We also report preliminary experimental results based on a crawl of million pages on million hosts. Work supported by NSF CAREER Award NSF CCR-0093400, Intel Corporation, and the New York State Center for Advanced Technology in Telecommunications (CATT) at Polytechnic University, and by equipment grants from Intel Corporation and Sun Microsystems. 1 1\n\n2-Hop Neighbour:\nKeyword Searching and Browsing in Databases using BANKS With the growth of the Web, there has been a rapid increase in the number of users who need to access online databases without having a detailed knowledge of the schema or of query languages; even relatively simple query languages designed for non-experts are too complicated for them. We describe BANKS, a system which enables keyword-based search on relational databases, together with data and schema browsing. BANKS enables users to extract information in a simple manner without any knowledge of the schema or any need for writing complex queries. A user can get information by typing a few keywords, following hyperlinks, and interacting with controls on the displayed results. BANKS models tuples as nodes in a graph, connected by links induced by foreign key and other relationships. Answers to a query are modeled as rooted trees connecting tuples that match individual keywords in the query. Answers are ranked using a notion of proximity coupled with a notion of prestige of nodes based on inlinks, similar to techniques developed for Web search. We present an efficient heuristic algorithm for finding and ranking query results. 1.\n\n2-Hop Neighbour:\nEvaluating Strategies for Similarity Search on the Web Finding pages on the Web that are similar to a query page (Related Pages) is an important component of modern search engines. A variety of strategies have been proposed for answering Related Pages queries, but comparative evaluation by user studies is expensive, especially when large strategy spaces must be searched (e.g., when tuning parameters). We present a technique for automatically evaluating strategies using Web hierarchies, such as Open Directory, in place of user feedback. We apply this evaluation methodology to a mix of document representation strategies, including the use of text, anchor-text, and links. We discuss the relative advantages and disadvantages of the various approaches examined. Finally, we describe how to efficiently construct a similarity index out of our chosen strategies, and provide sample results from our index.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
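The abstracts in the example above describe PageRank informally as the score a random surfer would accumulate by following links indefinitely, with occasional random jumps. The sketch below is a minimal power-iteration version of that idea, not Google's production implementation; the toy link graph, the 0.85 damping factor, and the uniform treatment of dangling pages are assumptions made for the illustration.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power iteration for random-surfer scores.

    `links` maps each page to the list of pages it links to; every linked page
    is assumed to appear as a key. Dangling pages (no outlinks) redistribute
    their mass uniformly so that the scores keep summing to 1.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        dangling = sum(rank[p] for p in pages if not links[p])
        new = {p: (1.0 - damping) / n + damping * dangling / n for p in pages}
        for p in pages:
            for q in links[p]:
                new[q] += damping * rank[p] / len(links[p])
        rank = new
    return rank

# Toy link graph: B and C both point to A, A points back to C, D is dangling.
toy_web = {"A": ["C"], "B": ["A"], "C": ["A"], "D": []}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```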
[ { "content": "Classify the node 'The Application Of Object- Oriented Analysis To Agent Based Systems Agents are important software abstractions for distributed problem solving and autonomous, pro- active behavior. They have been used in many applications, including manufacturing, enterprise integration, network management, and advanced user interfaces for worldwide web applications. As agent systems become more prevalent, the need arises for software engineering methodologies. The relationships between agents and objects, and the role of object oriented analysis in multiagent system development are discussed here. The approach is illustrated with a case study from discrete parts manufacturing. 1.0 INTRODUCTION Numerous examples can be found of applications of agent based systems to enterprise integration 31 ; concurrent engineering 5 ; and manufacturing 26 . Many agent based systems have also been developed for network management 31 , scheduling 9 , and advanced user interfaces 28 . These systems exhibit significant advances in distributed problem solving and pro-active ...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nA survey of Agent-Oriented Software Engineering Agent-Oriented Software Engineering is the one of the most recent contributions to the field of Software Engineering. It has several benefits compared to existing development approaches, in particular the ability to let agents represent high-level abstractions of active entities in a software system. This paper gives an overview of recent research and industrial applications of both general high-level methodologies and on more specific design methodologies for industry-strength software engineering.\n\n1-Hop Neighbour:\nSupporting Conflict Resolution in Cooperative Design Systems Complex modern-day artifacts are designed cooperatively by groups of experts, each with their own areas of expertise. The interaction of such experts inevitably involves conflict. This paper presents an implemented computational model, based on studies of human cooperative design, for supporting the resolution of such conflicts. This model is based centrally on the insights that general conflict resolution expertise exists separately from domain-level design expertise, and that this expertise can be instantiated in the context of particular conflicts into specific advice for resolving those conflicts. Conflict resolution expertise consists of a taxonomy of design conflict classes in addition to associated general advice suitable for resolving conflicts in these classes. The abstract nature of conflict resolution expertise makes it applicable to a wide variety of design domains. This paper describes this conflict resolution model and provides examples of its operation from an implemente...\n\n1-Hop Neighbour:\nA Java Application Framework for Agent Based Systems Agents are the next significant software abstraction, especially for distributed systems. Agent based systems have been developed in response to the following requirements: - Personalized and customized user interfaces that are pro-active in assisting the user - Adaptable, fault tolerant distributed systems that solve complex problems - Open systems where components come and go and new components are continually added. - Migration and load balancing across platforms, throughout a network. 
- New metaphors, such as negotiation, for solving distributed, multi- disciplinary problems. Agents appear in a wide range of applications, but all agent development to date has been done independently by each development team. This has led to several problems, including i) duplication of effort, ii) inability to satisfy industrial strength requirements for security and scalability, and iii) incompatibility of agent systems. The application framework described in this chapter captures and clarifies th...\n\n2-Hop Neighbour:\nSupporting Conflict Management in Cooperative Design Teams The design of complex artifacts has increasingly become a cooperative process, with the detection and resolution of conflicts between design agents playing a central role. Effective tools for supporting the conflict management process, however, are still lacking. This paper describes a system called DCSS (the Design Collaboration Support System) developed to meet this challenge in design teams with both human and machine-based agents. Every design agent is provided with an \"assistant\" that provides domain-independent conflict detection, classification and resolution expertise. The design agents provide the domainspecific expertise needed to instantiate this general expertise, including the rationale for their actions, as a part of their design activities. DCSS has been used successfully to support the cooperative design of Local Area Networks by human and machine-based designers. This paper includes a description of DCSS's underlying model and implementation, examples of its operation...\n\n2-Hop Neighbour:\nIssues in Agent-Oriented Software Engineering In this paper, I will discuss the conceptual foundation of agent-oriented software development by relating the fundamental elements of the agent-oriented view to those of other, well established programming paradigms, especially the object-oriented approach. Furthermore, I will motivate the concept of autonomy as the basic property of the agent-oriented school and discuss the development history of programming paradigms that lead to this perspective on software systems. The paper will be concluded by an outlook on how the new paradigm can change the way we think about software systems.\n\n2-Hop Neighbour:\nRequirements Interaction Management ion. Requirements may be distinguished based on the abstraction level of their description. A requirement may be further defined by add new details defined in more specialized subrequirements. Through specialization of abstract requirements, or generalization of detailed requirement, a requirement abstraction hierarchy can be defined. . Development p roperties . Requirements may be distinguished based on their development properties. For example, a requirement may have just been proposed. Late r, it may be accepted or rejected. . Representational properties. Requirements may be distinguished based on their representation. A requirement may begin as an informal sketch, then become a natural language sentence (e.g., \"The system shall ...\"). Finall y, more formal representations, such as UML, Z, or predicate cal- Requirements Interaction Management - Definition and scope 6 1999 William N. Robinson Requirements Interaction Management GSU CIS 99-7 culus, may be used to express a requir...\n\n2-Hop Neighbour:\nAgent Communication Languages: The Current Landscape this article--- suggest a paradigm for software development that emphasizes autonomy both at design time and runtime, adaptivity, and cooperation. 
This approach seems appealing in a world of distributed, heterogeneous systems. Languages for communicating agents promise to play the role that natural languages played for their human counterparts. An agent communication language that allows agents to interact while hiding the details of their internal workings will result in agent communities that tackle problems no individual agent could.\n\n2-Hop Neighbour:\nA Methodology for Agent-Oriented Analysis and Design . This article presents Gaia: a methodology for agent-oriented analysis and design. The Gaia methodology is both general, in that it is applicable to a wide range of multi-agent systems, and comprehensive, in that it deals with both the macro-level (societal) and the micro-level (agent) aspects of systems. Gaia is founded on the view of a multi-agent system as a computational organisation consisting of various interacting roles. We illustrate Gaia through a case study (an agent-based business process management system). 1. Introduction Progress in software engineering over the past two decades has been made through the development of increasingly powerful and natural high-level abstractions with which to model and develop complex systems. Procedural abstraction, abstract data types, and, most recently, objects and components are all examples of such abstractions. It is our belief that agents represent a similar advance in abstraction: they may be used by software developers to more n...", "role": "user" }, { "content": "<answer>Human-Computer Interaction</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'PicASHOW: Pictorial Authority Search by Hyperlinks on the Web We describe PicASHOW, a fully automated WWW image retrieval system that is based on several link-structure analyzing algorithms. Our basic premise is that a page # displays (or links to) an image when the author of # considers the image to be of value to the viewers of the page. Wethus extend some well known link-based WWW #### ######### schemes to the context of image retrieval. PicASHOW's analysis of the link structure enables it to retrieve relevant images even when those are stored in les with meaningless names. The same analysis also allows it to identify ##### ########## and ##### ####. We dene these as Web pages that are rich in relevant images, or from which many images are readily accessible. PicASHOW requires no image analysis whatsoever and no creation of taxonomies for pre-classication of the Web's images. It can be implemented by standard WWW search engines with reasonable overhead, in terms of both computations and storage, and with no change to user query formats. It can thus be used to easily add image retrieving capabilities to standard search engines. Our results demonstrate that PicASHOW, while relying almost exclusively on link analysis, compares well with dedicated WWW image retrieval systems. We conclude that link analysis, a bona-de eective technique for Web page search, can improve the performance of Web image retrieval, as well as extend its denition to include the retrieval of image hubs and containers. Keywords Image Retrieval; Link Structure Analysis; Hubs and Authorities; Image Hubs. 1.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nAutomatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. We also provide examples of ARC resource lists for the reader to examine.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. 
Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nFinding Related Pages in the World Wide Web When using traditional search engines, users have to formulate queries to describe their information need. This paper discusses a different approach toweb searching where the input to the search process is not a set of query terms, but instead is the URL of a page, and the output is a set of related web pages. A related web page is one that addresses the same topic as the original page. For example, www.washingtonpost.com is a page related to www.nytimes.com, since both are online newspapers. We describe two algorithms to identify related web pages. These algorithms use only the connectivity information in the web (i.e., the links between pages) and not the content of pages or usage information. We haveimplemented both algorithms and measured their runtime performance. To evaluate the e ectiveness of our algorithms, we performed a user study comparing our algorithms with Netscape's \\What's Related \" service [12]. Our study showed that the precision at 10 for our two algorithms are 73 % better and 51 % better than that of Netscape, despite the fact that Netscape uses both content and usage pattern information in addition to connectivity information.\n\n2-Hop Neighbour:\nThe Overview of Web Search Engines The World Wide Web allows people to share information globally. The amount of information grows without bound. In order to extract information that we are interested in, we need a tool to search the Web. The tool is called a search engine. This survey covers different components of the search engine and how the search engine really works. It provides a background understanding of information retrieval. It discusses different search engines that are commercially available. It investigates how the search engines find information in the Web and how they rank its pages to the give n query. Also, the paper provides guidelines for users on how to use search engines.\n\n2-Hop Neighbour:\nA Query Paradigm to Discover the Relation between Text and Images This paper studies the relation between images and text in image databases. An analysis of this relation results in the definition of three distinct query modalities: (1) linguistic scenario: images are part of a whole including a self-contained linguistic discourse, and their meaning derives from their interaction with the linguistic discourse. A typical case of this scenario is constituted by images on the World Wide Web; (2) closed world scenario: images are defined in a limited domain, and their meaning is anchored by conventions and norms in that domain. 
(3) user scenario: the linguistic discourse is provided by the user. This is the case of highly interactive systems with relevance feedback. This paper deals with image databases of the first type. It shows how the relation between images (or parts of images) and text can be inferred, and exploited for search. The paper develops a similarity model in which the similarity between two images is given by both their visual similarity...\n\n2-Hop Neighbour:\nExploiting Redundancy in Question Answering Our goal is to automatically answer brief factual questions of the form \"When was the Battle of Hastings?\" or \"Who wrote The Wind in the Willows?\". Since the answer to nearly any such question can now be found somewhere on the Web, the problem reduces to finding potential answers in large volumes of data and validating their accuracy. We apply a method for arbitrary passage retrieval to the first half of the problem and demonstrate that answer redundancy can be used to address the second half. The success of our approach depends on the idea that the volume of available Web data is large enough to supply the answer to most factual questions multiple times and in multiple contexts. A query is generated from a question and this query is used to select short passages that may contain the answer from a large collection of Web data. These passages are analyzed to identify candidate answers. The frequency of these candidates within the passages is used to \"vote\" for the most likely answer. The ...\n\n2-Hop Neighbour:\nFrom Resource Discovery to Knowledge Discovery on the Internet More than 50 years ago, at a time when modern computers didn't exist yet, Vannevar Bush wrote about a multimedia digital library containing human collective knowledge and filled with \"trails\" linking materials of the same topic. At the end of World War II, Vannevar urged scientists to build such a knowledge store and make it useful, continuously extendable and more importantly, accessible for consultation. Today, the closest to the materialization of Vannevar's dream is the World-Wide Web hypertext and multimedia document collection. However, the ease of use and accessibility of the knowledge described by Vannevar is yet to be realized. Since the 60s, extensive research has been accomplished in the information retrieval field, and free-text search was finally adopted by many text repository systems in the late 80s. The advent of the World-Wide Web in the 90s helped text search become routine as millions of users use search engines daily to pinpoint resources on the Internet. However, r...\n\n2-Hop Neighbour:\nToward Learning Based Web Query Processing In this paper, we describe a novel Web query processing approach with learning capabilities. Under this approach, user queries are in the form of keywords and search engines are employed to find URLs of Web sites that might contain the required information. The first few URLs are presented to the user for browsing. Meanwhile, the query processor learns both the information required by the user and the way that the user navigates through hyperlinks to locate such information. With the learned knowledge, it processes the rest URLs and produces precise query results in the form of segments of Web pages without user involvement. The preliminary experimental results indicate that the approach can process a range of Web queries with satisfactory performance. 
The architecture of such a query processor, techniques of modeling HTML pages, and knowledge for query processing are discussed. Experiments on the effectiveness of the approach, the required knowledge, and the training strategies are presented.", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
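The PicASHOW example above builds on link-based ranking schemes in the hubs-and-authorities family (its keyword list names "Hubs and Authorities" explicitly), but the prompt gives no algorithmic detail. The following is a generic hubs/authorities iteration on a toy graph, offered only as background for that family of methods; the graph, page names, and iteration count are invented, and this is not PicASHOW's actual procedure.

```python
import math

def hits(links, iterations=30):
    """Iterative hub/authority scoring on a directed link graph.

    `links` maps a page to the pages it links to. Scores are L2-normalised
    each round, as in the standard hubs-and-authorities formulation.
    """
    pages = set(links) | {q for qs in links.values() for q in qs}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # Authority score: sum of hub scores of the pages linking to it.
        auth = {p: sum(hub[q] for q in pages if p in links.get(q, ())) for p in pages}
        norm = math.sqrt(sum(v * v for v in auth.values())) or 1.0
        auth = {p: v / norm for p, v in auth.items()}
        # Hub score: sum of authority scores of the pages it links to.
        hub = {p: sum(auth[q] for q in links.get(p, ())) for p in pages}
        norm = math.sqrt(sum(v * v for v in hub.values())) or 1.0
        hub = {p: v / norm for p, v in hub.items()}
    return hub, auth

toy = {"page1": ["img_gallery", "img_archive"], "page2": ["img_gallery"], "img_gallery": []}
hubs, auths = hits(toy)
print("top authority:", max(auths, key=auths.get))
print("top hub:", max(hubs, key=hubs.get))
```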
[ { "content": "Classify the node 'Probabilistic Deduction with Conditional Constraints over Basic Events We study the problem of probabilistic deduction with conditional constraints over basic events. We show that globally complete probabilistic deduction with conditional constraints over basic events is NP-hard. We then concentrate on the special case of probabilistic deduction in conditional constraint trees. We elaborate very efficient techniques for globally complete probabilistic deduction. In detail, for conditional constraint trees with point probabilities, we present a local approach to globally complete probabilistic deduction, which runs in linear time in the size of the conditional constraint trees. For conditional constraint trees with interval probabilities, we show that globally complete probabilistic deduction can be done in a global approach by solving nonlinear programs. We show how these nonlinear programs can be transformed into equivalent linear programs, which are solvable in polynomial time in the size of the conditional constraint trees. 1. Introduction Dealing wit...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nProbabilistic Logic Programming with Conditional Constraints . We introduce a new approach to probabilistic logic programming in which probabilities are defined over a set of possible worlds. More precisely, classical program clauses are extended by a subinterval of [0; 1] that describes a range for the conditional probability of the head of a clause given its body. We then analyze the complexity of selected probabilistic logic programming tasks. It turns out that probabilistic logic programming is computationally more complex than classical logic programming. More precisely, the tractability of special cases of classical logic programming generally does not carry over to the corresponding special cases of probabilistic logic programming. Moreover, we also draw a precise picture of the complexity of deciding and computing tight logical consequences in probabilistic reasoning with conditional constraints in general. We then present linear optimization techniques for deciding satisfiability and computing tight logical consequences of probabilistic...\n\n1-Hop Neighbour:\nA Data Model and Algebra for Probabilistic Complex Values . We present a probabilistic data model for complex values. More precisely, we introduce probabilistic complex value relations, which combine the concept of probabilistic relations with the idea of complex values in a uniform framework. We elaborate a model-theoretic definition of probabilistic combination strategies, which has a rigorous foundation on probability theory. We then define an algebra for querying database instances, which comprises the operations of selection, projection, renaming, join, Cartesian product, union, intersection, and difference. We prove that our data model and algebra for probabilistic complex values generalizes the classical relational data model and algebra. Moreover, we show that under certain assumptions, all our algebraic operations are tractable. We finally show that most of the query equivalences of classical relational algebra carry over to our algebra on probabilistic complex value relations. 
Hence, query optimization techniques for class...\n\n1-Hop Neighbour:\nProbabilistic Default Reasoning with Conditional Constraints We present an approach to reasoning from statistical and subjective knowledge, which is based on a combination of probabilistic reasoning from conditional constraints with approaches to default reasoning from conditional knowledge bases. More precisely, we introduce the notions of -, lexicographic, and conditional entailment for conditional constraints, which are probabilistic generalizations of Pearl's entailment in system , Lehmann's lexicographic entailment, and Geffner's conditional entailment, respectively. We show that the new formalisms have nice properties. In particular, they show a similar behavior as referenceclass reasoning in a number of uncontroversial examples. The new formalisms, however, also avoid many drawbacks of reference-class reasoning. More precisely, they can handle complex scenarios and even purely probabilistic subjective knowledge as input. Moreover, conclusions are drawn in a global way from all the available knowledge as a whole. We then show that the new formalisms also have nice general nonmonotonic properties. In detail, the new notions of -, lexicographic, and conditional entailment have similar properties as their classical counterparts. In particular, they all satisfy the rationality postulates proposed by Kraus, Lehmann, and Magidor, and they have some general irrelevance and direct inference properties. Moreover, the new notions of - and lexicographic entailment satisfy the property of rational monotonicity. Furthermore, the new notions of -, lexicographic, and conditional entailment are proper generalizations of both their classical counterparts and the classical notion of logical entailment for conditional constraints. Finally, we provide algorithms for reasoning under the new formalisms, and we analyze its computational com...\n\n2-Hop Neighbour:\nProbabilistic Deduction with Conditional Constraints over Basic Events We study the problem of probabilistic deduction with conditional constraints over basic events. We show that globally complete probabilistic deduction with conditional constraints over basic events is NP-hard. We then concentrate on the special case of probabilistic deduction in conditional constraint trees. We elaborate very efficient techniques for globally complete probabilistic deduction. In detail, for conditional constraint trees with point probabilities, we present a local approach to globally complete probabilistic deduction, which runs in linear time in the size of the conditional constraint trees. For conditional constraint trees with interval probabilities, we show that globally complete probabilistic deduction can be done in a global approach by solving nonlinear programs. We show how these nonlinear programs can be transformed into equivalent linear programs, which are solvable in polynomial time in the size of the conditional constraint trees. 1. Introduction Dealing wit...\n\n2-Hop Neighbour:\nComplexity and Expressive Power of Logic Programming . This paper surveys various complexity and expressiveness results on different forms of logic programming. The main focus is on decidable forms of logic programming, in particular, propositional logic programming and datalog, but we also mention general logic programming with function symbols. 
Next to classical results on plain logic programming (pure Horn clause programs), more recent results on various important extensions of logic programming are surveyed. These include logic programming with different forms of negation, disjunctive logic programming, logic programming with equality, and constraint logic programming. 1 Computing Science Department, Uppsala University, Box 311, S 751 05 Uppsala, Sweden. Email: [email protected] 2 Institut und Ludwig Wittgenstein Labor fur Informationssysteme, Technische Universitat Wien, Treitlstraße 3, A-1040 Wien, Austria. E-mail: [email protected] 3 Institut und Ludwig Wittgenstein Labor fur Informationssysteme, Technische Universitat ...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
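The abstracts in the example above repeatedly state that tight logical consequences of interval-valued conditional constraints can be computed by linear programming over possible worlds. The sketch below shows that encoding for two basic events A and B, using SciPy's linprog; the particular intervals for P(A) and P(B|A) and the query P(B) are made-up inputs, and encoding P(B|A) in [l, u] as l·P(A) <= P(A and B) <= u·P(A) is the standard linearisation rather than anything specific to the cited papers.

```python
# pip install scipy
import numpy as np
from scipy.optimize import linprog

# Possible worlds over two basic events A and B: (A, B) in {0,1}^2.
worlds = [(a, b) for a in (0, 1) for b in (0, 1)]

def indicator(event):
    """Row vector selecting the worlds where `event(a, b)` holds."""
    return np.array([1.0 if event(a, b) else 0.0 for a, b in worlds])

p_A = indicator(lambda a, b: a == 1)
p_AB = indicator(lambda a, b: a == 1 and b == 1)
p_B = indicator(lambda a, b: b == 1)

# Knowledge base: P(A) in [0.6, 0.8] and P(B|A) in [0.7, 0.9], the latter
# written linearly as 0.7 * P(A) <= P(A and B) <= 0.9 * P(A).
A_ub = np.vstack([
    -p_A, p_A,              # 0.6 <= P(A) <= 0.8
    0.7 * p_A - p_AB,       # 0.7 P(A) - P(A and B) <= 0
    p_AB - 0.9 * p_A,       # P(A and B) - 0.9 P(A) <= 0
])
b_ub = np.array([-0.6, 0.8, 0.0, 0.0])
A_eq = np.ones((1, len(worlds)))    # world probabilities sum to 1
b_eq = np.array([1.0])

def tight_bound(sign):
    """sign=+1 minimises P(B) over all satisfying distributions, sign=-1 maximises it."""
    res = linprog(sign * p_B, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * len(worlds))
    return sign * res.fun

print(f"P(B) is entailed to lie in [{tight_bound(+1):.2f}, {tight_bound(-1):.2f}]")
```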
[ { "content": "Classify the node 'The Intelligent Surfer: Probabilistic Combination of Link and Content Information in PageRank The PageRank algorithm, used in the Google search engine, greatly improves the results of Web search by taking into account the link structure of the Web. PageRank assigns to a page a score proportional to the number of times a random surfer would visit that page, if it surfed indefinitely from page to page, following all outlinks from a page with equal probability. We propose to improve PageRank by using a more intelligent surfer, one that is guided by a probabilistic model of the relevance of a page to a query. Efficient execution of our algorithm at query time is made possible by precomputing at crawl time (and thus once for all queries) the necessary terms. Experiments on two large subsets of the Web indicate that our algorithm significantly outperforms PageRank in the (human -rated) quality of the pages returned, while remaining efficient enough to be used in today's large search engines.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nAutomatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. We also provide examples of ARC resource lists for the reader to examine.\n\n1-Hop Neighbour:\nAuthoritative Sources in a Hyperlinked Environment The link structure of a hypermedia environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. Versions of this principle have been studied in the hypertext research community and (in a context predating hypermedia) through journal citation analysis in the field of bibliometrics. But for the problem of searching in hyperlinked environments such as the World Wide Web, it is clear from the prevalent techniques that the information inherent in the links has yet to be fully exploited. In this work we develop a new method for automatically extracting certain types of information about a hypermedia environment from its link structure, and we report on experiments that demonstrate its effectiveness for a variety of search problems on the www. The central problem we consider is that of determining the relative \"authority\" of pages in such environments. This issue is central to a number of basic hypertext search t...\n\n2-Hop Neighbour:\nC4-1: Building a community hierarchy for the Web based on Bipartite Graphs In this paper we propose an approach to extract and relate the communities by considering a community signature as a group of content creators that manifests itself as a set of interlinked pages. 
We abstract a community signature as a group of pages that form a dense bipartite graph (DBG), and proposed an algorithm to extract the DBGs from the given data set. Also, using the proposed approach, the extracted communities can be grouped to form a high-level communities. We apply the proposed algorithm on 10 GB TREC (Text REtrieval Conference) data set and extract a three-level community hierarchy. The extracted community hierarchy facilitates an easy analysis of low-level communities and provides a way to understand the sociology of the Web.\n\n2-Hop Neighbour:\nDiscovering Seeds of New Interest Spread from Premature Pages Cited by Multiple Communities The World Wide Web is a great source of new topics significant for trend birth and creation. In this paper, we propose a method for discovering topics, which stimulate communities of people into earnest communications on the topics' meaning, and grow into a trend of popular interest. Here, the obtained are web pages which absorb attentions of people from multiple interest-communities. It is shown by a experiments to a small group of people, that topics in such pages can trigger the growth of peoples' interests, beyond the bounds of existing communities.\n\n2-Hop Neighbour:\nExploration versus Exploitation in Topic Driven Crawlers Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers. The context available to a topic driven crawler allows for informed decisions about how to prioritize the links to be explored, given time and bandwidth constraints. We have developed a framework and a number of methods to evaluate the performance of topic driven crawler algorithms in a fair way, under limited memory resources. Quality metrics are derived from lexical features, link analysis, and a hybrid combination of the two. In this paper we focus on the issue of how greedy a crawler should be. Given noisy quality estimates of links in a frontier, we investigate what is an appropriate balance between a crawler's need to exploit this information to focus on the most promising links, and the need to explore links that appear suboptimal but might lead to more relevant pages. We show that exploration is essential to locate the most relevant pages under a number of quality measures, in spite of a penalty in the early stage of the crawl.\n\n2-Hop Neighbour:\nTopic-Driven Crawlers: Machine Learning Issues Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers.\n\n2-Hop Neighbour:\nSimRank: A Measure of Structural-Context Similarity The problem of measuring \"similarity\" of objects arises in many applications, and many domain-specific measures have been developed, e.g., matching text across documents or computing overlap among item-sets. We propose a complementary approach, applicable in any domain with object-to-object relationships, that measures similarity of the structural context in which objects occur, based on their relationships with other objects. Effectively, we compute a measure that says \"two objects are similar if they are related to similar objects.\" This general similarity measure, called SimRank, is based on a simple and intuitive graph-theoretic model. For a given domain, SimRank can be combined with other domain-specific similarity measures. 
We suggest techniques for efficient computation of SimRank scores, and provide experimental results on two application domains showing the computational feasibility and effectiveness of our approach.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
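The SimRank abstract quoted above gives only the intuition "two objects are similar if they are related to similar objects." The sketch below implements the usual iterative form of that definition on a toy citation graph; the decay constant C = 0.8, the graph, and the node names are assumptions for illustration, not data from the paper.

```python
def simrank(in_links, C=0.8, iterations=10):
    """Iterative SimRank on a directed graph given as in-neighbour lists.

    s(a, a) = 1; s(a, b) averages the similarity of a's and b's in-neighbours,
    scaled by the decay constant C; nodes without in-neighbours score 0.
    """
    nodes = list(in_links)
    sim = {(a, b): 1.0 if a == b else 0.0 for a in nodes for b in nodes}
    for _ in range(iterations):
        new = {}
        for a in nodes:
            for b in nodes:
                if a == b:
                    new[(a, b)] = 1.0
                    continue
                ins_a, ins_b = in_links[a], in_links[b]
                if not ins_a or not ins_b:
                    new[(a, b)] = 0.0
                    continue
                total = sum(sim[(i, j)] for i in ins_a for j in ins_b)
                new[(a, b)] = C * total / (len(ins_a) * len(ins_b))
        sim = new
    return sim

# Toy citation graph: both surveys are cited by the same two papers.
in_links = {
    "survey1": ["paper1", "paper2"],
    "survey2": ["paper1", "paper2"],
    "paper1": [],
    "paper2": [],
}
scores = simrank(in_links)
print(f"sim(survey1, survey2) = {scores[('survey1', 'survey2')]:.3f}")
```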
[ { "content": "Classify the node 'Exploration versus Exploitation in Topic Driven Crawlers Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers. The context available to a topic driven crawler allows for informed decisions about how to prioritize the links to be explored, given time and bandwidth constraints. We have developed a framework and a number of methods to evaluate the performance of topic driven crawler algorithms in a fair way, under limited memory resources. Quality metrics are derived from lexical features, link analysis, and a hybrid combination of the two. In this paper we focus on the issue of how greedy a crawler should be. Given noisy quality estimates of links in a frontier, we investigate what is an appropriate balance between a crawler's need to exploit this information to focus on the most promising links, and the need to explore links that appear suboptimal but might lead to more relevant pages. We show that exploration is essential to locate the most relevant pages under a number of quality measures, in spite of a penalty in the early stage of the crawl.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nExploiting Geographical Location Information of Web Pages Many information resources on the web are relevant primarily to limited geographical communities. For instance, web sites containing information on restaurants, theaters, and apartment rentals are relevant primarily to web users in geographical proximity to these locations. In contrast, other information resources are relevant to a broader geographical community. 
For instance, an on-line newspaper may be relevant to users across the United States. Unfortunately, the geographical scope of web resources is largely ignored by web search engines. We make the case for identifying and exploiting the geographical location information of web sites so that web search engines can rank resources in a geographically sensitive fashion, in addition to using more traditional information-retrieval strategies. In this paper, we first consider how to compute the geographical location of web pages. Subsequently, we consider how to exploit such information in one specific \"proof-of-concept\" appl...\n\n2-Hop Neighbour:\nMining the Link Structure of the World Wide Web The World Wide Web contains an enormous amount of information, but it can be exceedingly difficult for users to locate resources that are both high in quality and relevant to their information needs. We develop algorithms that exploit the hyperlink structure of the WWW for information discovery and categorization, the construction of high-quality resource lists, and the analysis of on-line hyperlinked communities. 1 Introduction The World Wide Web contains an enormous amount of information, but it can be exceedingly difficult for users to locate resources that are both high in quality and relevant to their information needs. There are a number of fundamental reasons for this. The Web is a hypertext corpus of enormous size --- approximately three hundred million Web pages as of this writing --- and it continues to grow at a phenomenal rate. But the variation in pages is even worse than the raw scale of the data: the set of Web pages taken as a whole has almost no unifying structure, wi...\n\n2-Hop Neighbour:\nContext in Web Search Web search engines generally treat search requests in isolation. The results for a given query are identical, independent of the user, or the context in which the user made the request. Nextgeneration search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts. Greater use of context in web search may help increase competition and diversity on the web.\n\n2-Hop Neighbour:\nBackground Readings for Collection Synthesis\n\n2-Hop Neighbour:\nWeb Interaction and the Navigation Problem in Hypertext written for Encyclopedia of Microcomputers The web has become a ubiquitous tool, used in day-to-day work, to find information and conduct business, and it is revolutionising the role and availability of information. One of the problems encountered in web interaction, which is still unsolved, is the navigation problem, whereby users can \"get lost in hyperspace\", meaning that when following a sequence of links, i.e. a trail of information, users tend to become disoriented in terms of the goal of their original query and the relevance to the query of the information they are currently browsing. Herein we build statistical foundations for tackling the navigation problem based on a formal model of the web in terms of a probabilistic automaton, which can also be viewed as a finite ergodic Markov chain. In our model of the web the probabilities attached to state transitions have two interpretations, namely, they can denote the proportion of times a user followed a link, and alternatively they can denote the expected utility of following a link. 
Using this approach we have developed two techniques for constructing a web view based on the two interpretations of the probabilities of links, where a web view is a collection of relevant trails. The first method we describe is concerned with finding frequent user behaviour patterns. A collection of trails is taken as input and an ergodic Markov chain is produced as output with the probabilities of transitions corresponding to the frequency the user traversed the associated links. The second method we describe is a reinforcement learning algorithm that attaches higher probabilities to links whose expected trail relevance is higher. The user's home page and a query are taken as input and an ergodic Markov chain is produced as output with the probabilities of...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
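The node in the example above frames topic-driven crawling as a balance between exploiting the best-scoring frontier links and exploring apparently suboptimal ones. The toy simulation below makes that trade-off concrete with an epsilon-greedy frontier: with probability epsilon a random frontier link is followed instead of the best one. The link graph, relevance scores, crawl budget, and epsilon values are all synthetic; this is not the evaluation framework from the paper.

```python
import heapq
import random

def crawl(graph, relevance, seed, budget, epsilon):
    """Best-first crawl that follows a random frontier link with probability epsilon.

    `graph` maps page -> outlinks; `relevance` holds (noisy) page scores used
    both to rank frontier links and to measure what the crawl collected.
    """
    rng = random.Random(0)
    frontier = [(-relevance[seed], seed)]
    visited, harvested = set(), 0.0
    while frontier and len(visited) < budget:
        if rng.random() < epsilon:                     # explore
            idx = rng.randrange(len(frontier))
            frontier[idx], frontier[-1] = frontier[-1], frontier[idx]
            _, page = frontier.pop()
            heapq.heapify(frontier)
        else:                                          # exploit
            _, page = heapq.heappop(frontier)
        if page in visited:
            continue
        visited.add(page)
        harvested += relevance[page]
        for link in graph.get(page, ()):
            if link not in visited:
                heapq.heappush(frontier, (-relevance[link], link))
    return harvested / len(visited)

graph = {"seed": ["a", "b"], "a": ["c"], "b": ["d"], "c": [], "d": ["e"], "e": []}
relevance = {"seed": 0.9, "a": 0.8, "b": 0.2, "c": 0.1, "d": 0.7, "e": 0.9}
for eps in (0.0, 0.3):
    print(f"epsilon={eps}: mean relevance {crawl(graph, relevance, 'seed', 4, eps):.2f}")
```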
[ { "content": "Classify the node 'Hierarchical Presentation of Expansion Terms Different presentations of candidate expansion terms have not been fully explored in interactive query expansion (IQE). Most existing systems that offer an IQE facility use a list form of presentation. This paper examines an hierarchical presentation of the expansion terms which are automatically generated from a set of retrieved documents, organised in a general to specific manner, and visualised by cascade menus. To evaluate the effectiveness of the presentation, a user test was carried out to compare the hierarchical form with the conventional list form. This shows that users of the hierarchy can complete the expansion task in less time and with fewer terms over those using the lists. Relations between initial query terms and selected expansion terms were also investigated. Keywords Information retrieval, interactive query expansion, concept hierarchies 1.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nGrouper: A Dynamic Clustering Interface to Web Search Results Users of Web search engines are often forced to sift through the long ordered list of document \"snippets\" returned by the engines. The IR community has explored document clustering as an alternative method of organizing retrieval results, but clustering has yet to be deployed on most major search engines. The NorthernLight search engine organizes its output into \"custom folders\" based on pre-computed document labels, but does not reveal how the folders are generated or how well they correspond to users' interests. In this paper, we introduce Grouper -- an interface to the results of the HuskySearch meta-search engine, which dynamically groups the search results into clusters labeled by phrases extracted from the snippets. In addition, we report on the first empirical comparison of user Web search behavior on a standard ranked-list presentation versus a clustered presentation. By analyzing HuskySearch logs, we are able to demonstrate substantial differences in the number of documents f...\n\n1-Hop Neighbour:\nQuery Expansion Using an Interactive Concept Hierarchy Query expansion is the process of supplementing an original query with additional terms in order to refine a search and increase retrieval effectiveness. If the query expansion is interactive, then the user and the system work together to expand the query. The system usually suggests possible expansion terms and the user selects those they wish to add to the query. Studies have shown that interactive query expansion has the potential to improve retrieval effectiveness, but that it rarely succeeds in achieving its potential. It has been shown that users desire some control over the expansion process. In order to achieve this, the functionality of the system must be represented on the user interface in a comprehensible way. The main aim of this study is to focus on a small aspect of the interface and investigate whether the method used to present potential query expansion terms has any effect on retrieval effectiveness. The tool tested in this study automatically generates a hierarchical...\n\n2-Hop Neighbour:\nTopical Locality in the Web Most web pages are linked to others with related content. 
This idea, combined with another that says that text in, and possibly around, HTML anchors describe the pages to which they point, is the foundation for a usable WorldWide Web. In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content to be high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and that anchor text may be a useful discriminator among unseen child pages. These results show the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents.", "role": "user" }, { "content": "<answer>Human-Computer Interaction</answer>", "role": "assistant" } ]
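The interactive-query-expansion example above generates candidate expansion terms automatically from a set of retrieved documents before presenting them (as a list or a hierarchy). The snippet below shows only the first half of that pipeline in the simplest possible form, ranking candidate terms by how many retrieved documents they appear in; the stop-word list, the documents, and the top-k cutoff are invented, and no hierarchy is built.

```python
from collections import Counter

STOPWORDS = {"the", "of", "a", "and", "to", "in", "for", "on", "is", "with"}

def expansion_candidates(query, retrieved_docs, top_k=5):
    """Rank candidate expansion terms by document frequency in the retrieved set,
    excluding stop-words and the original query terms (a flat stand-in for a hierarchy)."""
    query_terms = set(query.lower().split())
    counts = Counter()
    for doc in retrieved_docs:
        terms = {t for t in doc.lower().split()
                 if t not in STOPWORDS and t not in query_terms}
        counts.update(terms)               # document frequency, not raw counts
    return [term for term, _ in counts.most_common(top_k)]

retrieved = [
    "interactive query expansion with relevance feedback",
    "relevance feedback improves retrieval effectiveness",
    "term selection for interactive query expansion",
]
print(expansion_candidates("query expansion", retrieved))
```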
[ { "content": "Classify the node 'Models for Information Integration: Turning Local-as-View Into Global-as-View There are basically two approaches for designing a data integration system. In the global-as-view approach, one defines the concepts in the global schema as views over the sources, whereas in the local-as-view approach, one characterizes the sources as views over the global schema. The goal of this paper is to verify whether we can transform a data integration system built with the local-as-view approach into a system following the global-as-view approach. We study the problem in a setting where the global schema is expressed in the relational model with inclusion dependencies, and the queries used in the integration systems (both the queries on the global schema, and the views in the mapping) are expressed in the language of conjunctive queries. The result we present is that such a transformation exists: we can always transform a local-as-view system into a global-as-view system such that, for each query, the set of answers to the query wrt the former is the same as the set of answers wrt the latter.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nView-based Query Processing and Constraint Satisfaction View-based query processing requires to answer a query posed to a database only on the basis of the information on a set of views, which are again queries over the same database. This problem is relevant in many aspects of database management, and has been addressed by means of two basic approaches, namely, query rewriting and query answering. In the former approach, one tries to compute a rewriting of the query in terms of the views, whereas in the latter, one aims at directly answering the query based on the view extensions. We study view-based query processing for the case of regular-path queries, which are the basic querying mechanisms for the emergent field of semistructured data. Based on recent results, we first show that a rewriting is in general a co-NP function wrt to the size of view extensions. Hence, the problem arises of characterizing which instances of the problem admit a rewriting that is PTIME. A second contribution of the work is to establish a tight connection between view-based query answering and constraint-satisfaction problems, which allows us to show that the above characterization is going to be difficult. As a third contribution of our work, we present two methods for computing PTIME rewritings of specific forms. The first method, which is based on the established connection with constraint-satisfaction problems, gives us rewritings expressed in Datalog with a fixed number of variables. The second method, based on automata-theoretic techniques, gives us rewritings that are formulated as unions of conjunctive regular-path queries with a fixed number of variables.\n\n1-Hop Neighbour:\nOn the Expressive Power of Data Integration Systems There are basically two approaches for designing a data integration system. In the global-as-view (GAV) approach, one maps the concepts in the global schema to views over the sources, whereas in the local-as-view (LAV) approach, one maps the sources into views over the global schema. 
The goal of this paper is to relate the two approaches with respect to their expressive power.\n\n2-Hop Neighbour:\nA Web Odyssey: from Codd to XML INTRODUCTION The Web presents the database area with vast opportunities and commensurate challenges. Databases and the Web are organically connected at many levels. Web sites are increasingly powered by databases. Collections of linked Web pages distributed across the Internet are themselves tempting targets for a database. The emergence of XML as the lingua franca of the Web brings some much needed order and will greatly facilitate the use of database techniques to manage Web information. This paper will discuss some of the developments related to the Web from the viewpoint of database theory. As we shall see, the Web scenario requires revisiting some of the basic assumptions of the area. To be sure, database theory remains as valid as ever in the classical setting, and the database industry will continue to represent a multi-billion dollar target of applicability for the foreseeable future. But the Web represents an opportunity of an entirely different scale. We are th\n\n2-Hop Neighbour:\nLossless Regular Views If the only information we have on a certain database is through a set of views, the question arises of whether this is sufficient to answer completely a given query. We say that the set of views is lossless with respect to the query, if, no matter what the database is, we can answer the query by solely relying on the content of the views. The question of losslessness has various applications, for example in query optimization, mobile computing, data warehousing, and data integration. We study this problem in a context where the database is semistructured, and both the query and the views are expressed as regular path queries. The form of recursion present in this class prevents us from applying known results to our case.", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Background Readings for Collection Synthesis' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nFinding Related Pages in the World Wide Web When using traditional search engines, users have to formulate queries to describe their information need. This paper discusses a different approach toweb searching where the input to the search process is not a set of query terms, but instead is the URL of a page, and the output is a set of related web pages. A related web page is one that addresses the same topic as the original page. For example, www.washingtonpost.com is a page related to www.nytimes.com, since both are online newspapers. We describe two algorithms to identify related web pages. These algorithms use only the connectivity information in the web (i.e., the links between pages) and not the content of pages or usage information. We haveimplemented both algorithms and measured their runtime performance. To evaluate the e ectiveness of our algorithms, we performed a user study comparing our algorithms with Netscape's \\What's Related \" service [12]. Our study showed that the precision at 10 for our two algorithms are 73 % better and 51 % better than that of Netscape, despite the fact that Netscape uses both content and usage pattern information in addition to connectivity information.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nCentroid-Based Document Classification: Analysis & Experimental Results . In this paper we present a simple linear-time centroid-based document classification algorithm, that despite its simplicity and robust performance, has not been extensively studied and analyzed. 
Our experiments show that this centroid-based classifier consistently and substantially outperforms other algorithms such as Naive Bayesian, k-nearest-neighbors, and C4.5, on a wide range of datasets. Our analysis shows that the similarity measure used by the centroidbased scheme allows it to classify a new document based on how closely its behavior matches the behavior of the documents belonging to different classes. This matching allows it to dynamically adjust for classes with different densities and accounts for dependencies between the terms in the different classes. 1 Introduction We have seen a tremendous growth in the volume of online text documents available on the Internet, digital libraries, news sources, and company-wide intranets. It has been forecasted that these docu...\n\n2-Hop Neighbour:\nAnalysis and extraction of useful information across networks of Web databases Contents 1 Introduction 2 2 Problem Statement 2 3 Literature Review 3 3.1 Retrieving Text . . . . . . . . . . . . . . . . . . . . . . . . . . 3 3.2 Understanding Music . . . . . . . . . . . . . . . . . . . . . . . 7 3.3 Identifying Images . . . . . . . . . . . . . . . . . . . . . . . . 9 3.4 Extracting Video . . . . . . . . . . . . . . . . . . . . . . . . . 11 4 Work Completed and in Progress 12 5 Research Plan and Time-line 14 A List of Published Work 15 1 1 INTRODUCTION 2 1 Introduction The World Wide Web of documents on the Internet contains a huge amount of information and resources. It has been growing at a rapid rate for nearly a decade and is now one of the main resources of information for many people. The large interest in the Web is due to the fact that it is uncontrolled and easily accessible, no single person owns it and anyone can add to it. The Web has also brought with it a lot of controversy, also due to the\n\n2-Hop Neighbour:\nCS 395T Large-Scale Data Mining Fall 2001 ojects the vectors x 1 : : : x n onto the principal components. Note that the eigenvectors of ^ are the left singular vectors of the matrix 1 p n 1 [x 1 ; x 2 ; : : : ; x n ]: Thus PCA can be obtained from the SVD of mean centered data. The mean centering is the important dierence between PCA and SVD and can yield qualitatively dierent results for data sets where the mean is not equal to 0, as shown in gure 1. SVD PCA SVD,PCA Figure 1: The leading singular vector may not be in the same direction as the principal component 1 2 CS 395T: Large-Scale Data Mining 2 Clustering Considerations Clustering is the grouping together of similar objects. Usually, the clustering problem is posed as a\n\n2-Hop Neighbour:\nCategorisation by Context Assistance in retrieving of documents on the World Wide Web is provided either by search engines, through keyword based queries, or by catalogues, which organise documents into hierarchical collections. Maintaining catalogues manually is becoming increasingly difficult due to the sheer amount of material, and therefore it will be necessary to resort to techniques for automatic classification of documents. Classification is traditionally performed by extracting information for indexing a document from the document itself. The paper describes the technique of categorisation by context, which exploits the context perceivable from the structure of HTML documents to extract useful information for classifying the documents they refer to. We present the results of experiments with a preliminary implementation of the technique. 1. INTRODUCTION Most Web search engines (e.g. 
Altavista^TM [Altavista], HotBot^TM [HotBot], Excite^TM [Excite]) perform search based on the content of docume...\n\n2-Hop Neighbour:\nDocument Classification as an Internet service: Choosing the best classifier This project investigates some of the issues involved in a new proposal for expanding the scope of the field of Data Mining by providing mining models as services on the Internet. This idea can widely increase the reach and accessibility of Data Mining to common people because one of the primary stumbling blocks in the adoption of mining is the extremely high level of expertise and data resources needed in building a robust mining model. We feel this task should be left to the specialists with access to data and resources, who can provide their most up to date model as a service on the Internet for public use.\n\n2-Hop Neighbour:\nTowards a Highly-Scalable Metasearch Engine The World Wide Web has been expanding in a very fast rate. The coverage of the Web by each of the major search engines has been steadily decreasing despite their effort to index more web pages. Worse yet, as these search engines get larger, higher percentages of their indexed information are becoming obsolete. More and more people are having doubt about the scalability of centralized search engine technology. A more scalable alternative to search the Web is the metasearch engine approach. A metasearch engine can be considered as an interface on top of multiple local search engines to provide uniform access to many local search engines. Database selection is one of the main challenges in building a large-scale metasearch engine. The problem is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. In order to enable accurate selection, metadata that reflect the content of each search engine need to be co...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Recognition of Partially Occluded and/or Imprecisely Localized Faces Using a Probabilistic Approach New face recognition approaches are needed, because although much progress has been recently achieved in the field (e.g. within the eigenspace domain), still many problems are to be robustly solved. Two of these problems are occlusions and the imprecise localization of faces (which ultimately imply a failure in identification) . While little has been done to account for the first problem, almost nothing has been proposed to account for the second. This paper presents a probabilistic approach that attempts to solve both problems while using an eigenspace representation. To resolve the localization problem, we need to find the subspace (within the feature space, e.g. eigenspace) that represents this error for each of the training image. To resolve the occlusion problem, each face is divided into n local regions which are analyzed in isolation. In contrast with other previous approaches, where a simple voting space is used, we present a probabilistic method that analyzes how \"good\" a loca...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe FERET Evaluation Methodology for Face-Recognition Algorithms Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1199 individuals are included in the FERET database, which is divided into development and sequestered portions of the database. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to (1) assess the state of the art, (2) identify future areas of research, and (3) measure algorithm performance. 1 Introduction Over the last decade, face recognition has become an active area of research in computer vision, neuroscience, and psychology. Progress has advanced to the point that face-recognition systems are being demonstrated in real-...\n\n2-Hop Neighbour:\nComputers Seeing People this paper, we present methods that give machines the ability to see people, interpret their actions and interact with them. We present the motivating factors behind this work, examples of how such computational methods are developed and their applications. The basic reason for providing machines the ability to see people really depends on the task we are associating with a machine. An industrial vision system aimed at extracting defects on an assembly line need not know anything about people. Similarly, a computer used for email and text writing need not see and perceive the users gestures and expressions. However, if our interest is to build intelligent machines that can work with us, support our needs and be our helpers, than it maybe required for these machines to know more about who they are supporting and helping. If our computers are to do more then support our text-based needs like writing papers, spreadsheets, and communicating via email; perhaps take on a role of being a personal assistant, then the ability to see a person is essential. 
Such an ability to perceive people is something that we take for granted in our everyday interactions with each other. At present our model of a machine or more specifically of a computer is something that is placed in the corner of the room. It is deaf, dumb, and blind, having no sense of the environment that it is in or of the person that is near it. We communicate with this computer using a coded sequence of tappings on a keyboard. Imagine a computer that knows that you are near it, that you are looking at it, knows who you are and what you are trying to do. Imagine a machine that can interpret a video signal based on who is in the scene and what they are doing. Such abilities in a computer are hard to imagine, unless it has...\n\n2-Hop Neighbour:\nFace Recognition Identifying a human individual from his or her face is one of the most nonintrusive modalities in biometrics. However, it is also one of the most challenging ones. This chapter discusses why it is challenging and the factors that a practitioner can take advantage of in developing a practical face recognition system. Some major existing approaches are discussed along with some algorithmic considerations. A face recognition algorithm is presented as an example along with some experimental data. Some possible future research directions are outlined at the end of the chapter. 1.1 INTRODUCTION Face recognition from images is a sub-area of the general object recognition problem. It is of particular interest in a wide variety of applications. Applications in law enforcement for mugshot identification, verification for personal identification such as driver's licenses and credit cards, gateways to limited access areas, surveillance of crowd behavior are all potential applications of a succes...\n\n2-Hop Neighbour:\nBeyond Eigenfaces: Probabilistic Matching for Face Recognition We propose a novel technique for direct visual matching of images for the purposes of face recognition and database search. Specifically, we argue in favor of a probabilistic measure of similarity, in contrast to simpler methods which are based on standard L2 norms (e.g., template matching) or subspace-restricted norms (e.g., eigenspace matching). The proposed similarity measure is based on a Bayesian analysis of image differences: we model two mutually exclusive classes of variation between two facial images: intra-personal (variations in appearance of the same individual, due to different expressions or lighting) and extra-personal (variations in appearance due to a difference in identity). The high-dimensional probability density functions for each respective class are then obtained from training data using an eigenspace density estimation technique and subsequently used to compute a similarity measure based on the a posteriori probability of membership in the intrapersonal class,...", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Text and Image Metasearch on the Web As the Web continues to increase in size, the relative coverage of Web search engines is decreasing, and search tools that combine the results of multiple search engines are becoming more valuable. This paper provides details of the text and image metasearch functions of the Inquirus search engine developed at the NEC Research Institute. For text metasearch, we describe features including the use of link information in metasearch, and provide statistics on the usage and performance of Inquirus and the Web search engines. For image metasearch, Inquirus queries multiple image search engines on the Web, downloads the actual images, and creates image thumbnails for display to the user. Inquirus handles image search engines that return direct links to images, and engines that return links to HTML pages. For the engines that return HTML pages, Inquirus analyzes the text on the pages in order to predict which images are most likely to correspond to the query. The individual image search engin...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nSearching the world wide web The coverage and recency of the major World Wide Web search engines was analyzed, yielding some surprising results. The coverage of any one engine is significantly limited: No single engine indexes more than about one-third of the “indexable Web, ” the coverage of the six engines investigated varies by an order of magnitude, and combining the results of the six engines yields about 3.5 times as many documents on average as compared with the results from only one engine. Analysis of the overlap between pairs of engines gives an estimated lower bound on the size of the indexable Web of 320 million pages. The Internet has grown rapidly since its inception in December 1969 (1) and is anticipated to expand 1000 % over the next few years (2). The amount of scientific information and the number of electronic journals on the Internet continue to increase [about 1000 journals as of 1996 (2, 3)]. The Internet and the World Wide Web (the Web) represent significant advancements for the retrieval and dissemination of scientific and other literature and for the advancement of education (2, 4). With the introduction of full-text search engines such as AltaVista (www.\n\n1-Hop Neighbour:\nAn Information Gathering Agent for Querying Web Search Engines Information gathering agents have attracted much attention of late. As a new application, they are attractive because the need for intelligent assistance in navigating the World Wide Web and large databases is acute. Information agents provide an open-ended and complex, yet easily accessible environment in which ideas from many areas can be integrated. We have developed an information gathering agent called SavvySearch for intelligently searching multiple search engines on the Web. SavvySearch tracks responses from existing search engines to manage resource usage and submit queries only to the most appropriate search engines. To implement SavvySearch, we adapted simple ideas from machine learning, information retrieval and planning and tested two issues in the designs: Can search engine selection knowledge be acquired to improve performance? 
Do users find that high quality results are being returned early within the limited parallelism provided by SavvySearch? Current results indicate ...\n\n1-Hop Neighbour:\nContext and Page Analysis for Improved Web Search NEC Research Institute has developed a metasearch engine that improves the efficiency of Web searches by downloading and analyzing each document and then displaying results that show the query terms in context. Several popular and useful search engines such as AltaVista, Excite, HotBot, Infoseek, Lycos, and Northern Light attempt to maintain full-text indexes of the World Wide Web. However, relying on a single standard search engine has limitations. The standard search engines have limited coverage, 1,2 outdated databases, and are sometimes unavailable due to problems with the network or the engine itself. The precision of standard engine results can also vary because they generally focus on handling queries quickly and use relatively simple ranking schemes. 3 Rankings can be further muddled by keyword spamming to increase a page's rank order. Often, the relevance of a particular page is obvious only after loading it and finding the query terms. Metasearch engines, such as MetaCrawler and SavvySearch, attempt to contend with the problem of limited coverage by submitting queries to several standard search engines at once. 4,5 The primary advantages of metasearch engines are that they combine the results of several search engines and present a consistent user interface. 5 However, most metasearch engines rely on the documents and summaries returned by standard search engines and so inherit their limited precision and vulnerability to keyword spamming. We developed the NEC Research Institute (NECI) metasearch engine [now called Inquirus] to improve the efficiency and precision of Web search by downloading and analyzing each document and then displaying results that show the query terms in\n\n2-Hop Neighbour:\nConfSeek - A Multi-user, Multi-Threaded Specialized Search Engine for Conferences The explosive growth of the World Wide Web - the latest estimates for its size are around 1,000,000,000 web-pages - has made critical the need to nd information more accurately than what the current generic search engines can deliver. This project implements a prototype specialized search engine that allows user to submit queries for conferences in a specic eld of interest, and returns the detailed information about those conferences (deadlines, etc.). It uses multiple existing search engines to provide better coverage of the information available on the Internet. It can interact with multiple users concurrently and makes use of multi-threading to achieve faster information processing. The goal is to make this tool available to the scientic community to provide researchers with improved access to conference information. We discuss possible extensions (e.g. ranking of conferences according to their `quality', trip planning, etc.). Contents 1 Introduction 2 2 Goals of this Project...\n\n2-Hop Neighbour:\nText Database Selection for Longer Queries A metasearch engine is a system that supports unified access to multiple local search engines. One of the main challenges in building a large-scale metasearch engine is to solve the database (search engine) selection problem, which is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. 
For the database of each search engine, a representative which indicates approximately the contents of the database is created in advance to enable database selection. The representatives of all databases can be integrated into a single representative to make the selection process more scalable. While an integrated representative with high scalability has just been proposed and has been found to be effective for short queries, its effectiveness for longer queries is significantly lower. In the Internet environment, most queries initially submitted by users are short queries. However, it has been found that better search effectiveness can often be achieved when additional terms are added to the initial queries through query expansion or relevance feedback. The resulting queries are usually longer than the initial queries. In this paper, we propose a new method to construct database representatives and to decide which databases to select for longer queries. Experimental results are given to compare the performance of the new method with that of a previous method.\n\n2-Hop Neighbour:\nContext in Web Search Web search engines generally treat search requests in isolation. The results for a given query are identical, independent of the user, or the context in which the user made the request. Nextgeneration search engines will make increasing use of context information, either by using explicit or implicit context information from users, or by implementing additional functionality within restricted contexts. Greater use of context in web search may help increase competition and diversity on the web.\n\n2-Hop Neighbour:\nAn Overview of World Wide Web Search Technologies With over 800 million pages covering most areas of human endeavor, the World Wide Web is fertile ground for information retrieval. Numerous search technologies have been applied to Web searches, and the dominant search method has yet to be identified. This chapter provides an overview of existing Web search technologies and classifies them into six categories: (i) hyperlink exploration, (ii) information retrieval, (iii) metasearches, (iv) SQL approaches, (v) content-based multimedia searches, and (vi) others. A comparative study of some major commercial and experimental search services is presented, and some future research directions for Web searches are suggested. Keywords: Survey, World Wide Web, Searches, Search Engines, and Information Retrieval. 1.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Robust Classification Systems for Imprecise Environments In real-world environments it is usually difficult to specify target operating conditions precisely. This uncertainty makes building robust classification systems problematic. We show that it is possible to build a hybrid classifier that will perform at least as well as the best available classifier for any target conditions. This robust performance extends across a wide variety of comparison frameworks, including the optimization of metrics such as accuracy, expected cost, lift, precision, recall, and workforce utilization. In some cases, the performance of the hybrid can actually surpass that of the best known classifier. The hybrid is also efficient to build, to store, and to update. Finally, we provide empirical evidence that a robust hybrid classifier is needed for many real-world problems. Introduction Traditionally, classification systems have been built by experimenting with many different classifiers, comparing their performance and choosing the classifier that performs best....' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nMeta-Learning in Distributed Data Mining Systems: Issues and Approaches Data mining systems aim to discover patterns and extract useful information from facts recorded in databases. A widely adopted approach to this objective is to apply various machine learning algorithms to compute descriptive models of the available data. Here, we explore one of the main challenges in this research area, the development of techniques that scale up to large and possibly physically distributed databases. Meta-learning is a technique that seeks to compute higher-level classifiers (or classification models), called meta-classifiers, that integrate in some principled fashion multiple classifiers computed separately over different databases. This study, describes meta-learning and presents the JAM system (Java Agents for Meta-learning), an agent-based meta-learning system for large-scale data mining applications. Specifically, it identifies and addresses several important desiderata for distributed data mining systems that stem from their additional complexity co...\n\n1-Hop Neighbour:\nThe Case Against Accuracy Estimation for Comparing Induction Algorithms We analyze critically the use of classification accuracy to compare classifiers on natural data sets, providing a thorough investigation using ROC analysis, standard machine learning algorithms, and standard benchmark data sets. The results raise serious concerns about the use of accuracy for comparing classifiers and draw into question the conclusions that can be drawn from such studies. In the course of the presentation, we describe and demonstrate what we believe to be the proper use of ROC analysis for comparative studies in machine learning research. We argue that this methodology is preferable both for making practical choices and for drawing scientific conclusions. 1 INTRODUCTION Substantial research has been devoted to the development and analysis of algorithms for building classifiers, and a necessary part of this research involves comparing induction algorithms. 
A common methodology for such evaluations is to perform statistical comparisons of the accuracies of learned class...\n\n1-Hop Neighbour:\nA Comparative Evaluation of Meta-Learning Strategies over Large and Distributed Data Sets There has been considerable interest recently in various approaches to scaling up machine learning systems to large and distributed data sets. We have been studying approaches based upon the parallel application of multiple learning programs at distributed sites, followed by a meta-learning stage to combine the multiple models in a principled fashion. In this paper, we empirically determine the \"best\" data partitioning scheme for a selected data set to compose \"appropriatelysized \" subsets and we evaluate and compare three di#erent strategies, Voting, Stacking and Stacking with Correspondence Analysis (SCANN) for combining classification models trained over these subsets. We seek to find ways to e#ciently scale up to large data sets while maintaining or improving predictive performance measured by the error rate, a cost model, and the TP-FP spread. Keywords: classification, multiple models, meta-learning, stacking, voting, correspondence analysis, data partitioning Email address of co...\n\n2-Hop Neighbour:\nMulti-Layer Incremental Induction . This paper describes a multi-layer incremental induction algorithm, MLII, which is linked to an existing nonincremental induction algorithm to learn incrementally from noisy data. MLII makes use of three operations: data partitioning, generalization and reduction. Generalization can either learn a set of rules from a (sub)set of examples, or refine a previous set of rules. The latter is achieved through a redescription operation called reduction: from a set of examples and a set of rules, we derive a new set of examples describing the behaviour of the rule set. New rules are extracted from these behavioral examples, and these rules can be seen as meta-rules, as they control previous rules in order to improve their predictive accuracy. Experimental results show that MLII achieves significant improvement on the existing nonincremental algorithm HCV used for experiments in this paper, in terms of rule accuracy. 1 Introduction Existing machine learning algorithms can be generally distin...\n\n2-Hop Neighbour:\nA Machine Learning Researcher's Foray into Recidivism Prediction We discuss an application of machine learning to recidivism prediction. Our initial results motivate the need for a methodology for technique selection for applications that involve unequal but unknown error costs, a skewed data set, or both. Evaluation methodologies traditionally used in machine learning are inadequate for analyzing performance in these situations, although they arise frequently when addressing real-world problems. After discussing the problem of recidivism prediction and the particulars of our data set, we present experimental results that motivate the need to evaluate learning algorithm over a range of error costs. We then describe Receiver Operating Characteristic (ROC) analysis, which has been used extensively in signal detection theory for decades but has only recently begun to filter into machine learning research. With this new perspective, we revisit the recidivism prediction task and present results that contradict those obtained using a traditional ...\n\n2-Hop Neighbour:\nPruning Decision Trees with Misclassification Costs . 
We describe an experimental study of pruning methods for decision tree classifiers when the goal is minimizing loss rather than error. In addition to two common methods for error minimization, CART's cost-complexity pruning and C4.5's error-based pruning, we study the extension of cost-complexity pruning to loss and one pruning variant based on the Laplace correction. We perform an empirical comparison of these methods and evaluate them with respect to loss. We found that applying the Laplace correction to estimate the probability distributions at the leaves was beneficial to all pruning methods. Unlike in error minimization, and somewhat surprisingly, performing no pruning led to results that were on par with other methods in terms of the evaluation criteria. The main advantage of pruning was in the reduction of the decision tree size, sometimes by a factor of ten. While no method dominated others on all datasets, even for the same domain different pruning mechanisms are better for ...\n\n2-Hop Neighbour:\nPrivacy Preserving Association Rule Mining in Vertically Partitioned Data Privacy considerations often constrain data mining projects. This paper addresses the problem of association rule mining where transactions are distributed across sources. Each site holds some attributes of each transaction, and the sites wish to collaborate to identify globally valid association rules. However, the sites must not reveal individual transaction data. We present a two-party algorithm for efficiently discovering frequent itemsets with minimum support levels, without either site revealing individual transaction values.\n\n2-Hop Neighbour:\nPrivacy Preserving Distributed Data Mining em, there is a simple distributed solution that provides a degree of privacy to the individual sites. An example association rule could be: Received F lu shot and age > 50 implies hospital admission, where at least 5% of insured meet all the criteria (support), and at least 30% of those meeting the flu shot and age criteria actually require hospitalization (confidence). There are algorithms to e#ciently find all association rules with a minimum level of support. We can easily extend this to the distributed case using the following lemma: If a rule has support > k% valid The Data Approach Figure 1: Data Warehouse approach to Distributed Data Mining globally, it must have support > k% on at least one of the individual sites. A distributed algorithm for this would work as follows: Request that each site send all rules with support at least k. For e", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Node Similarity in Networked Information Spaces Netvorked information spaces contain information entities, corresponding to nodes, vhich are (:orlrle(:l.ed [)y associm.i(ms, (:or'r'esporldirlg 1.o links irl (.he nel.wor'k. Exarrq)les of nel.wor'ked information spaces are: the World Wide Web, vhere information entities are veb pages, and associations are hyperlinks; the scientific literature, vhere information entities are articles and associations are references to other articles. SimilariW betveen information entities in a net- vorked information space can be defined not only based on the content of the information entities, but also based on the connectivity established by the associations present. This paper explores the definition of similariW based on connectivity only, and proposes several algorithms [)r' I, his [mr'pose. Our' rrlei, r'i(:s I,ake mJvard,age o[' I, he local rleigh[)or'hoo(ts o[' I, he rmcJes irl I, he rlel,- is no required, as long as a query engine is available for fo]]oving ]inks and extracting he necessary local neighbourhoods for similarity estimation. Tvo variations of similarity estimation beveen vo nodes are described, one based on he separate local neighbourhoods of he nodes, and another based on he join local neighbourhood expanded from boh nodes a he same ime. The algorithms are imp]emened and evaluated on he citation graph of computer science. The immediate application of his vork is in finding papers similar o a given paper [he Web.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nTowards a Highly-Scalable Metasearch Engine The World Wide Web has been expanding in a very fast rate. 
The coverage of the Web by each of the major search engines has been steadily decreasing despite their effort to index more web pages. Worse yet, as these search engines get larger, higher percentages of their indexed information are becoming obsolete. More and more people are having doubt about the scalability of centralized search engine technology. A more scalable alternative to search the Web is the metasearch engine approach. A metasearch engine can be considered as an interface on top of multiple local search engines to provide uniform access to many local search engines. Database selection is one of the main challenges in building a large-scale metasearch engine. The problem is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. In order to enable accurate selection, metadata that reflect the content of each search engine need to be co...\n\n2-Hop Neighbour:\nComputing Geographical Scopes of Web Resources Many information resources on the web are relevant primarily to limited geographical communities. For instance, web sites containing information on restaurants, theaters, and apartment rentals are relevant primarily to web users in geographical proximity to these locations. In contrast, other information resources are relevant to a broader geographical community. For instance, an on-line newspaper may be relevant to users across the United States. Unfortunately, most current web search engines largely ignore the geographical scope of web resources. In this paper, we introduce techniques for automatically computing the geographical scope of web resources, based on the textual content of the resources, as well as on the geographical distribution of hyperlinks to them. We report an extensive experimental evaluation of our strategies using real web data. Finally, we describe a geographically-aware search engine that we have built using our techniques for determining the geographical scope of web resources. 1\n\n2-Hop Neighbour:\nWho do you want to be today? Web Personae for personalised information access Personalised context sensitivity is the Holy Grail of web information retrieval. As a first step towards this goal, we present the Web Personae personalised search and browsing system. We use well-known information retrieval techniques to develop and track user models. Web Personae differ from previous approaches in that we model users with multiple profiles, each corresponding to a distinct topic or domain. Such functionality is essential in heterogeneous environments such as the Web. We introduce Web Personae, describe an algorithm for learning such models from browsing data, and discuss applications and evaluation methods.\n\n2-Hop Neighbour:\nC4-1: Building a community hierarchy for the Web based on Bipartite Graphs In this paper we propose an approach to extract and relate the communities by considering a community signature as a group of content creators that manifests itself as a set of interlinked pages. We abstract a community signature as a group of pages that form a dense bipartite graph (DBG), and proposed an algorithm to extract the DBGs from the given data set. Also, using the proposed approach, the extracted communities can be grouped to form a high-level communities. We apply the proposed algorithm on 10 GB TREC (Text REtrieval Conference) data set and extract a three-level community hierarchy. 
The extracted community hierarchy facilitates an easy analysis of low-level communities and provides a way to understand the sociology of the Web.\n\n2-Hop Neighbour:\nKeyword Searching and Browsing in Databases using BANKS With the growth of the Web, there has been a rapid increase in the number of users who need to access online databases without having a detailed knowledge of the schema or of query languages; even relatively simple query languages designed for non-experts are too complicated for them. We describe BANKS, a system which enables keyword-based search on relational databases, together with data and schema browsing. BANKS enables users to extract information in a simple manner without any knowledge of the schema or any need for writing complex queries. A user can get information by typing a few keywords, following hyperlinks, and interacting with controls on the displayed results. BANKS models tuples as nodes in a graph, connected by links induced by foreign key and other relationships. Answers to a query are modeled as rooted trees connecting tuples that match individual keywords in the query. Answers are ranked using a notion of proximity coupled with a notion of prestige of nodes based on inlinks, similar to techniques developed for Web search. We present an efficient heuristic algorithm for finding and ranking query results. 1.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'SimRank: A Measure of Structural-Context Similarity The problem of measuring \"similarity\" of objects arises in many applications, and many domain-specific measures have been developed, e.g., matching text across documents or computing overlap among item-sets. We propose a complementary approach, applicable in any domain with object-to-object relationships, that measures similarity of the structural context in which objects occur, based on their relationships with other objects. Effectively, we compute a measure that says \"two objects are similar if they are related to similar objects.\" This general similarity measure, called SimRank, is based on a simple and intuitive graph-theoretic model. For a given domain, SimRank can be combined with other domain-specific similarity measures. We suggest techniques for efficient computation of SimRank scores, and provide experimental results on two application domains showing the computational feasibility and effectiveness of our approach.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nScaling Personalized Web Search Recent web search techniques augment traditional text matching with a global notion of “importance ” based on the linkage structure of the web, such as in Google’s PageRank algorithm. For more refined searches, this global notion of importance can be specialized to create personalized views of importance—for example, importance scores can be biased according to a user-specified set of initially-interesting pages. Computing and storing all possible personalized views in advance is impractical, as is computing personalized views at query time, since the computation of each view requires an iterative computation over the web graph. We present new graph-theoretical results, and a new technique based on these results, that encode personalized views as partial vectors. Partial vectors are shared across multiple personalized views, and their computation and storage costs scale well with the number of views. Our approach enables incremental computation, so that the construction of personalized views from partial vectors is practical at query time. We present efficient dynamic programming algorithms for computing partial vectors, an algorithm for constructing personalized views from partial vectors, and experimental results demonstrating the effectiveness and scalability of our techniques. 1\n\n1-Hop Neighbour:\nAuthoritative Sources in a Hyperlinked Environment The link structure of a hypermedia environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. Versions of this principle have been studied in the hypertext research community and (in a context predating hypermedia) through journal citation analysis in the field of bibliometrics. But for the problem of searching in hyperlinked environments such as the World Wide Web, it is clear from the prevalent techniques that the information inherent in the links has yet to be fully exploited. In this work we develop a new method for automatically extracting certain types of information about a hypermedia environment from its link structure, and we report on experiments that demonstrate its effectiveness for a variety of search problems on the www. 
The central problem we consider is that of determining the relative \"authority\" of pages in such environments. This issue is central to a number of basic hypertext search t...\n\n1-Hop Neighbour:\nThe PageRank Citation Ranking: Bringing Order to the Web The importance of a Web page is an inherently subjective matter, which depends on the readers interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation.\n\n2-Hop Neighbour:\nA Comparison of Techniques to Find Mirrored Hosts on the WWW We compare several algorithms for identifying mirrored hosts on the World Wide Web. The algorithms operate on the basis of URL strings and linkage data: the type of information easily available from web proxies and crawlers. Identification of mirrored hosts can improve web-based information retrieval in several ways: First, by identifying mirrored hosts, search engines can avoid storing and returning duplicate documents. Second, several new information retrieval techniques for the Web make inferences based on the explicit links among hypertext documents -- mirroring perturbs their graph model and degrades performance. Third, mirroring information can be used to redirect users to alternate mirror sites to compensate for various failures, and can thus improve the performance of web browsers and proxies. # This work was presented at the Workshop on Organizing Web Space at the Fourth ACM Conference on Digital Libraries 1999. We evaluated 4 classes of \"top-down\" algorithms for detecting ...\n\n2-Hop Neighbour:\nConstructing, Organizing, and Visualizing Collections of Topically Related Web Resources For many purposes, the Web page is too small a unit of interaction and analysis. Web sites are structured multimedia documents consisting of many pages, and users often are interested in obtaining and evaluating entire collections of topically related sites. Once such a collection is obtained, users face the challenge of exploring, comprehending, and organizing the items. We report four innovations that address these user needs. • We replaced the web page with the web site as the basic unit of interaction and analysis. • We defined a new information structure, the clan graph, that groups together sets of related sites. • We augment the representation of a site with a site profile, information about site structure and content that helps inform user evaluation of a site. • We invented a new graph visualization, the auditorium visualization, that reveals important structural and content properties of sites within a clan graph. Detailed analysis and user studies document the utility of this approach. The clan graph construction algorithm tends to filter out irrelevant sites and discover additional relevant items. 
The auditorium visualization, augmented with drill down capabilities to explore site profile data, helps users to find high-quality sites as well as sites that serve a particular function.\n\n2-Hop Neighbour:\nFocused Web Crawling: A Generic Framework for Specifying the User Interest and for Adaptive Crawling Strategies Compared to the standard web search engines, focused crawlers yield good recall as well as good precision by restricting themselves to a limited domain. In this paper, we do not introduce another focused crawler, but we introduce a generic framework for focused crawling consisting of two major components: (1) Specification of the user interest and measuring the resulting relevance of a given web page. The proposed method of specifying the user interest by a formula combining atomic topics significantly improves the expressive power of the user. (2) Crawling strategy. Ordering the links at the crawl frontier is a challenging task since pages of a low relevance may be on a path to highly relevant pages. Thus, tunneling may be necessary. The explicit specification of the user interest allows us to define topic-specific strategies for tunneling. Our system Ariadne is a prototype implementation of the proposed framework. An experimental evaluation of different crawling strategies demonstrates the performance gain obtained by focusing a crawl and by dynamically adapting the focus. 1\n\n2-Hop Neighbour:\nTheseus: Categorization by Context Introduction The traditional approach to document categorization is categorization by content, since information for categorizing a document is extracted from the document itself. In a hypertext environment like the Web, the structure of documents and the link topology can be exploited to perform what we call categorization by context [Attardi 98]: the context surrounding a link in an HTML document is used for categorizing the document referred by the link. Categorization by context is capable of dealing also with multimedia material, since it does not rely on the ability to analyze the content of documents. Categorization by context leverages on the categorization activity implicitly performed when someone places or refers to a document on the Web. By focusing the analysis to the documents used by a group of people, one can build a catalogue tuned to the need of that group. Categorization by context is based on the following assumptions: 1\n\n2-Hop Neighbour:\nThe Intelligent Surfer: Probabilistic Combination of Link and Content Information in PageRank The PageRank algorithm, used in the Google search engine, greatly improves the results of Web search by taking into account the link structure of the Web. PageRank assigns to a page a score proportional to the number of times a random surfer would visit that page, if it surfed indefinitely from page to page, following all outlinks from a page with equal probability. We propose to improve PageRank by using a more intelligent surfer, one that is guided by a probabilistic model of the relevance of a page to a query. Efficient execution of our algorithm at query time is made possible by precomputing at crawl time (and thus once for all queries) the necessary terms. Experiments on two large subsets of the Web indicate that our algorithm significantly outperforms PageRank in the (human -rated) quality of the pages returned, while remaining efficient enough to be used in today's large search engines.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Topical Locality in the Web Most web pages are linked to others with related content. This idea, combined with another that says that text in, and possibly around, HTML anchors describe the pages to which they point, is the foundation for a usable WorldWide Web. In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content to be high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and that anchor text may be a useful discriminator among unseen child pages. These results show the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nCollection Synthesis The invention of the hyperlink and the HTTP transmission protocol caused an amazing new structure to appear on the Internet -- the World Wide Web. With the Web, there came spiders, robots, and Web crawlers, which go from one link to the next checking Web health, ferreting out information and resources, and imposing organization on the huge collection of information (and dross) residing on the net. This paper reports on the use of one such crawler to synthesize document collections on various topics in science, mathematics, engineering and technology. Such collections could be part of a digital library.\n\n1-Hop Neighbour:\nImproved Algorithms for Topic Distillation in a Hyperlinked Environment Abstract This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typ-ical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity anal-ysis based algorithm with content analysis. We identify three problems with the existing approach and devise al-gorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45 % over pure connectivity anal-ysis. 1\n\n1-Hop Neighbour:\nAdaptive Retrieval Agents: Internalizing Local Context and Scaling up to the Web . This paper focuses on two machine learning abstractions springing from ecological models: (i) evolutionary adaptation by local selection, and (ii) selective query expansion by internalization of environmental signals. We first outline a number of experiments pointing to the feasibility and performance of these methods on a general class of graph environments. We then describe how these methods have been applied to the intelligent retrieval of information distributed across networked environments. In particular, the paper discusses a novel distributed evolutionary algorithm and representation used to construct populations of adaptive Web agents. These InfoSpiders search on-line for information relevant to the user, by traversing hyperlinks in an autonomous and intelligent fashion. 
They can adapt to the spatial and temporal regularities of their local context. Our results suggest that InfoSpiders could complement current search engine technology by starting up where search engines stop...\n\n2-Hop Neighbour:\nAn Approach to Relate the Web Communities Through Bipartite Graphs The Web harbors a large number of community structures. Early detection of community structures has many purposes such as reliable searching and selective advertising. In this paper we investigate the problem of extracting and relating the web community structures from a large collection of Web-pages by performing hyper-link analysis. The proposed algorithm extracts the potential community signatures by extracting the corresponding dense bipartite graph (DBG) structures from the given data set of web pages. Further, the proposed algorithm can also be used to relate the extracted community signatures. We report the experimental results conducted on 10 GB TREC (Text REtrieval Conference) data collection that contains 1.7 million pages and 21.5 million links. The results demonstrate that the proposed approach extracts meaningful community signatures and relates them.\n\n2-Hop Neighbour:\nGreenstone: A Comprehensive Open-Source Digital Library Software System This paper describes the Greenstone digital library software, a comprehensive, open-source system for the construction and presentation of information collections. Collections built with Greenstone offer effective full-text searching and metadata-based browsing facilities that are attractive and easy to use. Moreover, they are easily maintainable and can be augmented and rebuilt entirely automatically. The system is extensible: software “plugins ” accommodate different document and metadata types.\n\n2-Hop Neighbour:\nInformation Retrieval on the World Wide Web and Active Logic: A Survey and Problem Definition As more information becomes available on the World Wide Web (there are currently over 4 billion pages covering most areas of human endeavor), it becomes more difficult to provide effective search tools for information access. Today, people access web information through two main kinds of search interfaces: Browsers (clicking and following hyperlinks) and Query Engines (queries in the form of a set of keywords showing the topic of interest). The first process is tentative and time consuming and the second may not satisfy the user because of many inaccurate and irrelevant results. Better support is needed for expressing one's information need and returning high quality search results by web search tools. There appears to be a need for systems that do reasoning under uncertainty and are flexible enough to recover from the contradictions, inconsistencies, and irregularities that such reasoning involves.\n\n2-Hop Neighbour:\nWeb Document Clustering: A Feasibility Demonstration Abstract Users of Web search engines are often forced to sift through the long ordered list of document “snippets” returned by the engines. The IR community has explored document clustering as an alternative method of organizing retrieval results, but clustering has yet to be deployed on the major search engines. The paper articulates the unique requirements of Web document clustering and reports on the first evaluation of clustering methods in this domain. A key requirement is that the methods create their clusters based on the short snippets returned by Web search engines. 
Surprisingly, we find that clusters based on snippets are almost as good as clusters created using the full text of Web documents. To satisfy the stringent requirements of the Web domain, we introduce an incremental, linear time (in the document collection size) algorithm called Suffix Tree Clustering (STC). which creates clusters based on phrases shared between documents. We show that STC is faster than standard clustering methods in this domain, and argue that Web document clustering via STC is both feasible and potentially beneficial. 1\n\n2-Hop Neighbour:\nCentroid-Based Document Classification: Analysis & Experimental Results . In this paper we present a simple linear-time centroid-based document classification algorithm, that despite its simplicity and robust performance, has not been extensively studied and analyzed. Our experiments show that this centroid-based classifier consistently and substantially outperforms other algorithms such as Naive Bayesian, k-nearest-neighbors, and C4.5, on a wide range of datasets. Our analysis shows that the similarity measure used by the centroidbased scheme allows it to classify a new document based on how closely its behavior matches the behavior of the documents belonging to different classes. This matching allows it to dynamically adjust for classes with different densities and accounts for dependencies between the terms in the different classes. 1 Introduction We have seen a tremendous growth in the volume of online text documents available on the Internet, digital libraries, news sources, and company-wide intranets. It has been forecasted that these docu...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'A Framework For Designing, Modeling and Analyzing Agent Based Software Systems The agent paradigm is gaining popularity because it brings intelligence, reasoning and autonomy to software systems. Agents are being used in an increasingly wide variety of applications from simple email filter programs to complex mission control and safety systems. However there appears to be very little work in defining practical software architecture, modeling and analysis tools that can be used by software engineers. This should be contrasted with object-oriented paradigm that is supported by models such as UML and CASE tools that aid during the analysis, design and implementation phases of object-oriented software systems. In our research we are developing a framework and extensions to UML to address this need. Our approach is rooted in the BDI formalism, but stresses the practical software design methods instead of reasoning about agents. In this paper we describe our preliminary ideas Index Terms: Agent-Oriented programming, ObjectOriented programming, BDI, UML 1.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nBDI Agents: from Theory to Practice The study of computational agents capable of rational behaviour has received a great deal of attention in recent years. Theoretical formalizations of such agents and their implementations have proceeded in parallel with little or no connection between them. This paper explores a particular type of rational agent, a BeliefDesire -Intention (BDI) agent. The primary aim of this paper is to integrate (a) the theoretical foundations of BDI agents from both a quantitative decision-theoretic perspective and a symbolic reasoning perspective; (b) the implementations of BDI agents from an ideal theoretical perspective and a more practical perspective; and (c) the building of large-scale applications based on BDI agents. In particular, an air-traffic management application will be described from both a theoretical and an implementation perspective. Introduction The design of systems that are required to perform high-level management and control tasks in complex dynamic environments is becoming ...\n\n1-Hop Neighbour:\nAn Overview of the Multiagent Systems Engineering Methodology . To solve complex problems, agents work cooperatively with other agents in heterogeneous environments. We are interested in coordinating the local behavior of individual agents to provide an appropriate system-level behavior. The use of intelligent agents provides an even greater amount of flexibility to the ability and configuration of the system itself. With these new intricacies, software development is becoming increasingly difficult. Therefore, it is critical that our processes for building the inherently complex distributed software that must run in this environment be adequate for the task. This paper introduces a methodology for designing these systems of interacting agents. 1.\n\n1-Hop Neighbour:\nLIME: Linda Meets Mobility Lime is a system designed to assist in the rapid development of dependable mobile applications over both wired and ad hoc networks. Mobile agents reside on mobile hosts and all communication takes place via transiently shared tuple spaces distributed across the mobile hosts. 
The decoupled style of computing characterizing the Linda model is extended to the mobile environment. At the application level, both agents and hosts perceive movement as a sudden change of context. The set of tuples accessible by a particular agent residing on a given host is altered transparently in response to changes in the connectivity pattern among the mobile hosts. In this paper we present the key design concepts behind the Lime system. 1 INTRODUCTION Today's users demand ubiquitous network access independent of their physical location. This style of computation, often referred to as mobile computing, is enabled by rapid advances in the wireless communication technology. The networking scenarios enabled ...\n\n2-Hop Neighbour:\nDynamic Reconfiguration in Collaborative Problem Solving In this article we will describe our research efforts in coping with a trade-off that can be often found in the control and optimization of todays business processes. Though centralized control may achieve nearto -optimum results in optimizing the system behavior, there are usually social, technical and security restrictions on applying centralized control. Distributed control on the other hand may cope with these restrictions but also entails sub-optimality and communicational overhead. Our concept of composable agents tries to allow a dynamic and fluent transition between globalization and localization in business process control by adapting to the current real-world system structure. We are currently evaluating this concept in the framework of patient flow control at Charit'e Berlin. Introduction Research in Distributed Artificial Intelligence (DAI, (Bond & Gasser 1988)) has been traditionally divided into Distributed Problem Solving (DPS) and Multi Agent Systems (MAS). However, r...\n\n2-Hop Neighbour:\nEngineering Mobile-agent Applications via Context-dependent Coordination The design and development of Internet applications can take advantage of a paradigm based on autonomous and mobile agents. However, mobility introduces peculiar coordination problems in agent-based Internet applications. First, it suggests the exploitation of an infrastructure based on a multiplicity of local interaction spaces. Second, it may require coordination activities to be adapted both to the characteristics of the execution environment where they occur and to the needs of the application to which the coordinating agents belong. In this context, this paper introduces the concept of context-dependent coordination based on programmable interaction spaces. On the one hand, interaction spaces associated to different execution environments may be independently programmed so as to lead to differentiated, environment-dependent, behaviors. On the other hand, agents can program the interaction spaces of the visited execution environments to obtain an application-dependent behavior of the interaction spaces themselves. Several examples show how an infrastructure enforcing context-dependent coordination can be effectively exploited to simplify and make more modular the design of Internet applications based on mobile agents. In addition, the MARS coordination infrastructure is presented as an example of a system in which the concept of context-dependent coordination has found a clean and efficient implementation.\n\n2-Hop Neighbour:\nXML Dataspaces for Mobile Agent Coordination This paper presents XMARS, a programmable coordination architecture for Internet applications based on mobile agents. 
In XMARS, agents coordinate -- both with each other and with their current execution environment -- through programmable XML dataspaces, accessed by agents in a Linda-like fashion. This suits very well the characteristics of the Internet environment: on the one hand, it offers all the advantages of XML in terms of interoperability and standard representation of information; on the other hand, it enforces open and uncoupled interactions, as required by the dynamicity of the environment and by the mobility of the application components. In addition, coordination in XMARS is made more flexible and secure by the capability of programming the behaviour of the coordination media in reaction to the agents' accesses. An application example related to the management of on-line academic courses shows the suitability and the effectiveness of the XMARS architecture. Ke...\n\n2-Hop Neighbour:\nAgents That Reason and Negotiate By Arguing The need for negotiation in multi-agent systems stems from the requirement for agents to solve the problems posed by their interdependence upon one another. Negotiation provides a solution to these problems by giving the agents the means to resolve their conflicting objectives, correct inconsistencies in their knowledge of other agents' world views, and coordinate a joint approach to domain tasks which benefits all the agents concerned. We propose a framework, based upon a system of argumentation, which permits agents to negotiate in order to establish acceptable ways of solving problems. The framework provides a formal model of argumentation-based reasoning and negotiation, details a design philosophy which ensures a clear link between the formal model and its practical instantiation, and describes a case study of this relationship for a particular class of architectures (namely those for belief-desire-intention agents). 1 Introduction An increasing number of software app...\n\n2-Hop Neighbour:\nSupporting Internet-Scale Multi-Agent Systems ts a model of AgentScape from the agent perspective, that is, the location comprising the middleware and the resources are represented by a location manager agent and resource objects. Calls from an agent to the middleware are modeled by requests to the location manager agent to, for example, create an agent or move an agent. Information about resources residing at the location can be retrieved by binding to the resource objects, which are local distributed objects. These objects can be accessed only within the location they reside, not from outside the location. For development of agent applications, an application programming interface (API) and a runtime system (RTS) are provided, see Fig. 1. The default API and RTS can be extended to provide a higher-level application programming interface with, for example, a model that offers more structure and semantics to the agent application developer. Within AgentScape, management of large-scale agent systems is an important issue, includi", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Privacy Interfaces for Information Management this article, we propose a set of guidelines for designing privacy interfaces that facilitate the creation, inspection, modification, and monitoring of privacy policies. These guidelines are based on our experience with COLLABCLIO---a system that supports automated sharing of Web browsing histories. COLLAB- CLIO stores a person's browsing history and makes it searchable by content, keyword, and other attributes. A typical COLLABCLIO query might be: \"Show me all the pages Tessa has visited in the .edu domain that contain the phrase `direct manipulation.' \" Since a COLLABCLIO user can make queries regarding the browsing history of other users, there are obvious privacy concerns.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nCollections - Adapting The Display of Personal Objects for Different Audiences Although current networked systems and online applications provide new opportunities for displaying and sharing personal information, they do not account for the underlying social contexts that frame such interactions. Existing categorization and management mechanisms for digital content have been designed to focus on the data they handle without much regard for the social circumstances within which their content is shared. As we share large collections of personal information over mediated environments, our tools need to account for the social scenarios that surround our interactions. This thesis presents Collections: an application for the management of digital pictures according to their intended audiences. The goal is to create a graphical interface that supports the creation of fairly complex privacy decisions concerning the display of digital photographs. Simple graphics are used to enable the collector to create a wide range of audience arrangements for her digital pho...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Adaptive information extraction: Core technologies for information agents Introduction For the purposes of this chapter, an information agent can be described as a distributed system that receives a goal through its user interface, gathers information relevant to this goal from a variety of sources, processes this content as appropriate, and delivers the results to the users. We focus on the second stage in this generic architecture. We survey a variety of information extraction techniques that enable information agents to automatically gather information from heterogeneous sources. For example, consider an agent that mediates package-delivery requests. To satisfy such requests, the agent might need to retrieve address information from geographic services, ask an advertising service for freight forwarders that serve the destination, request quotes from the relevant freight forwarders, retrieve duties and legal constraints from government sites, get weather information to estimate transportation delays, etc. Information extraction (IE) is a form of sh' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nMIA - An Ubiquitous Multi-Agent Web Information System This paper gives a brief overview about AI methods and techniques we have developed for building ubiquitous web information systems. These methods from areas of machine learning, logic programming, knowledge representation and multi-agent systems are discussed in the context of our prototypical information system MIA. MIA is a web information system for mobile users, who are equipped with a PDA (Palm Pilot), a cellular phone and a GPS device or cellular WAP phone. It captures the main issues of ubiquitous computing: location awareness, anytime information access and PDA technology. 1 Introduction Nowadays, the biggest but also the most chaotic and unstructured source of information is the World-WideWeb. Making this immense amount of information available for ubiquitous computing in daily life is a great challenge. Besides hardware issues for wireless ubiquitous computing, that still are to be solved (wireless communication, blue-tooth technologies, wearable computing units, integrat...\n\n1-Hop Neighbour:\nLearning Information Extraction Rules for Semi-structured and Free Text . A wealth of on-line text information can be made available to automatic processing by information extraction (IE) systems. Each IE application needs a separate set of rules tuned to the domain and writing style. WHISK helps to overcome this knowledge-engineering bottleneck by learning text extraction rules automatically. WHISK is designed to handle text styles ranging from highly structured to free text, including text that is neither rigidly formatted nor composed of grammatical sentences. Such semistructured text has largely been beyond the scope of previous systems. When used in conjunction with a syntactic analyzer and semantic tagging, WHISK can also handle extraction from free text such as news stories. Keywords: natural language processing, information extraction, rule learning 1. Information extraction As more and more text becomes available on-line, there is a growing need for systems that extract information automatically from text data. 
An information extraction (IE) sys...\n\n1-Hop Neighbour:\nRoadRunner: Towards Automatic Data Extraction from Large Web Sites The paper investigates techniques for extracting data from HTML sites through the use of automatically generated wrappers. To automate the wrapper generation and the data extraction process, the paper develops a novel technique to compare HTML pages and generate a wrapper based on their similarities and differences. Experimental results on real-life data-intensive Web sites confirm the feasibility of the approach. 1\n\n2-Hop Neighbour:\nUsing Information Extraction to Aid the Discovery of Prediction Rules from Text Text mining and Information Extraction (IE) are both topics of significant recent interest. Text mining concerns applying data mining, a.k.a. knowledge discovery from databases (KDD) techniques to unstructured text. Information extraction (IE) is a form of shallow text understanding that locates specific pieces of data in natural language documents, transforming unstructured text into a structured database. This paper describes a system called DiscoTEX, that combines IE and KDD methods to perform a text mining task, discovering prediction rules from natural-language corpora. An initial version of DiscoTEX is constructed by integrating an IE module based on Rapier and a rule-learning module, Ripper. We present encouraging results on applying these techniques to a corpus of computer job postings from an Internet newsgroup.\n\n2-Hop Neighbour:\nIntelligent Anticipated Exploration of Web Sites In this paper we describe a web search agent, called Global Search Agent (hereafter GSA for short). GSA integrates and enhances several search techniques in order to achieve significant improvements in the user-perceived quality of delivered information as compared to usual web search engines. GSA features intelligent merging of relevant documents from different search engines, anticipated selective exploration and evaluation of links from the current resuk set, automated derivation of refined queries based on user relevance feedback. System architecture as well as experimental accounts are also illustrated.\n\n2-Hop Neighbour:\nThe RoadRunner Project: Towards Automatic Extraction of Web Data Introduction ROADRUNNER is a research project that aims at developing solutions for automatically extracting data from large HTML data sources. The target of our research are data-intensive Web sites, i.e., HTML-based sites that publish large amounts of data in a fairly complex structure. In our view, we aim at ideally seeing the data extraction process of a data-intensive Web site as a black-box taking as input the URL of an entry point to the site (e.g. the home page), and returning as output data extracted from HTML pages in the site in a structured database-like format. This paper describes the top-level software architecture of the ROADRUNNER System, which has been specifically designed to automatize the data extraction process. Several components of the system have already been implemented, and preliminary experiments show the feasibility of our ideas. Data-intensive Web sites usually share a number o\n\n2-Hop Neighbour:\nMultistrategy Learning for Information Extraction Information extraction (IE) is the problem of filling out pre-defined structured summaries from text documents. We are interested in performing IE in non-traditional domains, where much of the text is often ungrammatical, such as electronic bulletin board posts and Web pages. 
We suggest that the best approach is one that takes into account many different kinds of information, and argue for the suitability of a multistrategy approach. We describe learners for IE drawn from three separate machine learning paradigms: rote memorization, term-space text classification, and relational rule induction. By building regression models mapping from learner confidence to probability of correctness and combining probabilities appropriately, it is possible to improve extraction accuracy over that achieved by any individual learner. We describe three different multistrategy approaches. Experiments on two IE domains, a collection of electronic seminar announcements from a university computer science de...\n\n2-Hop Neighbour:\nLearning to Construct Knowledge Bases from the World Wide Web The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more effective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach is to develop a trainable information extraction system that takes two inputs. The first is an ontology that defines the classes (e.g., company, person, employee, product) andrelations (e.g., employed by, produced by) of interest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This article describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system that has created a knowledge base describing university people, courses, and research projects.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Boosting and Rocchio Applied to Text Filtering We discuss two learning algorithms for text filtering: modified Rocchio and a boosting algorithm called AdaBoost. We show how both algorithms can be adapted to maximize any general utility matrix that associates cost (or gain) for each pair of machine prediction and correct label. We first show that AdaBoost significantly outperforms another highly effective text filtering algorithm. We then compare AdaBoost and Rocchio over three large text filtering tasks. Overall both algorithms are comparable and are quite effective. AdaBoost produces better classifiers than Rocchio when the training collection contains a very large number of relevant documents. However, on these tasks, Rocchio runs much faster than AdaBoost. 1 Introduction With the explosion in the amount of information available electronically, information filtering systems that automatically send articles of potential interest to a user are becoming increasingly important. If users indicate their interests to a filtering system...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nMachine Learning and Natural Language Processing In this report, some collaborative work between the fields of Machine Learning (ML) and Natural Language Processing (NLP) is presented. The document is structured in two parts. The first part includes a superficial but comprehensive survey covering the state--of--the--art of machine learning techniques applied to natural language learning tasks. In the second part, a particular problem, namely Word Sense Disambiguation (WSD), is studied in more detail. In doing so, four algorithms for supervised learning, which belong to different families, are compared in a benchmark corpus for the WSD task. Both qualitative and quantitative conclusions are drawn. This document stands for the complementary documentation for the conference \"Aprendizaje autom 'atico aplicado al procesamiento del lenguaje natural\", given by the author within the course: \"Curso de Industrias de la Lengua: La Ingenier'ia Lingu'istica en la Sociedad de la Informaci'on\", Fundaci'on Duques de Soria. Soria. July 2000. 1 Con...\n\n1-Hop Neighbour:\nIntelligent Information Triage In many applications, large volumes of time-sensitive textual information require triage: rapid, approximate prioritization for subsequent action. In this paper, we explore the use of prospective indications of the importance of a time-sensitive document, for the purpose of producing better document filtering or ranking. By prospective, we mean importance that could be assessed by actions that occur in the future. For example, a news story may be assessed (retrospectively) as being important, based on events that occurred after the story appeared, such as a stock price plummeting or the issuance of many follow-up stories. If a system could anticipate (prospectively) such occurrences, it could provide a timely indication of importance. Clearly, perfect prescience is impossible. However, sometimes there is sufficient correlation between the content of an information item and the events that occur subsequently. We describe a process for creating and evaluating approximate information-triage procedures that are based on prospective indications. 
Unlike many informationretrieval applications for which document labeling is a laborious, manual process, for many prospective criteria it is possible to build very large, labeled, training corpora automatically. Such corpora can be used to train text classification procedures that will predict the (prospective) importance of each document. This paper illustrates the process with two case studies, demonstrating the ability to predict whether a news story will be followed by many, very similar news stories, and also whether the stock price of one or more companies associated with a news story will move significantly following the appearance of that story. We conclude by discussing how the comprehensibility of the learned classifiers can be critical to success. 1.\n\n1-Hop Neighbour:\nBoosting for Document Routing RankBoost is a recently proposed algorithm for learning ranking functions. It is simple to implement and has strong justifications from computational learning theory. We describe the algorithm and present initial experimental results on applying it to the document routing problem. The first set of results applies RankBoost to a text representation produced using modern term weighting methods. Performance of RankBoost is somewhat inferior to that of a state-of-the-art routing algorithm which is, however, more complex and less theoretically justified than RankBoost. RankBoost achieves comparable performance to the state-of-the-art algorithm when combined with feature or example selection heuristics. Our second set of results examines the behavior of RankBoost when it has to learn not only a ranking function but also all aspects of term weighting from raw data. Performance is usually, though not always, less good here, but the term weighting functions implicit in the resulting ranking fun...\n\n2-Hop Neighbour:\nMultistrategy Learning for Information Extraction Information extraction (IE) is the problem of filling out pre-defined structured summaries from text documents. We are interested in performing IE in non-traditional domains, where much of the text is often ungrammatical, such as electronic bulletin board posts and Web pages. We suggest that the best approach is one that takes into account many different kinds of information, and argue for the suitability of a multistrategy approach. We describe learners for IE drawn from three separate machine learning paradigms: rote memorization, term-space text classification, and relational rule induction. By building regression models mapping from learner confidence to probability of correctness and combining probabilities appropriately, it is possible to improve extraction accuracy over that achieved by any individual learner. We describe three different multistrategy approaches. Experiments on two IE domains, a collection of electronic seminar announcements from a university computer science de...\n\n2-Hop Neighbour:\nUsing Decision Trees to Improve Case-Based Learning This paper shows that decision trees can be used to improve the performance of casebased learning (CBL) systems. We introduce a performance task for machine learning systems called semi-flexible prediction that lies between the classification task performed by decision tree algorithms and the flexible prediction task performed by conceptual clustering systems. 
In semi-flexible prediction, learning should improve prediction of a specific set of features known a priori rather than a single known feature (as in classification) or an arbitrary set of features (as in conceptual clustering). We describe one such task from natural language processing and present experiments that compare solutions to the problem using decision trees, CBL, and a hybrid approach that combines the two. In the hybrid approach, decision trees are used to specify the features to be included in k-nearest neighbor case retrieval. Results from the experiments show that the hybrid approach outperforms both the decision ...\n\n2-Hop Neighbour:\nInformation Triage using Prospective Criteria : In many applications, large volumes of time-sensitive textual information require triage: rapid, approximate prioritization for subsequent action. In this paper, we explore the use of prospective indications of the importance of a time-sensitive document, for the purpose of producing better document filtering or ranking. By prospective, we mean importance that could be assessed by actions that occur in the future. For example, a news story may be assessed (retrospectively) as being important, based on events that occurred after the story appeared, such as a stock-price plummeting or the issuance of many follow-up stories. If a system could anticipate (prospectively) such occurrences, it could provide a timely indication of importance. Clearly, perfect prescience is impossible. However, sometimes there is sufficient correlation between the content of an information item and the events that occur subsequently. We describe a process for creating and evaluating approximate information-triage procedures that are based on prospective indications. Unlike many information-retrieval applications for which document labeling is a laborious, manual process, for many prospective criteria it is possible to build very large, labeled, training corpora automatically. Such corpora can be used to train text classification procedures that will predict the (prospective) importance of each document. This paper illustrates the process with two case studies, demonstrating the ability to predict whether the stock price of one or more companies mentioned in a news story will move significantly following the appearance of that story. We conclude by discussing that the comprehensibility of the learned classifiers can be critical to success. 1\n\n2-Hop Neighbour:\nLearning to Classify Text from Labeled and Unlabeled Documents In many important text classification problems, acquiring class labels for training documents is costly, while gathering large quantities of unlabeled data is cheap. This paper shows that the accuracy of text classifiers trained with a small number of labeled documents can be improved by augmenting this small training set with a large pool of unlabeled documents. We present a theoretical argument showing that, under common assumptions, unlabeled data contain information about the target function. We then introduce an algorithm for learning from labeled and unlabeled text based on the combination of Expectation-Maximization with a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents; it then trains a new classifier using the labels for all the documents, and iterates to convergence. 
Experimental results, obtained using text from three different realworld tasks, show that the use of unlabeled data reduces classification error by up to 33%.\n\n2-Hop Neighbour:\nApproximate Statistical Tests for Comparing Supervised Classification Learning Algorithms This paper reviews five approximate statistical tests for determining whether one learning algorithm out-performs another on a particular learning task. These tests are compared experimentally to determine their probability of incorrectly detecting a difference when no difference exists (type I error). Two widely-used statistical tests are shown to have high probability of Type I error in certain situations and should never be used. These tests are (a) a test for the difference of two proportions and (b) a paired-differences t test based on taking several random train/test splits. A third test, a paired-differences t test based on 10-fold cross-validation, exhibits somewhat elevated probability of Type I error. A fourth test, McNemar's test, is shown to have low Type I error. The fifth test is a new test, 5x2cv, based on 5 iterations of 2-fold cross-validation. Experiments show that this test also has acceptable Type I error. The paper also measures the power (ability to detect algorit...", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Rank Aggregation Revisited The rank aggregation problem is to combine many different rank orderings on the same set of candidates, or alternatives, in order to obtain a \"better\" ordering. Rank aggregation has been studied extensively in the context of social choice theory, where several \"voting paradoxes\" have been discovered. The problem' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nAn Efficient Boosting Algorithm for Combining Preferences We study the problem of learning to accurately rank a set of objects by combining a given collection of ranking or preference functions. This problem of combining preferences arises in several applications, such as that of combining the results of different search engines, or the \"collaborativefiltering \" problem of ranking movies for a user based on the movie rankings provided by other users. In this work, we begin by presenting a formal framework for this general problem. We then describe and analyze an efficient algorithm called RankBoost for combining preferences based on the boosting approach to machine learning. We give theoretical results describing the algorithm's behavior both on the training data, and on new test data not seen during training. We also describe an efficient implementation of the algorithm for a particular restricted but common case. We next discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different web search strategies, each of which is a query expansion for a given domain. The second experiment is a collaborative-filtering task for making movie recommendations.\n\n1-Hop Neighbour:\nOptimal Aggregation Algorithms for Middleware Assume that each object in a database has m grades, or scores, one for each of m attributes. ForexamzUan object can ave a color grade, t at tells ow red it is, and a s ape grade, t at tells ow round it is. For eac attribute, t ere is a sorted list, w ic lists eac object and its grade under t at attribute, sorted by grade ( ig est grade first). Eac object is assigned an overall grade, t at is obtained bycom`-T ng t e attribute grades using a fixedm notone aggregation function,orcombining rule, suc as mh or average. To determ`h t e top k objects, t at is, k objects wit t e ig est overall grades, t e naive algoritm mor access every object in t e database, to find its grade under eac attribute. Fagin as given an algoritm (\"Fagin's Algorit mit or FA) t at is mh mh` efficient. For som mm'T` e aggregation functions, FA isoptim al wit ig probability in t e worst case. We analyze an elegant andrem'-`- lysim ple algoritm (\"t e t res old algoritm \", or TA) t at isoptim al in am` stronger sense t an FA. We s ow t at TA is essentiallyoptim al, not just for som mmT-k' aggregation functions, but for all of tem and not just in a ig-probability worst-case sense, but over every database. Unlike FA, w ic requires large buffers (w ose sizemz grow unboundedly as t e database size grows), TA requires only asmkzz constant-size buffer. TA allows early stopping, w ic yields, in a precise sense, anapproxim ate version of t e top k answers. 
We distinguis two types of access: sorted access (w ere t em`U' ewaresystem obtains t e grade of an object insom sorted list by proceeding t roug t e list sequentiallyfrom t e top), and random access (w ere t e mzUETh. resystem requests t e grade of object in a list, and obtains it in one step). We consider t e scenarios w ere ra...\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nLearning Search Engine Specific Query Transformations for Question Answering We introduce a method for learning query transformations that improves the ability to retrieve answers to questions from an information retrieval system. During the training stage the method involves automatically learning phrase features for classifying questions into different types, automatically generating candidate query transformations from a training set of question/answer pairs, and automatically evaluating the candidate transforms on target information retrieval systems such as real-world general purpose search engines. At run time, questions are transformed into a set of queries, and re-ranking is performed on the documents retrieved. We present a prototype search engine, Tritus, that applies the method to web search engines. Blind evaluation on a set of real queries from a web search engine log shows that the method significantly outperforms the underlying web search engines as well as a commercial search engine specializing in question answering. Keywords Web search, quer...\n\n2-Hop Neighbour:\nInCommonSense - Rethinking Web Search Results The World Wide Web is a rich annotation system which allows people to relate to documents and sites from different perspectives. People describe, comment, relate or mock other Web pages in the context of their document. This richness is currently not reflected in snippets presented by Web search engines, where a search result is represented by the text found in the Web document alone. 
This paper proposes a new method for representing documents in Web search engines' results. This method is based on recent trends in search engine technology and provides descriptions of the retrieved documents, assembled from people's commentary and annotations on the Web. This paper suggests a new way for automatically retrieving and reusing people's annotations on the Web, incorporating these annotations into a search engine for creating a hybrid directory-search-engine, allowing for both automatic retrieval and on-the-fly human authored summaries. I. INTRODUCTION It is common knowledge that many use...\n\n2-Hop Neighbour:\nUsing Web Structure for Classifying and Describing Web Pages The structure of the web is increasingly being used to improve organization, search, and analysis of information on the web. For example, Google uses the text in citing documents (documents that link to the target document) for search. We analyze the relative utility of document text, and the text in citing documents near the citation, for classification and description. Results show that the text in citing documents, when available, often has greater discriminative and descriptive power than the text in the target document itself. The combination of evidence from a document and citing documents can improve on either information source alone. Moreover, by ranking words and phrases in the citing documents according to expected entropy loss, we are able to accurately name clusters of web pages, even with very few positive examples. Our results confirm, quantify, and extend previous research using web structure in these areas, introducing new methods for classification and description of pages.\n\n2-Hop Neighbour:\nDetection of Heterogeneities in a Multiple Text Database Environment As the number of text retrieval systems (search engines) grows rapidly on the World Wide Web, there is an increasing need to build search brokers (metasearch engines) on top of them. Often, the task of building an effective and efficient metasearch engine is hindered by the heterogeneities among the underlying local search engines. In this paper, we first analyze the impact of various heterogeneities on building a metasearch engine. We then present some techniques that can be used to detect the most prominent heterogeneities among multiple search engines. Applications of utilizing the detected heterogeneities in building better metasearch engines will be provided.\n\n2-Hop Neighbour:\nEvaluating Strategies for Similarity Search on the Web Finding pages on the Web that are similar to a query page (Related Pages) is an important component of modern search engines. A variety of strategies have been proposed for answering Related Pages queries, but comparative evaluation by user studies is expensive, especially when large strategy spaces must be searched (e.g., when tuning parameters). We present a technique for automatically evaluating strategies using Web hierarchies, such as Open Directory, in place of user feedback. We apply this evaluation methodology to a mix of document representation strategies, including the use of text, anchor-text, and links. We discuss the relative advantages and disadvantages of the various approaches examined. Finally, we describe how to efficiently construct a similarity index out of our chosen strategies, and provide sample results from our index.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Controlling the Robots of Web Search Engines Robots are deployed by a Web search engine for collecting information from different Web servers in order to maintain the currency of its data base of Web pages. In this paper, we investigate the number of robots to be used by a search engine so as to maximize the currency of the data base without putting an unnecessary load on the network. We use a queueing model to represent the system. The arrivals to the queueing system are Web pages brought by the robots; service corresponds to the indexing of these pages. The objective is to find the number of robots, and thus the arrival rate of the queueing system, such that the indexing queue is neither starved nor saturated. For this, we consider a finite-buffer queueing system and define the cost function to be minimized as a weighted sum of the loss rate and the fraction of time that the system is empty. Both static and dynamic policies are considered. In the static setting the number of robots is held fixed; in the dynamic setting robots may be re...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nDesign and Implementation of a High-Performance Distributed Web Crawler Broad web search engines as well as many more specialized search tools rely on web crawlers to acquire large collections of pages for indexing and analysis. Such a web crawler may interact with millions of hosts over a period of weeks or months, and thus issues of robustness, flexibility, and manageability are of major importance. In addition, I/O performance, network resources, and OS limits must be taken into account in order to achieve high performance at a reasonable cost. In this paper, we describe the design and implementation of a distributed web crawler that runs on a network of workstations. The crawler scales to (at least) several hundred pages per second, is resilient against system crashes and other events, and can be adapted to various crawling applications. We present the software architecture of the system, discuss the performance bottlenecks, and describe efficient techniques for achieving high performance. We also report preliminary experimental results based on a crawl of million pages on million hosts. Work supported by NSF CAREER Award NSF CCR-0093400, Intel Corporation, and the New York State Center for Advanced Technology in Telecommunications (CATT) at Polytechnic University, and by equipment grants from Intel Corporation and Sun Microsystems. 1 1\n\n2-Hop Neighbour:\nSynchronizing a database to Improve Freshness In this paper we study how to refresh a local copy of an autonomous data source to maintain the copy up-to-date. As the size of the data grows, it becomes more difficult to maintain the copy \"fresh,\" making it crucial to synchronize the copy effectively. We define two freshness metrics, change models of the underlying data, and synchronization policies. We analytically study how effective the various policies are. We also experimentally verify our analysis, based on data collected from 270 web sites for more than 4 months, and we show that our new policy improves the \"freshness\" very significantly compared to current policies in use.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'The RoboCup Physical Agent Challenge: Phase I Traditional AI research has not given due attention to the important role that physical bodies play for agents as their interactions produce complex emergent behaviors to achieve goals in the dynamic real world. The RoboCup Physical Agent Challenge provides a good testbed for studying how physical bodies play a signi cant role in realizing intelligent behaviors using the RoboCup framework [Kitano, et al., 95]. In order for the robots to play a soccer game reasonably well, a wide range of technologies needs to be integrated and a number of technical breakthroughs must be made. In this paper, we present three challenging tasks as the RoboCup Physical Agent Challenge Phase I: (1) moving the ball to the speci ed area (shooting, passing, and dribbling) with no, stationary, or moving obstacles, (2) catching the ball from an opponent or a teammate (receiving, goal-keeping, and intercepting), and (3) passing the ball between two players. The rst two are concerned with single agent skills while the third one is related to a simple cooperative behavior. Motivation for these challenges and evaluation methodology are given. 1.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nA Layered Approach to Learning Client Behaviors in the RoboCup Soccer Server In the past few years, Multiagent Systems (MAS) has emerged as an active subfield of Artificial Intelligence (AI). Because of the inherent complexity of MAS, there is much interest in using Machine Learning (ML) techniques to help build multiagent systems. Robotic soccer is a particularly good domain for studying MAS and Multiagent Learning. Our approach to using ML as a tool for building Soccer Server clients involves layering increasingly complex learned behaviors. In this article, we describe two levels of learned behaviors. First, the clients learn a low-level individual skill that allows them to control the ball effectively. Then, using this learned skill, they learn a higher-level skill that involves multiple players. For both skills, we describe the learning method in detail and report on our extensive empirical testing. We also verify empirically that the learned skills are applicable to game situations. 1 Introduction In the past few years, Multiagent Systems (MAS) has emerge...\n\n1-Hop Neighbour:\nThe CMUnited-97 Robotic Soccer Team: Perception and Multiagent Control Robotic soccer is a challenging research domain which involves multiple agents that need to collaborate in an adversarial environment to achieve specificobjectives. In this paper, we describe CMUnited, the team of small robotic agents that we developed to enter the RoboCup-97 competition. We designed and built the robotic agents, devised the appropriate vision algorithm, and developed and implemented algorithms for strategic collaboration between the robots in an uncertain and dynamic environment. The robots can organize themselves in formations, hold specificroles, and pursue their goals. In game situations, they have demonstrated their collaborative behaviors on multiple occasions. The robots can also switch roles to maximize the overall performance of the team. We present an overview of the vision processing algorithm which successfully tracks multiple moving objects and predicts trajectories. 
The paper then focusses on the agent behaviors ranging from low-level individual behaviors to coordinated, strategic team behaviors.\n\n1-Hop Neighbour:\nThe RoboCup Synthetic Agent Challenge 97 RoboCup Challenge offers a set of challenges for intelligent agent researchers using a friendly competition in a dynamic, real-time, multiagent domain. While RoboCup in general envisions longer range challenges over the next few decades, RoboCup Challenge presents three specific challenges for the next two years: (i) learning of individual agents and teams; (ii) multi-agent team planning and plan-execution in service of teamwork; and (iii) opponent modeling. RoboCup Challenge provides a novel opportunity for machine learning, planning, and multi-agent researchers --- it not only supplies a concrete domain to evaluate their techniques, but also challenges researchers to evolve these techniques to face key constraints fundamental to this domain: real-time, uncertainty, and teamwork. 1 Introduction RoboCup (The World Cup Robot Soccer) is an attempt to promote AI and robotics research by providing a common task, Soccer, for evaluation of various theories, algorithms, and agent architectur...\n\n2-Hop Neighbour:\nImplementing Teams of Agents Playing Simulated Robot Soccer . This article is intended to present an overview of the issues related to implementing teams of cooperating agents playing simulated robot soccer. First we introduce the concept of robot soccer and the simulated environment. Then we discuss why the (simulated) robot soccer is an interesting application area from the point of view of robotics and artificial intelligence. The main part of the paper contains a discussion of agent architectures, both from theoretical and practical point of view. Later we discuss how to combine individual agents into teams having common strategies and goals. We also discuss learning, both on individual and team levels. 1 Introduction Robot soccer is a growing area of interest for the robotics and artificial intelligence communities. There are many reasons for that. The main one is the complexity of the domain together with a set of well-defined rules governing the behaviour of the agents in this domain. The domain is suited for experiments both in ...\n\n2-Hop Neighbour:\nThe CMUnited-97 Simulator Team . The Soccer Server system provides a rich and challenging multiagent, real-time domain. Agents must accurately perceive and act despite a quickly changing, largely hidden, noisy world. They must also act at several levels, ranging from individual skills to full-team collaborative and adversarial behaviors. This article presents the CMUnited-97 approaches to the above challenges which helped the team to the semifinals of the 29-team RoboCup-97 tournament. 1 Introduction The Soccer Server system [5] used at RoboCup-97 [2] provides a rich and challenging multiagent, real-time domain. Sensing and acting is noisy, while interagent communication is unreliable and low-bandwidth. In order to be successful, each agent in a team must be able to sense and act in real time: sensations arrive at unpredictable intervals while actions are possible every 100ms. Furthermore, since the agents get local, noisy sensory information, they must have a method of converting their sensory inputs into a good w...\n\n2-Hop Neighbour:\nPlanning Agents in James Abstract — Testing is an obligatory step in developing multi-agent systems. 
For testing multi-agent systems in virtual, dynamic environments, simulation systems are required that support a modular, declarative construction of experimental frames, that facilitate the embeddence of a variety of agent architectures, and that allow an efficient parallel, distributed execution. We introduce the system James (A Java-Based Agent Modeling Environment for Simulation). In James agents and their dynamic environment are modeled as reflective, time triggered state automata. Its possibilities to compose experimental frames based on predefined components, to express temporal interdependencies, to capture the phenomenon of pro-activeness and reflectivity of agents are illuminated by experiments with planning agents. The underlying planning system is a general purpose system, about which no empirical results exist besides traditional static benchmark tests. We analyze the interplay between heuristics for selecting goals, viewing range, commitment strategies, explorativeness, and trust in the persistence of the world and uncover properties of the agent, the planning engine and the chosen test scenario: Tileworld. I.\n\n2-Hop Neighbour:\nTeam Formation by Self-Interested Mobile Agents . A process of team formation by autonomous agents in a distributed environment is presented. Since the environment is distributed, there are serious problems with communication and consistent decision making inside a team. To deal with these problems, the standard technique of token passing in a computer network is applied. The passing cycle of the token serves as the communication route. It assures consistent decision making inside the team maintaining its organizational integrity. On the other hand it constitutes a component of the plan of the cooperative work performed by a complete team. Two algorithms for team formation are given. The first one is based on simple self-interested agents that still can be viewed as reactive agents (see [14]) although augmented with knowledge, goal, and cooperation mechanisms. The second one is based on sophisticated self-interested agents. Moreover, the algorithm based on fully cooperative agents, which is an adaptation of the static ...\n\n2-Hop Neighbour:\nCurrent State of the Art in Distributed Autonomous Mobile Robotics . As research progresses in distributed robotic systems, more and more aspects of multi-robot systems are being explored. This article surveys the current state of the art in distributed mobile robot systems. Our focus is principally on research that has been demonstrated in physical robot implementations. We have identi ed eight primary research topics within multi-robot systems | biological inspirations, communication, architectures, localization/mapping/exploration, object transport and manipulation, motion coordination, recon gurable robots, and learning { and discuss the current state of research in these areas. As we describe each research area, we identify some key open issues in multi-robot team research. We conclude by identifying several additional open research issues in distributed mobile robotic systems. 1 Introduction The eld of distributed robotics has its origins in the late-1980's, when several researchers began investigating issues in multiple mobile robot systems....", "role": "user" }, { "content": "<answer>Artificial Intelligence</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Implementing Teams of Agents Playing Simulated Robot Soccer . This article is intended to present an overview of the issues related to implementing teams of cooperating agents playing simulated robot soccer. First we introduce the concept of robot soccer and the simulated environment. Then we discuss why the (simulated) robot soccer is an interesting application area from the point of view of robotics and artificial intelligence. The main part of the paper contains a discussion of agent architectures, both from theoretical and practical point of view. Later we discuss how to combine individual agents into teams having common strategies and goals. We also discuss learning, both on individual and team levels. 1 Introduction Robot soccer is a growing area of interest for the robotics and artificial intelligence communities. There are many reasons for that. The main one is the complexity of the domain together with a set of well-defined rules governing the behaviour of the agents in this domain. The domain is suited for experiments both in ...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe RoboCup Synthetic Agent Challenge 97 RoboCup Challenge offers a set of challenges for intelligent agent researchers using a friendly competition in a dynamic, real-time, multiagent domain. While RoboCup in general envisions longer range challenges over the next few decades, RoboCup Challenge presents three specific challenges for the next two years: (i) learning of individual agents and teams; (ii) multi-agent team planning and plan-execution in service of teamwork; and (iii) opponent modeling. RoboCup Challenge provides a novel opportunity for machine learning, planning, and multi-agent researchers --- it not only supplies a concrete domain to evaluate their techniques, but also challenges researchers to evolve these techniques to face key constraints fundamental to this domain: real-time, uncertainty, and teamwork. 1 Introduction RoboCup (The World Cup Robot Soccer) is an attempt to promote AI and robotics research by providing a common task, Soccer, for evaluation of various theories, algorithms, and agent architectur...\n\n2-Hop Neighbour:\nTeam Formation by Self-Interested Mobile Agents . A process of team formation by autonomous agents in a distributed environment is presented. Since the environment is distributed, there are serious problems with communication and consistent decision making inside a team. To deal with these problems, the standard technique of token passing in a computer network is applied. The passing cycle of the token serves as the communication route. It assures consistent decision making inside the team maintaining its organizational integrity. On the other hand it constitutes a component of the plan of the cooperative work performed by a complete team. Two algorithms for team formation are given. The first one is based on simple self-interested agents that still can be viewed as reactive agents (see [14]) although augmented with knowledge, goal, and cooperation mechanisms. The second one is based on sophisticated self-interested agents. 
Moreover, the algorithm based on fully cooperative agents, which is an adaptation of the static ...\n\n2-Hop Neighbour:\nCyberoos2000: Experiments with Emergent Tactical Behaviour this paper is that, rather than defining situated or tactical reasoning ad hoc, it is desirable to categorise agents according to their functionality and reactions to the environment, and identify corresponding classes of action theories and agent architectures. Then, the reasoning exhibited by agents of a certain type (and validated by particular action theories) can be declared to be situated, tactical, strategic, social and so on. In other words, the principal target is a systematic description of increasing levels of agent reasoning abilities. The results reported in [8, 9, 10] demonstrated that this is achievable at the situated level. Preliminary results on the systematic models for basic tactical behaviour were obtained as well [11]. This work intends to use this framework in experimenting with emergent tactical teamwork; and thus build up empirical results and intuition necessary to advance the systematic methodology towards collaborative goal-oriented agents.\n\n2-Hop Neighbour:\nLearning of Kick in Artificial Soccer Soccer simulation is a suitable domain for research in artificial intelligence. This paper describes a new ball kicking skill, that uses case based learning. In many situations a single kick command is not sufficient to reach the desired ball movement. Hence a skill is needed, that finds a suitable sequence of kicks. The new kicking skill was developed for the AT Humboldt artificial soccer team.\n\n2-Hop Neighbour:\nPlaying Soccer with Legged Robots Sony has provided a remarkable platform for research and development in robotic agents, namely fully autonomous legged robots. In this paper, we describe our work using Sony's legged robots to participate at the RoboCup'98 legged robot demonstration and competition. Robotic soccer represents a very challenging environment for research into systems with multiple robots that need to achieve concrete objectives, particularly in the presence of an adversary. Furthermore RoboCup'98 offers an excellent opportunity for robot entertainment. We introduce the RoboCup context and briefly present Sony's legged robot. We developed a vision-based navigation and a Bayesian localization algorithm. Team strategy is achieved through pre-defined behaviors and learning by instruction. 1 Introduction Problem solving in complex domains necessarily involves multiple agents, dynamic environments, and the need for learning from feedback and previous experience. Robotic soccer is an example of one such complex...\n\n2-Hop Neighbour:\nA Layered Approach to Learning Client Behaviors in the RoboCup Soccer Server In the past few years, Multiagent Systems (MAS) has emerged as an active subfield of Artificial Intelligence (AI). Because of the inherent complexity of MAS, there is much interest in using Machine Learning (ML) techniques to help build multiagent systems. Robotic soccer is a particularly good domain for studying MAS and Multiagent Learning. Our approach to using ML as a tool for building Soccer Server clients involves layering increasingly complex learned behaviors. In this article, we describe two levels of learned behaviors. First, the clients learn a low-level individual skill that allows them to control the ball effectively. Then, using this learned skill, they learn a higher-level skill that involves multiple players. 
For both skills, we describe the learning method in detail and report on our extensive empirical testing. We also verify empirically that the learned skills are applicable to game situations. 1 Introduction In the past few years, Multiagent Systems (MAS) has emerge...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Issues in Agent-Oriented Software Engineering In this paper, I will discuss the conceptual foundation of agent-oriented software development by relating the fundamental elements of the agent-oriented view to those of other, well established programming paradigms, especially the object-oriented approach. Furthermore, I will motivate the concept of autonomy as the basic property of the agent-oriented school and discuss the development history of programming paradigms that lead to this perspective on software systems. The paper will be concluded by an outlook on how the new paradigm can change the way we think about software systems.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nBDI Agents: from Theory to Practice The study of computational agents capable of rational behaviour has received a great deal of attention in recent years. Theoretical formalizations of such agents and their implementations have proceeded in parallel with little or no connection between them. This paper explores a particular type of rational agent, a Belief-Desire-Intention (BDI) agent. The primary aim of this paper is to integrate (a) the theoretical foundations of BDI agents from both a quantitative decision-theoretic perspective and a symbolic reasoning perspective; (b) the implementations of BDI agents from an ideal theoretical perspective and a more practical perspective; and (c) the building of large-scale applications based on BDI agents. In particular, an air-traffic management application will be described from both a theoretical and an implementation perspective. Introduction The design of systems that are required to perform high-level management and control tasks in complex dynamic environments is becoming ...\n\n1-Hop Neighbour:\nDesigning Agent-Oriented Systems by Analysing Agent Interactions . We propose a preliminary methodology for agent-oriented software engineering based on the idea of agent interaction analysis. This approach uses interactions between undetermined agents as the primary component of analysis and design. Agents as a basis for software engineering are useful because they provide a powerful and intuitive abstraction which can increase the comprehensibility of a complex design. The paper describes a process by which the designer can derive the interactions that can occur in a system satisfying the given requirements and use them to design the structure of an agent-based system, including the identification of the agents themselves. We suggest that this approach has the flexibility necessary to provide agent-oriented designs for open and complex applications, and has value for future maintenance and extension of these systems. 1\n\n1-Hop Neighbour:\nA survey of Agent-Oriented Software Engineering Agent-Oriented Software Engineering is one of the most recent contributions to the field of Software Engineering. It has several benefits compared to existing development approaches, in particular the ability to let agents represent high-level abstractions of active entities in a software system. 
This paper gives an overview of recent research and industrial applications of both general high-level methodologies and on more specific design methodologies for industry-strength software engineering.\n\n2-Hop Neighbour:\nA Logic of BDI Agents with Procedural Knowledge In this paper, we present a new logic for specifying the behaviour of multi-agent systems. In this logic, agents are viewed as BDI systems, in that their state is characterised in terms of beliefs, desires, and intentions: the semantics of the BDI component of the logic are based on the wellknown system of Rao and Georgeff. In addition, agents have available to them a library of plans, representing their `know-how': procedural knowledge about how to achieve their intentions. These plans are, in effect, programs, that specify how a group of agents can work in parallel to achieve certain ends. The logic provides a rich set of constructs for describing the structure and execution of plans. Some properties of the logic are investigated, (in particular, those relating to plans), and some comments on future work are presented. 1 Introduction There is currently much international interest in computer systems that go under the banner of intelligent agents [17]. Crudely, an intelligent agent i...\n\n2-Hop Neighbour:\nPractical Reasoning with Procedural Knowledge (A Logic of BDI Agents with Know-How) . In this paper, we present a new logic for specifying the behaviour of multi-agent systems. In this logic, agents are viewed as BDI systems, in that their state is characterised in terms of beliefs, desires, and intentions: the semantics of the BDI component of the logic are based on the well-known system of Rao and Georgeff. In addition, agents have available to them a library of plans, representing their `know-how': procedural knowledge about how to achieve their intentions. These plans are, in effect, programs, that specify how a group of agents can work in parallel to achieve certain ends. The logic provides a rich set of constructs for describing the structure and execution of plans. Some properties of the logic are investigated, (in particular, those relating to plans), and some comments on future work are presented. 1 Introduction There is currently much international interest in computer systems that go under the banner of intelligent agents [16]. Crudely, an intel...\n\n2-Hop Neighbour:\nAgent-Oriented Software Engineering Agent-oriented techniques represent an exciting new means of analysing, designing and building complex software systems. They have the potential to significantly improve current practice in software engineering and to extend the range of applications that can feasibly be tackled. Yet, to date, there have been few serious attempts to cast agent systems as a software engineering paradigm. This paper seeks to rectify this omission. Specifically, it will be argued that: (i) the conceptual apparatus of agent-oriented systems is well-suited to building software solutions for complex systems and (ii) agent-oriented approaches represent a genuine advance over the current state of the art for engineering complex systems. Following on from this view, the major issues raised by adopting an agent-oriented approach to software engineering are highlighted and discussed.\n\n2-Hop Neighbour:\nVirtual Enterprise Design - BDI Agents vs. Objects Current research identifying architectures for a virtual enterprise has moved from information modelling to role modelling. 
Thus, a high level of autonomy results from the distribution of responsibilities, capabilities, and knowledge among different business units in the virtual enterprise at the design stage. Current trends tend towards using object-oriented technology as an effective abstract system design and implementation methodology. We argue that applying the software agent paradigm to the virtual enterprise provides various advantages on both the design and operational levels. We further show that the Belief Desire Intention agent architecture has additional abilities of mapping real world business unit autonomy and interaction. We also introduce the Belief Desire Intention agent paradigm capability of facilitating highly flexible (agile) enterprise design and implementation. 1\n\n2-Hop Neighbour:\nHow to Avoid Knowing It All Beliefs have been formally modelled in the last decades using doxastic logics. The possible worlds model and its associated Kripke semantics provide an intuitive semantics for these logics, but they seem to commit us to model agents that are logically omniscient (they believe every classical tautology) and perfect reasoners (their beliefs are closed under classical deductive closure). Thus, this model would not be appropriate to model non-ideal agents, that have resource limitations that prevent them from attaining such levels of doxastic competence. This report contains a statement of these problems and a brief survey of some of the most interesting approaches that have been suggested to overcome them. Contents 1 Formal models of belief 3 1.1 Possible worlds and Kripke semantics . . . . . . . . . . . . . . . . . . . . . 3 1.2 Logical omniscience and perfect reasoning . . . . . . . . . . . . . . . . . . 5 2 Avoiding logical omniscience 7 2.1 Syntactic approaches . . . . . . . ...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Design and Implementation of the OLOG Deductive Object-Oriented Database Management System . OLOG is a novel deductive database system for advanced intelligent information system applications. It directly supports effective storage, efficient access and inference of large amount of persistent data with complex structures. It provides a SQL-like data definition language and data manipulation language, and a declarative rule-based query language. It combines the best of the deductive, object-oriented, and object-relational approaches in a uniform framework. This paper describes the design and implementation of the OLOG system. 1 Introduction Deductive, object-oriented, and object-relational databases are three important extensions of the traditional relational database technology. Deductive databases stem from the integration of logic programming and relational databases. It offers representational and operational uniformity, reasoning capabilities, recursion, declarative querying, efficient secondary storage access, etc. However, deductive databases based on relational databas...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nDeductive Database Languages: Problems and Solutions this paper, we discuss these problems from four different aspects: complex values, object orientation, higher-orderness, and updates. In each case, we examine four typical languages that address the corresponding issues.\n\n1-Hop Neighbour:\nDesign and Implementation of the ROL System ROL is a deductive object-oriented database system developed at the University of Regina. It effectively integrates important features of deductive databases and object-oriented databases in a uniform framework and provides a uniform rule-based declarative language for defining, manipulating and querying a database. This paper describes the latest implementation of ROL. 1 Introduction In the past decade a lot of interests arose in integrating deductive and object-oriented databases to gain the best of the two approaches such as recursion, declarative querying, and firm logical foundations from deductive approaches, and object identity, complex objects, classes, class hierarchy, property inheritance with overriding and schema from object-oriented approach. A number of deductive object-oriented database languages have been proposed, such as O-logic [17], revised O-logic [11], C-logic [8], IQL [2], IQL2[1], Flogic [10], LOGRES [7], LLO [16], LOL [6], CORAL++[19], Datalog method [3], DLT ...\n\n1-Hop Neighbour:\nRelationlog: A Typed Extension to Datalog with Sets and Tuples This paper presents a novel logic programming based language for nested relational and complex value models called Relationlog. It stands in the same relationship to the nested relational and complex value models as Datalog stands to the relational model. The main novelty of the language is the introduction of powerful mechanisms, namely, partial and complete set terms, for representing and manipulating both partial and complete information on nested sets, tuples and relations. They generalize the set grouping and set enumeration mechanisms of LDL and allow the user to directly encode the open and closed world assumptions on nested sets, tuples, and relations. 
They allow direct inference and access to deeply embedded values in a complex value relation as if the relation is normalized, which greatly increases the ease of use of the language. As a result, the extended relational algebra operations can be represented in Relationlog directly, and more importantly, recursively in a way similar to Datalog. Like Datalog, Relationlog has a well-defined Herbrand model-theoretic semantics, which captures the intended semantics of nested sets, tuples and relations, and also a well-defined proof-theoretic semantics which coincides with its model-theoretic semantics.\n\n2-Hop Neighbour:\nXML Declarative Description: A Language for the Semantic Web this document, you agree to all provisions of the copyright laws protecting it. a Web resource's semantics. It employs XML as its bare syntax and enhances XML expressive power by employing Declarative Description theory. 8 A description in XDD is a set of ordinary XML elements, extended XML elements with variables, and the XML elements' relationships in terms of XML clauses. An ordinary XML element denotes a semantic unit and is a surrogate of an information item in the real application domain. An extended XML element represents implicit information or a set of semantic units. Clauses express rules, conditional relationships, integrity constraints, and ontological axioms. We define the precise and formal semantics of an XDD description as a set of ordinary XML elements, without employing other formalisms. Important axioms that are missing in XML and RDF but expressible in XDD include symmetry, composition-of, and inverse relations. As an example of the inverse-relation axiom, consider the XML clauses A and B in Figure 1a. They model the Creator and PubRAy8\"xW properties' inverses assumed by some particular domain. Figure 1b then gives an example of representing an RDF statement C, \"A creator of a document entitled `XDD language' is John.\" The semantics of an XDD description, which comprises the clauses A, B, and C, will also contain an RDF statement, \"A publication of John is `XDD language\"' (Figure 1c), hence allowing inverse inference of such implicit information. Obviously, this axiom cannot be represented in RDF. XDD can directly represent all XMLbased application markup languages. It also can simply represent XML applications that provide common conventions of semantics, syntax, and structures for certain specific domains. In addition to RDF, these domai...\n\n2-Hop Neighbour:\nA Rule-based Query Language for HTML With the recent popularity of the web, enormous amount of information is now available on line. Most web documents available over the web are in HTML format and are hierarchically structured in nature. How to query such web documents based on their internal hierarchical structure becomes more and more important. In this paper, we present a rule-based language called WebQL to support effective and flexible web queries. Unlike other web query languages, WebQL is a high level declarative query language with a logical semantics. It allows us to query web documents based on their internal hierarchical structures. It supports not only negation and recursion, but also query result restructuring in a natural way. 
We also describe the implementation of the system that supports the WebQL query language.\n\n2-Hop Neighbour:\nOverview of Datalog Extensions with Tuples and Sets Datalog (with negation) is the most powerful query language for relational database with a well-defined declarative semantics based on the work in logic programming. However, Datalog only allows inexpressive flat structures and cannot directly support complex values such as nested tuples and sets common in novel database applications. For these reasons, Datalog has been extended in the past several years to incorporate tuple and set constructors. In this paper, we examine four different Datalog extensions: LDL, COL, Hilog and Relationlog. 1 Introduction Databases and logic programming are two independently developed areas in computer science. Database technology has evolved in order to effectively and efficiently organize, manage and maintain large volumes of ever increasingly complex data reliably in various memory devices. The underlying structure of databases has been the primary focus of research which leads to the development of data models. The most well-known and widely used da...\n\n2-Hop Neighbour:\nOLOG: A Deductive Object Database Language (Extended Abstract) Mengchi Liu Department of Computer Science, University of Regina Regina, Saskatchewan, Canada S4S 0A2 [email protected] http://www.cs.uregina.ca/mliu Abstract. Deductive object-oriented databases are intended to combine the best of the deductive and object-oriented approaches. However, some important object-oriented features are not properly supported in the existing proposals. This paper proposes a novel deductive language that supports important structurally object-oriented features such as object identity, complex objects, typing, classes, class hierarchies, multiple property inheritance with overriding, conflict-handling, and blocking, and schema definitions in a uniform framework. The language effectively integrates useful features in deductive and object-oriented database languages. The main novel feature is the logical semantics that cleanly accounts for those structurally object-oriented features that are missing in object-oriented database languages. Therefor...
[ { "content": "Classify the node 'The Structure of Object Transportation and Orientation in Human-Computer Interaction An experiment was conducted to investigate the relationship between object transportation and object orientation by the human hand in the context of humancomputer interaction (HCI). This work merges two streams of research: the structure of interactive manipulation in HCI and the natural hand prehension in human motor control. It was found that object transportation and object orientation have a parallel, interdependent structure which is generally persistent over different visual feedback conditions. The notion of concurrency and interdependence of multidimensional visuomotor control structure can provide a new framework for human-computer interface evaluation and design. Keywords Direct manipulation, input device, multi-dimensional control, visuomotor control, visual conditions, information processing, interface design, virtual reality. INTRODUCTION Object manipulation is a basic operation in humancomputer interaction (HCI). Modern computer technology advances towards affording m...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nRotating Virtual Objects with Real Handles Times for virtual object rotations reported in the literature are of the order of ten seconds or more and this is far longer than it takes to manually orient a \"real\" object, such as a cup. This is a report of a series of experiments designed to investigate the reasons for this difference and to help design interfaces for object manipulation. The results suggest that two major factors are important. Having the hand physically in the same location as the virtual object being manipulated is one. The other is based on whether the object is being rotated to a new, randomly determined orientation, or is always rotated to the same position. Making the object held in the hand have the same physical shape as the object being visually manipulated was not found to be a significant factor. The results are discussed in the context of interactive virtual environments. Categories and Subject Descriptors: H.1.2 [Models and Principles]: User/Machine Systems -- human factors; I.3.6 [Computer Graphics]:...\n\n1-Hop Neighbour:\nSystem Lag Tests for Augmented and Virtual Environments We describe a simple technique for accurately calibrating the temporal lag in augmented and virtual environments within the Enhanced Virtual Hand Lab (EVHL), a collection of hardware and software to support research on goal-directed human hand motion. Lag is the sum of various delays in the data pipeline associated with sensing, processing, and displaying information from the physical world to produce an augmented or virtual world. Our main calibration technique uses a modified phonograph turntable to provide easily tracked periodic motion, reminiscent of the pendulum-based calibration technique of Liang, Shaw and Green. Measurements show a three-frame (50 ms) lag for the EVHL. A second technique, which uses a specialized analog sensor that is part of the EVHL, provides a \"closed loop\" calibration capable of sub-frame accuracy. Knowing the lag to sub-frame accuracy enables a predictive tracking scheme to compensate for the end-toend lag in the data pipeline. 
We describe both techniques and the EVHL environment in which they are used.", "role": "user" }, { "content": "<answer>Human-Computer Interaction</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Intelligent Information Triage In many applications, large volumes of time-sensitive textual information require triage: rapid, approximate prioritization for subsequent action. In this paper, we explore the use of prospective indications of the importance of a time-sensitive document, for the purpose of producing better document filtering or ranking. By prospective, we mean importance that could be assessed by actions that occur in the future. For example, a news story may be assessed (retrospectively) as being important, based on events that occurred after the story appeared, such as a stock price plummeting or the issuance of many follow-up stories. If a system could anticipate (prospectively) such occurrences, it could provide a timely indication of importance. Clearly, perfect prescience is impossible. However, sometimes there is sufficient correlation between the content of an information item and the events that occur subsequently. We describe a process for creating and evaluating approximate information-triage procedures that are based on prospective indications. Unlike many information-retrieval applications for which document labeling is a laborious, manual process, for many prospective criteria it is possible to build very large, labeled, training corpora automatically. Such corpora can be used to train text classification procedures that will predict the (prospective) importance of each document. This paper illustrates the process with two case studies, demonstrating the ability to predict whether a news story will be followed by many, very similar news stories, and also whether the stock price of one or more companies associated with a news story will move significantly following the appearance of that story. We conclude by discussing how the comprehensibility of the learned classifiers can be critical to success. 1.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nImproving Text Categorization Methods for Event Tracking Automated tracking of events from chronologically ordered document streams is a new challenge for statistical text classification. Existing learning techniques must be adapted or improved in order to effectively handle difficult situations where the number of positive training instances per event is extremely small, the majority of training documents are unlabelled, and most of the events have a short duration in time. We adapted several supervised text categorization methods, specifically several new variants of the k-Nearest Neighbor (kNN) algorithm and a Rocchio approach, to track events. All of these methods showed significant improvement (up to 71% reduction in weighted error rates) over the performance of the original kNN algorithm on TDT benchmark collections, making kNN among the top-performing systems in the recent TDT3 official evaluation. Furthermore, by combining these methods, we significantly reduced the variance in performance of our event tracking system over different ...\n\n1-Hop Neighbour:\nLanguage Models for Financial News Recommendation We present a unique approach to identifying news stories that influence the behavior of financial markets. Specifically, we describe the design and implementation of Ænalyst, a system that can recommend interesting news stories -- stories that are likely to affect market behavior. 
Ænalyst operates by correlating the content of news stories with trends in financial time series. We identify trends in time series using piecewise linear fitting and then assign labels to the trends according to an automated binning procedure. We use language models to represent patterns of language that are highly associated with particular labeled trends. Ænalyst can then identify and recommend news stories that are highly indicative of future trends. We evaluate the system in terms of its ability to recommend the stories that will affect the behavior of the stock market. We demonstrate that stories recommended by Ænalyst could be used to profitably predict forthcoming trends in stock prices.\n\n1-Hop Neighbour:\nBoosting and Rocchio Applied to Text Filtering We discuss two learning algorithms for text filtering: modified Rocchio and a boosting algorithm called AdaBoost. We show how both algorithms can be adapted to maximize any general utility matrix that associates cost (or gain) for each pair of machine prediction and correct label. We first show that AdaBoost significantly outperforms another highly effective text filtering algorithm. We then compare AdaBoost and Rocchio over three large text filtering tasks. Overall both algorithms are comparable and are quite effective. AdaBoost produces better classifiers than Rocchio when the training collection contains a very large number of relevant documents. However, on these tasks, Rocchio runs much faster than AdaBoost. 1 Introduction With the explosion in the amount of information available electronically, information filtering systems that automatically send articles of potential interest to a user are becoming increasingly important. If users indicate their interests to a filtering system...\n\n2-Hop Neighbour:\nLearned Models for Continuous Planning We are interested in the nature of activity -- structured behavior of nontrivial duration -- in intelligent agents. We believe that the development of activity is a continual process in which simpler activities are composed, via planning, to form more sophisticated ones in a hierarchical fashion. The success or failure of a planner depends on its models of the environment, and its ability to implement its plans in the world. We describe an approach to generating dynamical models of activity from real-world experiences and explain how they can be applied towards planning in a continuous state space. 1 Introduction We are interested in the problem of how activity emerges in an intelligent agent. We believe that activity plays a critical role in the development of many high-level cognitive structures: classes, concepts, and language, to name a few. Thus, it is our goal to derive and implement a theory of the development of activity in intelligent agents and implement it using the Pioneer...\n\n2-Hop Neighbour:\nCombining Multiple Learning Strategies for Effective Cross Validation Parameter tuning through cross-validation becomes very difficult when the validation set contains no or only a few examples of the classes in the evaluation set. We address this open challenge by using a combination of classifiers with different performance characteristics to effectively reduce the performance variance on average of the overall system across all classes, including those not seen before. 
This approach allows us to tune the combination system on available but less-representative validation data and obtain smaller performance degradation of this system on the evaluation data than using a single-method classifier alone. We tested this approach by applying k-Nearest Neighbor, Rocchio and Language Modeling classifiers and their combination to the event tracking problem in the Topic Detection and Tracking (TDT) domain, where new classes (events) are created constantly over time, and representative validation sets for new classes are often difficult to ob...\n\n2-Hop Neighbour:\nMachine Learning and Natural Language Processing In this report, some collaborative work between the fields of Machine Learning (ML) and Natural Language Processing (NLP) is presented. The document is structured in two parts. The first part includes a superficial but comprehensive survey covering the state--of--the--art of machine learning techniques applied to natural language learning tasks. In the second part, a particular problem, namely Word Sense Disambiguation (WSD), is studied in more detail. In doing so, four algorithms for supervised learning, which belong to different families, are compared in a benchmark corpus for the WSD task. Both qualitative and quantitative conclusions are drawn. This document stands for the complementary documentation for the conference \"Aprendizaje autom 'atico aplicado al procesamiento del lenguaje natural\", given by the author within the course: \"Curso de Industrias de la Lengua: La Ingenier'ia Lingu'istica en la Sociedad de la Informaci'on\", Fundaci'on Duques de Soria. Soria. July 2000. 1 Con...\n\n2-Hop Neighbour:\nStatistical Phrases in Automated Text Categorization In this work we investigate the usefulness of n-grams for document indexing in text categorization (TC). We call n-gram a set t k of n word stems, and we say that t k occurs in a document d j when a sequence of words appears in d j that, after stop word removal and stemming, consists exactly of the n stems in t k , in some order. Previous researches have investigated the use of n-grams (or some variant of them) in the context of specific learning algorithms, and thus have not obtained general answers on their usefulness for TC. In this work we investigate the usefulness of n-grams in TC independently of any specific learning algorithm. We do so by applying feature selection to the pool of all #-grams (# # n), and checking how many n-grams score high enough to be selected in the top # #-grams. We report the results of our experiments, using several feature selection functions and varying values of #, performed on the Reuters-21578 standard TC benchmark. We also report results of making actual use of the selected n-grams in the context of a linear classifier induced by means of the Rocchio method.\n\n2-Hop Neighbour:\nTowards a Comprehensive Topic Hierarchy for News To date, a comprehensive, Yahoo-like hierarchy of topics has yet to be offered for the domain of news. The Yahoo approach of managing such a hierarchy --- hiring editorial staff to read documents and correctly assign them to topics --- is simply not practical in the domain of news. Far too many stories are written and made available online everyday. While many Machine Learning methods exist for organising documents into topics, these methods typically require a large number of labelled training examples before performing accurately. 
When managing a large and ever-changing topic hierarchy, it is unlikely that there would be enough time to provide many examples per topic. For this reason, it would be useful to identify extra information within the domain of news that could be harnessed to minimise the number of labelled examples required to achieve reasonable accuracy. To this end, the notion of a semi-labelled document is introduced. These documents, which are partially labelled by th...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Topic-Driven Crawlers: Machine Learning Issues Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nExploration versus Exploitation in Topic Driven Crawlers Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers. The context available to a topic driven crawler allows for informed decisions about how to prioritize the links to be explored, given time and bandwidth constraints. We have developed a framework and a number of methods to evaluate the performance of topic driven crawler algorithms in a fair way, under limited memory resources. Quality metrics are derived from lexical features, link analysis, and a hybrid combination of the two. In this paper we focus on the issue of how greedy a crawler should be. Given noisy quality estimates of links in a frontier, we investigate what is an appropriate balance between a crawler's need to exploit this information to focus on the most promising links, and the need to explore links that appear suboptimal but might lead to more relevant pages. We show that exploration is essential to locate the most relevant pages under a number of quality measures, in spite of a penalty in the early stage of the crawl.\n\n1-Hop Neighbour:\nAuthoritative Sources in a Hyperlinked Environment The link structure of a hypermedia environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. Versions of this principle have been studied in the hypertext research community and (in a context predating hypermedia) through journal citation analysis in the field of bibliometrics. But for the problem of searching in hyperlinked environments such as the World Wide Web, it is clear from the prevalent techniques that the information inherent in the links has yet to be fully exploited. In this work we develop a new method for automatically extracting certain types of information about a hypermedia environment from its link structure, and we report on experiments that demonstrate its effectiveness for a variety of search problems on the www. The central problem we consider is that of determining the relative \"authority\" of pages in such environments. This issue is central to a number of basic hypertext search t...\n\n1-Hop Neighbour:\nMySpiders : Evolve your own intelligent Web crawlers Abstract. The dynamic nature of the World Wide Web makes it a challenge to find information that is bothrelevant and recent. Intelligent agents can complement the power of searchengines to meet this challenge. We present a Web tool called MySpiders, which implements an evolutionary algorithm managing a population of adaptive crawlers who browse the Web autonomously. Each agent acts as an intelligent client on behalf of the user, driven by a user query and by textual and linkage clues in the crawled pages. Agents autonomously decide which links to follow, which clues to internalize, when to spawn offspring to focus the search near a relevant source, and when to starve. 
The tool is available to the public as a threaded Java applet. We discuss the development and deployment of such a system. Keywords: web informational retrieval, topic-driver crawlers, online search, InfoSpiders, MySpiders, applet\n\n2-Hop Neighbour:\nFocused Web Crawling: A Generic Framework for Specifying the User Interest and for Adaptive Crawling Strategies Compared to the standard web search engines, focused crawlers yield good recall as well as good precision by restricting themselves to a limited domain. In this paper, we do not introduce another focused crawler, but we introduce a generic framework for focused crawling consisting of two major components: (1) Specification of the user interest and measuring the resulting relevance of a given web page. The proposed method of specifying the user interest by a formula combining atomic topics significantly improves the expressive power of the user. (2) Crawling strategy. Ordering the links at the crawl frontier is a challenging task since pages of a low relevance may be on a path to highly relevant pages. Thus, tunneling may be necessary. The explicit specification of the user interest allows us to define topic-specific strategies for tunneling. Our system Ariadne is a prototype implementation of the proposed framework. An experimental evaluation of different crawling strategies demonstrates the performance gain obtained by focusing a crawl and by dynamically adapting the focus. 1\n\n2-Hop Neighbour:\nConstructing, Organizing, and Visualizing Collections of Topically Related Web Resources For many purposes, the Web page is too small a unit of interaction and analysis. Web sites are structured multimedia documents consisting of many pages, and users often are interested in obtaining and evaluating entire collections of topically related sites. Once such a collection is obtained, users face the challenge of exploring, comprehending, and organizing the items. We report four innovations that address these user needs. • We replaced the web page with the web site as the basic unit of interaction and analysis. • We defined a new information structure, the clan graph, that groups together sets of related sites. • We augment the representation of a site with a site profile, information about site structure and content that helps inform user evaluation of a site. • We invented a new graph visualization, the auditorium visualization, that reveals important structural and content properties of sites within a clan graph. Detailed analysis and user studies document the utility of this approach. The clan graph construction algorithm tends to filter out irrelevant sites and discover additional relevant items. The auditorium visualization, augmented with drill down capabilities to explore site profile data, helps users to find high-quality sites as well as sites that serve a particular function.\n\n2-Hop Neighbour:\nWeb Crawling Agents for Retrieving Biomedical Information Autonomous agents for topic driven retrieval of information from the Web are currently a very active area of research. The ability to conduct real time searches for information is important for many users including biomedical scientists, health care professionals and the general public. We present preliminary research on different retrieval agents tested on their ability to retrieve biomedical information, whose relevance is assessed using both genetic and ontological expertise. 
In particular, the agents are judged on their performance in fetching information about diseases when given information about genes. We discuss several key insights into the particular challenges of agent based retrieval learned from our initial experience in the biomedical domain.\n\n2-Hop Neighbour:\nThe Missing Link - A Probabilistic Model of Document Content and Hypertext Connectivity We describe a joint probabilistic model for modeling the contents and inter-connectivity of document collections such as sets of web pages or research paper archives. The model is based on a probabilistic factor decomposition and allows identifying principal topics of the collection as well as authoritative documents within those topics. Furthermore, the relationships between topics is mapped out in order to build a predictive model of link content. Among the many applications of this approach are information retrieval and search, topic identification, query disambiguation, focused web crawling, web authoring, and bibliometric analysis.\n\n2-Hop Neighbour:\nBuilding Efficient and Effective Metasearch Engines Frequently a user's information needs are stored in the databases of multiple search engines. It is inconvenient and inefficient for an ordinary user to invoke multiple search engines and identify useful documents from the returned results. To support unified access to multiple search engines, a metasearch engine can be constructed. When a metasearch engine receives a query from a user, it invokes the underlying search engines to retrieve useful information for the user. Metasearch engines have other benefits as a search tool such as increasing the search coverage of the Web and improving the scalability of the search. In this article, we survey techniques that have been proposed to tackle several underlying challenges for building a good metasearch engine. Among the main challenges, the database selection problem is to identify search engines that are likely to return useful documents to a given query. The document selection problem is to determine what documents to retrieve from each identified search engine. The result merging problem is to combine the documents returned from multiple search engines. We will also point out some problems that need to be further researched.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'ITTALKS: A Case Study in the Semantic Web and DAML . Effective use of the vast quantity of information now available on the web will require the use of \"Semantic Web\" markup languages such as the DARPA Agent Markup Language (DAML). Such languages will enable the automated gathering and processing of much information that is currently available but insufficiently utilized. Effectively, such languages will facilitate the integration of multi-agent systems with the existing information infrastructure. As part of our exploration of Semantic Web technology, and DAML in particular, we have constructed ITTALKS, a web-based system for automatic and intelligent notification of information technology talks. In this paper, we describe the ITTALKS system, and discuss the numerous ways in which the use of Semantic Web concepts and DAML extend its ability to provide an intelligent online service to both the human community and the agents assisting them. 1' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nCiteSeer: An Autonomous Web Agent for Automatic Retrieval and Identification of Interesting Publications Published research papers available on the World Wide Web (WWW or Web) are often poorly organized, often exist in non-text form (e.g. Postscript) documents, and increase in quantity daily. Significant amounts of time and effort are commonly needed to find interesting and relevant publications on the Web. We have developed a Web based information agent that assists the user in the process of performing a scientific literature search. Given a set of keywords, the agent uses Web search engines and heuristics to locate and download papers. The papers are parsed in order to extract information features such as the abstract and individually identified citations which are placed into an SQL database. The agent's Web interface can be used to find relevant papers in the database using keyword searches, or by navigating the links between papers formed by the citations. Links to both \"citing\" and \"cited\" publications can be followed. In addition to simple browsing and keyword searches, the agent ...\n\n1-Hop Neighbour:\nLearning to Tag for Information Extraction from Text . LearningPINOCCHIO is an algorithm for adaptive information extraction. It learns template filling rules that insert SGML tags into texts. LearningPINOCCHIO is based on a covering algorithm that learns rules by bottom-up generalization of instances in a tagged corpus. It has been tested on three scenarios in informal domains in two languages (Italian and English). Experiments report excellent results with respect to the current state of the art. 1 Introduction By general agreement the main barriers to wide use and commercialization of IE are the difficulties in adapting systems to new applications and domains. In the last years there has been increasing interest in applying machine learning to Information Extraction from text [13, 3, 9, 1, 11]. In particular there is an increasing interest in the application of adaptive IE to Web pages [12, 10] and to informal domains (rental ads, email messages, etc.) [15, 8, 2] for building fully automated systems. This is due from one side to the...\n\n2-Hop Neighbour:\nLearning Information Extraction Rules for Semi-structured and Free Text . 
A wealth of on-line text information can be made available to automatic processing by information extraction (IE) systems. Each IE application needs a separate set of rules tuned to the domain and writing style. WHISK helps to overcome this knowledge-engineering bottleneck by learning text extraction rules automatically. WHISK is designed to handle text styles ranging from highly structured to free text, including text that is neither rigidly formatted nor composed of grammatical sentences. Such semistructured text has largely been beyond the scope of previous systems. When used in conjunction with a syntactic analyzer and semantic tagging, WHISK can also handle extraction from free text such as news stories. Keywords: natural language processing, information extraction, rule learning 1. Information extraction As more and more text becomes available on-line, there is a growing need for systems that extract information automatically from text data. An information extraction (IE) sys...\n\n2-Hop Neighbour:\nFEATURES: Real-time Adaptive Feature Learning and Document Learning for Web Search In this paper we report our research on building Features - an intelligent web search engine that is able to perform real-time adaptive feature (i.e., keyword) and document learning. Not only does Features learn from the user's document relevance feedback, but also automatically extracts and suggests indexing keywords relevant to a search query and learns from the user's keyword relevance feedback so that it is able to speed up its search process and to enhance its search performance. We design two efficient and mutual-benefiting learning algorithms that work concurrently, one for feature learning and the other for document learning. Features employs these algorithms together with an internal index database and a real-time meta-searcher so to perform adaptive real-time learning to find desired documents with as little relevance feedback from the user as possible. The architecture and performance of Features are also discussed. 1 Introduction As the world wide web rapidly evo...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Clustering Hypertext With Applications To Web Searching : Clustering separates unrelated documents and groups related documents, and is useful for discrimination, disambiguation, summarization, organization, and navigation of unstructured collections of hypertext documents. We propose a novel clustering algorithm that clusters hypertext documents using words (contained in the document), out-links (from the document) , and in-links (to the document). The algorithm automatically determines the relative importance of words, out-links, and in-links for a given collection of hypertext documents. We annotate each cluster using six information nuggets: summary, breakthrough, review, keywords, citation, and reference. These nuggets constitute high-quality information resources that are representatives of the content of the clusters, and are extremely effective in compactly summarizing and navigating the collection of hypertext documents. We employ web searching as an application to illustrate our results. Keywords: cluster annotation, feature comb...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nAutomatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. We also provide examples of ARC resource lists for the reader to examine.\n\n1-Hop Neighbour:\nSearching the world wide web The coverage and recency of the major World Wide Web search engines was analyzed, yielding some surprising results. The coverage of any one engine is significantly limited: No single engine indexes more than about one-third of the “indexable Web, ” the coverage of the six engines investigated varies by an order of magnitude, and combining the results of the six engines yields about 3.5 times as many documents on average as compared with the results from only one engine. Analysis of the overlap between pairs of engines gives an estimated lower bound on the size of the indexable Web of 320 million pages. The Internet has grown rapidly since its inception in December 1969 (1) and is anticipated to expand 1000 % over the next few years (2). The amount of scientific information and the number of electronic journals on the Internet continue to increase [about 1000 journals as of 1996 (2, 3)]. The Internet and the World Wide Web (the Web) represent significant advancements for the retrieval and dissemination of scientific and other literature and for the advancement of education (2, 4). With the introduction of full-text search engines such as AltaVista (www.\n\n1-Hop Neighbour:\nConcept Decompositions for Large Sparse Text Data using Clustering Abstract. 
Unlabeled document collections are becoming increasingly common and available; mining such data sets represents a major contemporary challenge. Using words as features, text documents are often represented as high-dimensional and sparse vectors–a few thousand dimensions and a sparsity of 95 to 99 % is typical. In this paper, we study a certain spherical k-means algorithm for clustering such document vectors. The algorithm outputs k disjoint clusters each with a concept vector that is the centroid of the cluster normalized to have unit Euclidean norm. As our first contribution, we empirically demonstrate that, owing to the high-dimensionality and sparsity of the text data, the clusters produced by the algorithm have a certain “fractal-like ” and “self-similar ” behavior. As our second contribution, we introduce concept decompositions to approximate the matrix of document vectors; these decompositions are obtained by taking the least-squares approximation onto the linear subspace spanned by all the concept vectors. We empirically establish that the approximation errors of the concept decompositions are close to the best possible, namely, to truncated singular value decompositions. As our third contribution, we show that the concept vectors are localized in the word space, are sparse, and tend towards orthonormality. In contrast, the singular vectors are global in the word space and are dense. Nonetheless, we observe the surprising fact that the linear subspaces spanned by the concept vectors and the leading singular vectors are quite close in the sense of small principal angles between them. In conclusion, the concept vectors produced by the spherical k-means\n\n2-Hop Neighbour:\nText Database Selection for Longer Queries A metasearch engine is a system that supports unified access to multiple local search engines. One of the main challenges in building a large-scale metasearch engine is to solve the database (search engine) selection problem, which is to efficiently and accurately determine a small number of potentially useful local search engines to invoke for each user query. For the database of each search engine, a representative which indicates approximately the contents of the database is created in advance to enable database selection. The representatives of all databases can be integrated into a single representative to make the selection process more scalable. While an integrated representative with high scalability has just been proposed and has been found to be effective for short queries, its effectiveness for longer queries is significantly lower. In the Internet environment, most queries initially submitted by users are short queries. However, it has been found that better search effectiveness can often be achieved when additional terms are added to the initial queries through query expansion or relevance feedback. The resulting queries are usually longer than the initial queries. In this paper, we propose a new method to construct database representatives and to decide which databases to select for longer queries. Experimental results are given to compare the performance of the new method with that of a previous method.\n\n2-Hop Neighbour:\nDocument Categorization and Query Generation on the World Wide Web Using WebACE We present WebACE, an agent for exploring and categorizing documents on the World Wide Web based on a user profile. 
The heart of the agent is an unsupervised categorization of a set of documents, combined with a process for generating new queries that is used to search for new related documents and for filtering the resulting documents to extract the ones most closely related to the starting set. The document categories are not given a priori. We present the overall architecture and describe two novel algorithms which provide significant improvement over Hierarchical Agglomeration Clustering and AutoClass algorithms and form the basis for the query generation and search component of the agent. We report on the results of our experiments comparing these new algorithms with more traditional clustering algorithms and we show that our algorithms are fast and scalable. y Authors are listed alphabetically. 1 Introduction The World Wide Web is a vast resource of information and services t...\n\n2-Hop Neighbour:\nTheseus: Categorization by Context Introduction The traditional approach to document categorization is categorization by content, since information for categorizing a document is extracted from the document itself. In a hypertext environment like the Web, the structure of documents and the link topology can be exploited to perform what we call categorization by context [Attardi 98]: the context surrounding a link in an HTML document is used for categorizing the document referred by the link. Categorization by context is capable of dealing also with multimedia material, since it does not rely on the ability to analyze the content of documents. Categorization by context leverages on the categorization activity implicitly performed when someone places or refers to a document on the Web. By focusing the analysis to the documents used by a group of people, one can build a catalogue tuned to the need of that group. Categorization by context is based on the following assumptions: 1\n\n2-Hop Neighbour:\nAn Approach to Relate the Web Communities Through Bipartite Graphs The Web harbors a large number of community structures. Early detection of community structures has many purposes such as reliable searching and selective advertising. In this paper we investigate the problem of extracting and relating the web community structures from a large collection of Web-pages by performing hyper-link analysis. The proposed algorithm extracts the potential community signatures by extracting the corresponding dense bipartite graph (DBG) structures from the given data set of web pages. Further, the proposed algorithm can also be used to relate the extracted community signatures. We report the experimental results conducted on 10 GB TREC (Text REtrieval Conference) data collection that contains 1.7 million pages and 21.5 million links. The results demonstrate that the proposed approach extracts meaningful community signatures and relates them.\n\n2-Hop Neighbour:\nExploiting Structure for Intelligent Web Search Together with the rapidly growing amount of online data we register an immense need for intelligent search engines that access a restricted amount of data as found in intranets or other limited domains. This sort of search engines must go beyond simple keyword indexing/matching, but they also have to be easily adaptable to new domains without huge costs. This paper presents a mechanism that addresses both of these points: first of all, the internal document structure is being used to extract concepts which impose a directorylike structure on the documents similar to those found in classified directories. 
Furthermore, this is done in an efficient way which is largely language independent and does not make assumptions about the document structure.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'C4-1: Building a community hierarchy for the Web based on Bipartite Graphs In this paper we propose an approach to extract and relate the communities by considering a community signature as a group of content creators that manifests itself as a set of interlinked pages. We abstract a community signature as a group of pages that form a dense bipartite graph (DBG), and proposed an algorithm to extract the DBGs from the given data set. Also, using the proposed approach, the extracted communities can be grouped to form a high-level communities. We apply the proposed algorithm on 10 GB TREC (Text REtrieval Conference) data set and extract a three-level community hierarchy. The extracted community hierarchy facilitates an easy analysis of low-level communities and provides a way to understand the sociology of the Web.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nAutomatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. 
We also provide examples of ARC resource lists for the reader to examine.\n\n1-Hop Neighbour:\nEfficient Identification of Web Communities We define a community on the web as a set of sites that have more links (in either direction) to members of the community than to non-members. Members of such a community can be efficiently identified in a maximum flow / minimum cut framework, where the source is composed of known members, and the sink consists of well-known non-members. A focused crawler that crawls to a fixed depth can approximate community membership by augmenting the graph induced by the crawl with links to a virtual sink node. The effectiveness of the approximation algorithm is demonstrated with several crawl results that identify hubs, authorities, web rings, and other link topologies that are useful but not easily categorized. Applications of our approach include focused crawlers and search engines, automatic population of portal categories, and improved filtering. Categories and Subject Descriptors H.2.8 [Database Management]: Database Applications - data mining; H.3.3 [Information Storage and Retrieval]: Information...\n\n2-Hop Neighbour:\nExploration versus Exploitation in Topic Driven Crawlers Topic driven crawlers are increasingly seen as a way to address the scalability limitations of universal search engines, by distributing the crawling process across users, queries, or even client computers. The context available to a topic driven crawler allows for informed decisions about how to prioritize the links to be explored, given time and bandwidth constraints. We have developed a framework and a number of methods to evaluate the performance of topic driven crawler algorithms in a fair way, under limited memory resources. Quality metrics are derived from lexical features, link analysis, and a hybrid combination of the two. In this paper we focus on the issue of how greedy a crawler should be. Given noisy quality estimates of links in a frontier, we investigate what is an appropriate balance between a crawler's need to exploit this information to focus on the most promising links, and the need to explore links that appear suboptimal but might lead to more relevant pages. We show that exploration is essential to locate the most relevant pages under a number of quality measures, in spite of a penalty in the early stage of the crawl.\n\n2-Hop Neighbour:\nText and Image Metasearch on the Web As the Web continues to increase in size, the relative coverage of Web search engines is decreasing, and search tools that combine the results of multiple search engines are becoming more valuable. This paper provides details of the text and image metasearch functions of the Inquirus search engine developed at the NEC Research Institute. For text metasearch, we describe features including the use of link information in metasearch, and provide statistics on the usage and performance of Inquirus and the Web search engines. For image metasearch, Inquirus queries multiple image search engines on the Web, downloads the actual images, and creates image thumbnails for display to the user. Inquirus handles image search engines that return direct links to images, and engines that return links to HTML pages. For the engines that return HTML pages, Inquirus analyzes the text on the pages in order to predict which images are most likely to correspond to the query. The individual image search engin...\n\n2-Hop Neighbour:\nWhat can you do with a Web in your Pocket? 
The amount of information available online has grown enormously over the past decade. Fortunately, computing power, disk capacity, and network bandwidth have also increased dramatically. It is currently possible for a university research project to store and process the entire World Wide Web. Since there is a limit on how much text humans can generate, it is plausible that within a few decades one will be able to store and process all the human-generated text on the Web in a shirt pocket. The Web is a very rich and interesting data source. In this paper, we describe the Stanford WebBase, a local repository of a significant portion of the Web. Furthermore, we describe a number of recent experiments that leverage the size and the diversity of the WebBase. First, we have largely automated the process of extracting a sizable relation of books (title, author pairs) from hundreds of data sources spread across the World Wide Web using a technique we call Dual Iterative Pattern Relation Extraction. Second, we have developed a global ranking of Web pages called PageRank based on the link structure of the Web that has properties that are useful for search and navigation. Third, we have used PageRank to develop a novel search engine called Google, which also makes heavy use of anchor text. All of these experiments rely significantly on the size and diversity of the WebBase. 1\n\n2-Hop Neighbour:\nEngineering a multi-purpose test collection for Web retrieval experiments Past research into text retrieval methods for the Web has been restricted by the lack of a test collection capable of supporting experiments which are both realistic and reproducible. The 1.69 million document WT10g collection is proposed as a multi-purpose testbed for experiments with these attributes, in distributed IR, hyperlink algorithms and conventional ad hoc retrieval. WT10g was constructed by selecting from a superset of documents in such a way that desirable corpus properties were preserved or optimised. These properties include: a high degree of inter-server connectivity, integrity of server holdings, inclusion of documents related to a very wide spread of likely queries, and a realistic distribution of server holding sizes. We confirm that WT10g contains exploitable link information using a site (homepage) finding experiment. Our results show that, on this task, Okapi BM25 works better on propagated link anchor text than on full text. Keywords: Web retrieval; Link-based ranking; Distributed information retrieval; Test collections 1\n\n2-Hop Neighbour:\nAn Approach to Relate the Web Communities Through Bipartite Graphs The Web harbors a large number of community structures. Early detection of community structures has many purposes such as reliable searching and selective advertising. In this paper we investigate the problem of extracting and relating the web community structures from a large collection of Web-pages by performing hyper-link analysis. The proposed algorithm extracts the potential community signatures by extracting the corresponding dense bipartite graph (DBG) structures from the given data set of web pages. Further, the proposed algorithm can also be used to relate the extracted community signatures. We report the experimental results conducted on 10 GB TREC (Text REtrieval Conference) data collection that contains 1.7 million pages and 21.5 million links. 
The results demonstrate that the proposed approach extracts meaningful community signatures and relates them.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'A System For Automatic Personalized Tracking of Scientific Literature on the Web We introduce a system as part of the CiteSeer digital library project for automatic tracking of scientific literature that is relevant to a user’s research interests. Unlike previous systems that use simple keyword matching, CiteSeer is able to track and recommend topically relevant papers even when keyword based query profiles fail. This is made possible through the use of a heterogenous profile to represent user interests. These profiles include several representations, including content based relatedness measures. The CiteSeer tracking system is well integrated into the search and browsing facilities of CiteSeer, and provides the user with great flexibility in tuning a profile to better match his or her interests. The software for this system is available, and a sample database is online as a public service.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nYarrow: A Real-Time Client Side Meta-Search Learner In this paper we report our research on building Yarrow - an intelligent web meta-search engine. The predominant feature of Yarrow is that in contrast to the lack of adaptive learning features in existing metasearch engines, Yarrow is equipped with a practically efficient on-line learning algorithm so that it is capable of helping the user to search for the desired documents with as little feedback as possible. Currently, Yarrow can query eight of the most popular search engines and is able to perform document parsing and indexing, and learning in real-time on client side. Its architecture and performance are also discussed. 1. Introduction As the world wide web evolves and grows so rapidly, web search, an interface between the human users and the vast information gold mine of the web, is becoming a necessary part of people's daily life. Designing and implementing practically effective web search tools is a challenging task. It calls for innovative methods and strategies f...\n\n1-Hop Neighbour:\nWebSail: From On-line Learning to Web Search In this paper we investigate the applicability of on-line learning algorithms to the real-world problem of web search. Consider that web documents are indexed using n Boolean features. We first present a practically efficient on-line learning algorithm TW2 to search for web documents represented by a disjunction of at most k relevant features. We then design and implement WebSail, a real-time adaptive web search learner, with TW2 as its learning component. WebSail learns from the user's relevance feedback in real-time and helps the user to search for the desired web documents. The architecture and performance of WebSail are also discussed.\n\n1-Hop Neighbour:\nCiteSeer: An Autonomous Web Agent for Automatic Retrieval and Identification of Interesting Publications Published research papers available on the World Wide Web (WWW or Web) are often poorly organized, often exist in non-text form (e.g. Postscript) documents, and increase in quantity daily. Significant amounts of time and effort are commonly needed to find interesting and relevant publications on the Web. We have developed a Web based information agent that assists the user in the process of performing a scientific literature search. 
Given a set of keywords, the agent uses Web search engines and heuristics to locate and download papers. The papers are parsed in order to extract information features such as the abstract and individually identified citations which are placed into an SQL database. The agent's Web interface can be used to find relevant papers in the database using keyword searches, or by navigating the links between papers formed by the citations. Links to both \"citing\" and \"cited\" publications can be followed. In addition to simple browsing and keyword searches, the agent ...\n\n2-Hop Neighbour:\nExploiting Synergy Between Ontologies and Recommender Systems Recommender systems learn about user preferences over time, automatically finding things of similar interest. This reduces the burden of creating explicit queries. Recommender systems do, however, suffer from cold-start problems where no initial information is available early on upon which to base recommendations. Semantic knowledge structures, such as ontologies, can provide valuable domain knowledge and user information. However, acquiring such knowledge and keeping it up to date is not a trivial task and user interests are particularly difficult to acquire and maintain. This paper investigates the synergy between a web-based research paper recommender system and an ontology containing information automatically extracted from departmental databases available on the web. The ontology is used to address the recommender systems cold-start problem. The recommender system addresses the ontology’s interest-acquisition problem. An empirical evaluation of this approach is conducted and the performance of the integrated systems measured.\n\n2-Hop Neighbour:\nAutomatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. We also provide examples of ARC resource lists for the reader to examine.\n\n2-Hop Neighbour:\nCapturing Knowledge of User Preferences: Ontologies in Recommender Systems Tools for filtering the World Wide Web exist, but they are hampered by the difficulty of capturing user preferences in such a dynamic environment. We explore the acquisition of user profiles by unobtrusive monitoring of browsing behaviour and application of supervised machine-learning techniques coupled with an ontological representation to extract user preferences. A multi-class approach to paper classification is used, allowing the paper topic taxonomy to be utilised during profile construction. 
The Quickstep recommender system is presented and two empirical studies evaluate it in a real work setting, measuring the effectiveness of using a hierarchical topic ontology compared with an extendable flat list.\n\n2-Hop Neighbour:\nMining the Web to Create Minority Language Corpora The Web is a valuable source of language specific resources but the process of collecting, organizing and utilizing these resources is difficult. We describe CorpusBuilder, an approach for automatically generating Web-search queries for collecting documents in a minority language. It differs from pseudo-relevance feedback in that retrieved documents are labeled by an automatic language classifier as relevant or irrelevant, and this feedback is used to generate new queries. We experiment with various query-generation methods and query-lengths to find inclusion/exclusion terms that are helpful for retrieving documents in the target language and find that using odds-ratio scores calculated over the documents acquired so far was one of the most consistently accurate query-generation methods. We also describe experiments using a handful of words elicited from a user instead of initial documents and show that the methods perform similarly. Experiments applying the same approach to multiple languages are also presented showing that our approach generalizes to a variety of languages. 1.\n\n2-Hop Neighbour:\nAuthoritative Sources in a Hyperlinked Environment The link structure of a hypermedia environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. Versions of this principle have been studied in the hypertext research community and (in a context predating hypermedia) through journal citation analysis in the field of bibliometrics. But for the problem of searching in hyperlinked environments such as the World Wide Web, it is clear from the prevalent techniques that the information inherent in the links has yet to be fully exploited. In this work we develop a new method for automatically extracting certain types of information about a hypermedia environment from its link structure, and we report on experiments that demonstrate its effectiveness for a variety of search problems on the www. The central problem we consider is that of determining the relative \"authority\" of pages in such environments. This issue is central to a number of basic hypertext search t...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Learning Probabilistic Relational Models A large portion of real-world data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with &quot;flat &quot; data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much of the relational structure present in our database. This paper builds on the recent work on probabilistic relational models (PRMs), and describes how to learn them from databases. PRMs allow the properties of an object to depend probabilistically both on other properties of that object and on properties of related objects. Although PRMs are significantly more expressive than standard models, such as Bayesian networks, we show how to extend well-known statistical methods for learning Bayesian networks to learn these models. We describe both parameter estimation and structure learning — the automatic induction of the dependency structure in a model. Moreover, we show how the learning procedure can exploit standard database retrieval techniques for efficient learning from large datasets. We present experimental results on both real and synthetic relational databases. 1' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nUsing Probabilistic Relational Models for Collaborative Filtering Recent projects in collaborative filtering and information filtering address the task of inferring user preference relationships for products or information. The data on which these inferences are based typically consists of pairs of people and items. The items may be information sources (such as web pages or newspaper articles) or products (such as books, software, movies or CDs). We are interested in making recommendations or predictions. Traditional approaches to the problem derive from classical algorithms in statistical pattern recognition and machine learning. The majority of these approaches assume a ”flat ” data representation for each object, and focus on a single dyadic relationship between the objects. In this paper, we examine a richer model that allows us to reason about many different relations at the same time. We build on the recent work on probabilistic relational models (PRMs), and describe how PRMs can be applied to the task of collaborative filtering. PRMs allow us to represent uncertainty about the existence of relationships in the model and allow the properties of an object to depend probabilistically both on other properties of that object and on properties of related objects. 1\n\n1-Hop Neighbour:\nProbabilistic Frame-Based Systems Two of the most important threads of work in knowledge representation today are frame-based representation systems (FRS's) and Bayesian networks (BNs). FRS's provide an excellent representation for the organizational structure of large complex domains, but their applicability is limited because of their inability to deal with uncertainty and noise. BNs provide an intuitive and coherent probabilistic representation of our uncertainty, but are very limited in their ability to handle complex structured domains. In this paper, we provide a language that cleanly integrates these approaches, preserving the advantages of both. 
Our approach allows us to provide natural and compact definitions of probability models for a class, in a way that is local to the class frame. These models can be instantiated for any set of interconnected instances, resulting in a coherent probability distribution over the instance properties. Our language also allows us to represent important types of uncertainty tha...\n\n2-Hop Neighbour:\nUncertain Knowledge Representation and Communicative Behavior in Coordinated Defense This paper reports on results we obtained on communication among artificial and human agents interacting in a simulated air defense domain. In our research, we postulate that the artificial agents use a decision-theoretic method to select optimal communicative acts, given the characteristics of the particular situation. Thus, the agents we implemented compute the expected utilities of various alternative communicative acts, and execute the best one. The agents use a probabilistic frame-based knowledge formalism to represent the uncertain information they have about the domain and about the other agents present. We build on our earlier work that uses the Recursive Modeling Method (RMM) for coordination, and apply RMM to rational communication in an anti-air defense domain. In this domain, distributed units coordinate and communicate to defend a specified territory from a number of attacking missiles. We measure the benefits of rational communication by showing the improvement in the qua...", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Effective Site Finding using Link Anchor Information Link-based ranking methods have been described in the literature and applied in commercial Web search engines. However, according to recent TREC experiments, they are no better than traditional content-based methods. We conduct a different type of experiment, in which the task is to find the main entry point of a specific Web site. In our experiments, ranking based on link anchor text is twice as effective as ranking based on document content, even though both methods used the same BM25 formula. We obtained these results using two sets of 100 queries on a 18.5 million docu- ment set and another set of 100 on a 0.4 million document set. This site finding effectiveness begins to explain why many search engines have adopted link methods. It also opens a rich new area for effectiveness improvement, where traditional methods fail.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nTopical Locality in the Web Most web pages are linked to others with related content. This idea, combined with another that says that text in, and possibly around, HTML anchors describe the pages to which they point, is the foundation for a usable WorldWide Web. In this paper, we examine to what extent these ideas hold by empirically testing whether topical locality mirrors spatial locality of pages on the Web. In particular, we find that the likelihood of linked pages having similar textual content to be high; the similarity of sibling pages increases when the links from the parent are close together; titles, descriptions, and anchor text represent at least part of the target page; and that anchor text may be a useful discriminator among unseen child pages. These results show the foundations necessary for the success of many web systems, including search engines, focused crawlers, linkage analyzers, and intelligent web agents.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. 
This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nImproved Algorithms for Topic Distillation in a Hyperlinked Environment Abstract This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity analysis based algorithm with content analysis. We identify three problems with the existing approach and devise algorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45 % over pure connectivity analysis. 1\n\n2-Hop Neighbour:\nWhat can you do with a Web in your Pocket? The amount of information available online has grown enormously over the past decade. Fortunately, computing power, disk capacity, and network bandwidth have also increased dramatically. It is currently possible for a university research project to store and process the entire World Wide Web. Since there is a limit on how much text humans can generate, it is plausible that within a few decades one will be able to store and process all the human-generated text on the Web in a shirt pocket. The Web is a very rich and interesting data source. In this paper, we describe the Stanford WebBase, a local repository of a significant portion of the Web. Furthermore, we describe a number of recent experiments that leverage the size and the diversity of the WebBase. First, we have largely automated the process of extracting a sizable relation of books (title, author pairs) from hundreds of data sources spread across the World Wide Web using a technique we call Dual Iterative Pattern Relation Extraction. Second, we have developed a global ranking of Web pages called PageRank based on the link structure of the Web that has properties that are useful for search and navigation. Third, we have used PageRank to develop a novel search engine called Google, which also makes heavy use of anchor text. All of these experiments rely significantly on the size and diversity of the WebBase. 1\n\n2-Hop Neighbour:\nExploiting Structure for Intelligent Web Search Together with the rapidly growing amount of online data we register an immense need for intelligent search engines that access a restricted amount of data as found in intranets or other limited domains. This sort of search engines must go beyond simple keyword indexing/matching, but they also have to be easily adaptable to new domains without huge costs. This paper presents a mechanism that addresses both of these points: first of all, the internal document structure is being used to extract concepts which impose a directorylike structure on the documents similar to those found in classified directories. Furthermore, this is done in an efficient way which is largely language independent and does not make assumptions about the document structure.\n\n2-Hop Neighbour:\nEvaluating Strategies for Similarity Search on the Web Finding pages on the Web that are similar to a query page (Related Pages) is an important component of modern search engines. 
A variety of strategies have been proposed for answering Related Pages queries, but comparative evaluation by user studies is expensive, especially when large strategy spaces must be searched (e.g., when tuning parameters). We present a technique for automatically evaluating strategies using Web hierarchies, such as Open Directory, in place of user feedback. We apply this evaluation methodology to a mix of document representation strategies, including the use of text, anchor-text, and links. We discuss the relative advantages and disadvantages of the various approaches examined. Finally, we describe how to efficiently construct a similarity index out of our chosen strategies, and provide sample results from our index.\n\n2-Hop Neighbour:\nValidating Access to External Information Sources in a Mediator Environment A mediator integrates existing information sources into a new application. In order to answer complex queries, the mediator splits them up into sub-queries which it sends to the information sources. Afterwards, it combines the replies to answer the original query. Since the information sources are usually external, autonomous systems, the access to them can sometimes be erroneous, most notably when the information source is changed. This results in an incorrect behaviour of the whole system. The question that this paper addresses, is: how to check whether or not the access was correct? The paper introduces a notational framework for the general information access validation problem, describes the typical errors that can occur in a mediator environment, and proposes several validation mechanisms. It is also investigated how the validation functionality can be integrated into the mediator architecture, and what the most important quality measures of a validation method are. Moreover, the practical usability of the presented approaches is demonstrated on a real-world application using Web-based information sources. Several measurements are performed to compare the presented methods with previous work in the field.\n\n2-Hop Neighbour:\nDiscovering Informative Content Blocks from Web Documents In this paper, we propose a new approach to discover informative contents from a set of tabular documents (or Web pages) of a Web site. Our system, InfoDiscoverer, first partitions a page into several content blocks according to HTML tag <TABLE> in a Web page. Based on the occurrence of the features (terms) in the set of pages, it calculates entropy value of each feature. According to the entropy value of each feature in a content block, the entropy value of the block is defined. By analyzing the information measure, we propose a method to dynamically select the entropy-threshold that partitions blocks into either informative or redundant. Informative content blocks are distinguished parts of the page, whereas redundant content blocks are common parts. Based on the answer set generated from 13 manually tagged news Web sites with a total of 26,518 Web pages, experiments show that both recall and precision rates are greater than 0.956. That is, using the approach, informative blocks (news articles) of these sites can be automatically separated from semantically redundant contents such as advertisements, banners, navigation panels, news categories, etc. 
By adopting InfoDiscoverer as the preprocessor of information retrieval and extraction applications, the retrieval and extracting precision will be increased, and the indexing size and extracting complexity will also be reduced.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Multi-Agent Systems Coalition Formation 5.4 Payoff Division Overview 119 5 Contract Nets, Coalition Formation 119-1 Chapter 5: Contract Nets, Coalition Formation Multi-Agent Systems (6 Lectures), Sept. 2000, Bahia Blanca 5.1 General Contract Nets How to distribute tasks? . Global Market Mechanisms. Implementations use a single centralized mediator . . Announce, bid, award -cycle. Distributed Negotiation . We need the following: 1. Define a task allocation problem in precise terms. 2. Define a formal model for making bidding and awarding decisions. 5.1 General Contract Nets 120 Chapter 5: Contract Nets, Coalition Formation Multi-Agent Systems (6 Lectures), Sept. 2000, Bahia Blanca Definition 5.1 (Task-Allocation Problem) A task allocation problem is given by 1. a set of tasks T , 2. a set of agents A A A, 3. a cost function cost i i i : 2 T -# R#{} (stating the costs that agent i i i incurs by handling some tasks), and 4. the initial allocation of tasks #T init 1 1 1 , . . . , T init ...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nFormal Models and Decision Procedures for Multi-Agent Systems The study of computational agents capable of rational behaviour has received a great deal of attention in recent years. A number of theoretical formalizations for such multiagent systems have been proposed. However, most of these formalizations do not have a strong semantic basis nor a sound and complete axiomatization. Hence, it has not been clear as to how these formalizations could be used in building agents in practice. This paper explores a particular type of multi-agent system, in which each agent is viewed as having the three mental attitudes of belief (B), desire (D), and intention (I). It provides a family of multi-modal branching-time BDI logics with a semantics that is grounded in traditional decision theory and a possible-worlds framework, categorizes them, provides sound and complete axiomatizations, and gives constructive tableaubased decision procedures for testing the satisfiability and validity of formulas. The computational complexity of these decision procedures is n...\n\n1-Hop Neighbour:\nImpact: A Platform for Collaborating Agents twork. The Impact server provides the infrastructure upon which different Impact agents can interact. To avoid a performance bottleneck, multiple copies of the server can be replicated and scattered across the network. Impact agents A set of data objects can be represented in a wide variety of ways. When building an application, we'd like to select a data structure that supports the application operations that are the most frequently executed, the most critical, or both. So, any definition of an agent must support such flexible choice of data structures, and agentization must let us extend arbitrary data representations. In Impact, an agent consists of any body of software code whatsoever, with the associated wrapper. Figure 2 shows such an agent's architecture. The software code. The agent's code consists of two parts: . a set of data structures (or data types) manipulated by the agent. For example, if we are building a database agen\n\n2-Hop Neighbour:\nMeans-End Plan Recognition - Towards a Theory of Reactive Recognition This paper draws its inspiration from current work in reactive planning to guide plan recognition using \"plans as recipes\". 
The plan recognition process guided by such a library of plans is called means-end plan recognition. An extension of dynamic logic, called dynamic agent logic, is introduced to provide a formal semantics for means-end plan recognition and its counterpart, means-end plan execution. The operational semantics, given by algorithms for means-end plan recognition, are then related to the provability of formulas in the dynamic agent logic. This establishes the relative soundness and completeness of the algorithms with respect to a given library of plans. Some of the restrictive assumptions underlying means-end plan recognition are then relaxed to provide a theory of reactive recognition that allows for changes in the external world during the recognition process. Reactive recognition, when embedded with the mental attitudes of belief, desire, and intention, leads to a po...\n\n2-Hop Neighbour:\nResolution-Based Proof for Multi-Modal Temporal Logics of Knowledge Temporal logics of knowledge are useful in order to specify complex systems in which agents are both dynamic and have information about their surroundings. We present a resolution method for propositional temporal logic combined with multi-modal S5 and illustrate its use on examples. This paper corrects a previous proposal for resolution in multi-modal temporal logics of knowledge. Keywords: temporal and modal logics, non-classical resolution, theorem-proving 1 Introduction Combinations of logics have been useful for specifying and reasoning about complex situations, for example multi-agent systems [21, 24], accident analysis [15], and security protocols [18]. For example, logics to formalise multi-agent systems often incorporate a dynamic component representing change of over time; an informational component to capture the agent's knowledge or beliefs; and a motivational component for notions such as goals, wishes, desires or intentions. Often temporal or dynamic logic is used for...\n\n2-Hop Neighbour:\nModel Checking Multiagent Systems Model checking is a very successful technique which has been applied in the design and verification of finite state concurrent reactive processes. In this paper we show how this technique can be lifted to be applicable to multiagent systems. Our approach allows us to reuse the technology and tools developed in model checking, to design and verify multiagent systems in a modular and incremental way, and also to have a very efficient model checking algorithm. 1 Introduction Model checking is a very successful automatic technique which has been devised for the design and verification of finite state reactive systems, e.g., sequential circuit designs, communication protocols, and safety critical control systems (see, e.g., [2]). There is evidence that model checking, when applicable, is far more successful than the other approaches to formal methods and verification (e.g., first order or inductive theorem proving, tableau based reasoning about modal satisfiability). Nowadays many very eff...", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'CaseLP, A Rapid Prototyping Environment For Agent Based Software Intelligent agents and multi-agent systems are increasingly recognized as an innovative approach for analyzing, designing and implementing complex, heterogeneous and distributed software applications. The agent-based view offers a powerful and high level conceptualization that software engineers can exploit to considerably improve the way in which software is realized. Agent-based software engineering is a recent and very interesting research area. Due to its novelty, there is still no evidence of well-established practices for the development of agent-based applications and thus experimentation in this direction is very important. This dissertation' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nA Multi-Agent Approach to Vehicle Monitoring in Motorway . This paper describes CaseLP, a prototyping environment for MultiAgent Systems (MAS), and its adoption for the development of a distributed industrial application. CaseLP employs architecture definition, communication, logic and procedural languages to model a MAS from the top-level architecture down to procedural behavior of each agent's instance. The executable specification which is obtained can be employed as a rapid prototype which helps in taking quick decisions on the best possible implementation solutions. Such capabilities have been applied to a distributed application of Elsag company, in order to assess the best policies for data communication and database allocation before the concrete implementation. The application consists in remote traffic control and surveillance over service areas on an Italian motorway, employing automatic detection and car plate reading at monitored gates. CaseLP allowed to predict data communication performance statistics under differe...\n\n1-Hop Neighbour:\nHEMASL: A Flexible Language to Specify Heterogeneous Agents In the realization of agent-based applications the developer generally needs to use heterogeneous agent architectures, so that each application component can optimally perform its task. Languages that easily model the heterogeneity of agents' architectures are very useful in the early stages of the application development. This paper presents HEMASL, a simple meta-language used to specify heterogeneous agent architectures, and sketches how HEMASL should be implemented in an object-oriented commercial programming language as Java. Moreover, the paper briefly discusses the benefits of adding HEMASL to CaseLP, a LP-based specification and prototyping environment for multi-agent systems, in order to enhance its flexibility and usability. 1. Introduction Intelligent agents and multi-agent systems (MASs) are more and more recognized as the \"new\" modeling techniques to be used to engineer complex and distributed software applications [12]. Agent-based software engineering is concerned with ...\n\n1-Hop Neighbour:\nA Survey of Agent-Oriented Methodologies . This article introduces the current agent-oriented methodologies. It discusseswhat approacheshave been followed (mainly extending existing objectoriented and knowledge engineering methodologies), the suitability of these approaches for agent modelling, and some conclusions drawn from the survey. 
1 Introduction Agent technology has received a great deal of attention in the last few years and, as a result, the industry is beginning to get interested in using this technology to develop its own products. In spite of the different developed agent theories, languages, architectures and the successful agent-based applications, very little work for specifying (and applying) techniques to develop applications using agent technology has been done. The role of agent-oriented methodologies is to assist in all the phases of the life cycle of an agent-based application, including its management. This article reviews the current approaches to the development of an agent-oriented (AO) methodology. ...\n\n2-Hop Neighbour:\nAn Agent-based Petri Net Model with Application to Seller/Buyer Design in Electronic Commerce Agents are becoming one of the most important topics in distributed and autonomous decentralized systems (ADS), and there are increasing attempts to use agent technologies to develop software systems in electronic commerce. Such systems are complex and there is a pressing need for system modeling techniques to support reliable, maintainable and extensible design. G-Nets are a type of Petri net defined to support modeling of a system as a set of independent and loosely-coupled modules. In this paper, we first introduce an extension of G-Net, agent-based G-Net, as a generic model for agent design. Then new communication mechanisms are introduced to support asynchronous message passing among agents. To illustrate our formal modeling technique is effective for agent modeling in electronic commerce, a pricenegotiation protocol example between buyers and sellers is provided. Finally, by analyzing an ordinary Petri net reduced from our agent-based G-Net models, we conclude that our agent-based G-Net models are L3-live, concurrent and effective for agent communications. 1.", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'PlanetP: Infrastructure Support for P2P Information Sharing Storage technology trends' are providing massive storage in extremely small packages while declining computing costs' are resulting in a rising number of devices' per person. The confluence of these trends' are presenting a new, critical challenge to storage and file system designers': how to enable users' to effectively manage, use, and share huge amounts' of data stored across a multitude of devices. In this paper, we present a novel middleware storage system, PlanetP, which is designed from first principles as' a peerto -peer (P2P), semantically indexed storage layer. PlanetP makes two novel design choices to meet the above challenge. First, PlanetP concentrates on content-based querying for information retrieval and assumes that the unit of storage is a shipper of XML, allowing it to index arbitrary data for search and retrieval regardless of the applications' used to create and manipulate the data. Second, PlanetP adopts' a P2P approach, avoiding centralization of storage and indexing. This' makes PlanetP particularly suitable for information sharing among ad hoc groups of users, each of which may have to manage data distributed across multiple devices'. PlanetP is' targeted for groups of up to 1000 users'; results' from studying communities of lO0-200 peers' running on a cluster of PCs indicates that PlanetP should scale well to the 1000-member threshold. Finally, we describe BreezeFS, a semantic file system that we have implemented to validate PlanetP's utility.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. 
Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n1-Hop Neighbour:\nOceanStore: An Architecture for Global-Scale Persistent Storage OceanStore is a utility infrastructure designed to span the globe and provide continuous access to persistent information. Since this infrastructure is comprised of untrusted servers, data is protected through redundancy and cryptographic techniques. To improve performance, data is allowed to be cached anywhere, anytime. Additionally, monitoring of usage patterns allows adaptation to regional outages and denial of service attacks; monitoring also enhances performance through pro-active movement of data. A prototype implementation is currently under development. 1\n\n2-Hop Neighbour:\nEvaluating Strategies for Similarity Search on the Web Finding pages on the Web that are similar to a query page (Related Pages) is an important component of modern search engines. A variety of strategies have been proposed for answering Related Pages queries, but comparative evaluation by user studies is expensive, especially when large strategy spaces must be searched (e.g., when tuning parameters). We present a technique for automatically evaluating strategies using Web hierarchies, such as Open Directory, in place of user feedback. We apply this evaluation methodology to a mix of document representation strategies, including the use of text, anchor-text, and links. We discuss the relative advantages and disadvantages of the various approaches examined. Finally, we describe how to efficiently construct a similarity index out of our chosen strategies, and provide sample results from our index.\n\n2-Hop Neighbour:\nInverted files and dynamic signature files for optimisation of Web Directories Web directories are taxonomies for the classification of Web documents. This kind of IR systems present a specific type of search where the document collection is restricted to one area of the category graph. This paper introduces a specific data architecture for Web directories which improves the performance of restricted searches. That architecture is based on a hybrid data structure composed of an inverted file with multiple embedded signature files. Two variants based on the proposed model are presented: hybrid architecture with total information and hybrid architecture with partial information. The validity of this architecture has been analysed by means of developing both variants to be compared with a basic model. The performance of the restricted queries was clearly improved, specially the hybrid model with partial information, which yielded a positive response under any load of the search system.\n\n2-Hop Neighbour:\nFEATURES: Real-time Adaptive Feature Learning and Document Learning for Web Search In this paper we report our research on building Features - an intelligent web search engine that is able to perform real-time adaptive feature (i.e., keyword) and document learning. Not only does Features learn from the user's document relevance feedback, but also automatically extracts and suggests indexing keywords relevant to a search query and learns from the user's keyword relevance feedback so that it is able to speed up its search process and to enhance its search performance. We design two efficient and mutual-benefiting learning algorithms that work concurrently, one for feature learning and the other for document learning. 
Features employs these algorithms together with an internal index database and a real-time meta-searcher so to perform adaptive real-time learning to find desired documents with as little relevance feedback from the user as possible. The architecture and performance of Features are also discussed. 1 Introduction As the world wide web rapidly evo...\n\n2-Hop Neighbour:\nUser Modeling for Information Access Based on Implicit Feedback User modeling can be used in information filtering and retrieval systems to improve the representation of a user's information needs. User models can be constructed by hand, or learned automatically based on feedback provided by the user about the relevance of documents that they have examined. By observing user behavior, it is possible to infer implicit feedback without requiring explicit relevance judgments. Previous studies based on Internet discussion groups (USENET news) have shown reading time to be a useful source of implicit feedback for predicting a user's preferences. The study reported in this paper extends that work by providing framework for considering alternative sources of implicit feedback, examining whether reading time is useful for predicting a user's preferences for academic and professional journal articles, and exploring whether retention behavior can usefully augment the information that reading time provides. Two user studies were conducted in which undergradua...\n\n2-Hop Neighbour:\nMethods for Sampling Pages Uniformly from the World Wide Web We present two new algorithms for generating uniformly random samples of pages from the World Wide Web, building upon recent work by Henzinger et al. (Henzinger et al. 2000) and Bar-Yossef et al. (Bar-Yossef et al. 2000). Both algorithms are based on a weighted random-walk methodology. The first algorithm (DIRECTED-SAMPLE) operates on arbitrary directed graphs, and so is naturally applicable to the web. We show that, in the limit, this algorithm generates samples that are uniformly random. The second algorithm (UNDIRECTED-SAMPLE) operates on undirected graphs, thus requiring a mechanism for obtaining inbound links to web pages (e.g., access to a search engine). With this additional knowledge of inbound links, the algorithm can arrive at a uniform distribution faster than DIRECTEDSAMPLE, and we derive explicit bounds on the time to convergence. In addition, we evaluate the two algorithms on simulated web data, showing that both yield reliably uniform samples of pages. We also compare our results with those of previous algorithms, and discuss the theoretical relationships among the various proposed methods.", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Social Mental Shaping: Modelling the Impact of Sociality on the Mental States of Autonomous Agents This paper presents a framework that captures how the social nature of agents that are situated in a multi-agent environment impacts upon their individual mental states. Roles and social relationships provide an abstraction upon which we develop the notion of social mental shaping. This allows us to extend the standard Belief-DesireIntention model to account for how common social phenomena (e.g. cooperation, collaborative problem-solving and negotiation) can be integrated into a unified theoretical perspective that reflects a fully explicated model of the autonomous agent's mental state. Keywords: Multi-agent systems, agent interactions, BDI models, social influence. 3 1.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nModeling Sociality In The BDI Framework . We present a conceptual model for how the social nature of agents impacts upon their individual mental states. Roles and social relationships provide an abstraction upon which we develop the notion of social mental shaping . 1 Introduction Belief-Desire-Intention (BDI) architectures for deliberative agents are based on the physical symbol system assumption that agents maintain and reason about internal representations of their world [2]. However, while such architectures conceptualise individual intentionality and behaviour, they say nothing about the social aspects of agents being situated in a multi-agent system. The main reason for this limitation is that mental attitudes are taken to be internal to a particular agent (or team) and are modeled as a relation between the agent (or a team) and a proposition. The purpose of this paper is, therefore, to extend BDI models in order to investigate the problem of how the social nature of agents can impact upon their individual mental ...\n\n1-Hop Neighbour:\nControlling Cooperative Problem Solving in Industrial Multi-Agent Systems using Joint Intentions One reason why Distributed AI (DAI) technology has been deployed in relatively few real-size applications is that it lacks a clear and implementable model of cooperative problem solving which specifies how agents should operate and interact in complex, dynamic and unpredictable environments. As a consequence of the experience gained whilst building a number of DAI systems for industrial applications, a new principled model of cooperation has been developed. This model, called Joint Responsibility, has the notion of joint intentions at its core. It specifies pre-conditions which must be attained before collaboration can commence and prescribes how individuals should behave both when joint activity is progressing satisfactorily and also when it runs into difficulty. The theoretical model has been used to guide the implementation of a general-purpose cooperation framework and the qualitative and quantitative benefits of this implementation have been assessed through a series of comparativ...\n\n1-Hop Neighbour:\nAgents That Reason and Negotiate By Arguing The need for negotiation in multi-agent systems stems from the requirement for agents to solve the problems posed by their interdependence upon one another. 
Negotiation provides a solution to these problems by giving the agents the means to resolve their conflicting objectives, correct inconsistencies in their knowledge of other agents' world views, and coordinate a joint approach to domain tasks which benefits all the agents concerned. We propose a framework, based upon a system of argumentation, which permits agents to negotiate in order to establish acceptable ways of solving problems. The framework provides a formal model of argumentation-based reasoning and negotiation, details a design philosophy which ensures a clear link between the formal model and its practical instantiation, and describes a case study of this relationship for a particular class of architectures (namely those for belief-desire-intention agents). 1 Introduction An increasing number of software app...\n\n2-Hop Neighbour:\nThe Cooperative Problem-Solving Process We present a model of cooperative problem solving that describes the process from its beginning, with some agent recognizing the potential for cooperation with respect to one of its goals, through to team action. Our approach is to characterize the mental states of the agents that lead them to solicit, and take part in, cooperative action. The model is formalized by expressing it as a theory in a quantified multi-modal logic. Keywords: Multi-agent systems, cooperation, modal logic, temporal logic. 1 Introduction Agents --- both human and artificial --- can engage in many and varied types of social interaction, ranging from altruistic cooperation through to open conflict. However, perhaps the paradigm example of social interaction is cooperative problem solving (CPS), in which a group of autonomous agents choose to work together to achieve a common goal. For example, we might find a group of people working together to move a heavy object, play a symphony, build a house, or write a jo...\n\n2-Hop Neighbour:\nAn approach to using degrees of belief in BDI agents : The past few years have seen a rise in the popularity of the use of mentalistic attitudes such as beliefs, desires and intentions to describe intelligent agents. Many of the models which formalise such attitudes do not admit degrees of belief, desire and intention. We see this as an understandable simplification, but as a simplification which means that the resulting systems cannot take account of much of the useful information which helps to guide human reasoning about the world. This paper starts to develop a more sophisticated system based upon an existing formal model of these mental attributes. 1 Introduction In the past few years there has been a lot of attention given to building formal models of autonomous software agents; pieces of software which operate to some extent independently of human intervention and which therefore may be considered to have their own goals and the ability to determine how to achieve those goals. Many of these formal models are based on the use of ...\n\n2-Hop Neighbour:\nInvestigating Interactions Between Agent Conversations and Agent Control Components Exploring agent conversation in the context of fine-grained agent coordination research has raised several intellectual questions. The major issues pertain to interactions between different agent conversations, the representations chosen for different classes of conversations, the explicit modeling of interactions between the conversations, and how to address these interactions. 
This paper is not so ambitious as to attempt to address these questions, only frame them in the context of quantified, scheduling-centric multi-agent coordination research. 1 Introduction Based on a long history of work in agents and agent control components for building distributed AI and multi-agent systems, we are attempting to frame and address a set of intellectual questions pertaining to agent conversation. Interaction lies at the heart of the matter; the issue is interaction between different agent conversations, that possibly occur at different levels of abstraction, but also interaction between the m...\n\n2-Hop Neighbour:\nArgument in Multi-Agent Systems Multi-agent systems research is concerned both with the modelling of human and animal societies and with the development of principles for the design of practical distributed information management systems. This position paper will, rather than examine the various differences in perspective within this area of research, discuss issues of communication and commitment that are of interest to multi-agent systems research in general. 1 Introduction A computational society is a collection of autonomous agents that are loosely dependent upon each other. The intentional stance [12] is often taken in describing the state of these agents. An agent may have beliefs, desires, intentions, and it may adopt a role or have relationships with others. Thus, multi-agent systems (MAS) as with most AI research is significantly influenced, at least in its vocabulary, by philosophy and cognitive psychology. 1 So, what's the point? Computational societies are developed for two primary reasons: Mode...\n\n2-Hop Neighbour:\nAgent-mediated Electronic Commerce: Scientific and Technological roadmap. this report is that a big part of Internet users have already sampled buying over the web (f.i. 40% in the UK), and a significant part qualify themselves as regular shoppers (f.i. 10% in the UK). Again, important differences between countries may be detected with respect to the expenses produced. For instance, Finland spent 20 times more than Spain on a per capita basis. The forecasts for European buying goods and services for the year 2002 suggest that the current 5.2 million shoppers will increase until 28.8 millions, and the European revenues from the current EUR3,032 million to EUR57,210 million. Finally, a significant increase in the number of European executives that believe in the future of electronic commerce has been observed (33% in 1999 up from 23% in 1998)", "role": "user" }, { "content": "<answer>Agents</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Keyword Searching and Browsing in Databases using BANKS With the growth of the Web, there has been a rapid increase in the number of users who need to access online databases without having a detailed knowledge of the schema or of query languages; even relatively simple query languages designed for non-experts are too complicated for them. We describe BANKS, a system which enables keyword-based search on relational databases, together with data and schema browsing. BANKS enables users to extract information in a simple manner without any knowledge of the schema or any need for writing complex queries. A user can get information by typing a few keywords, following hyperlinks, and interacting with controls on the displayed results. BANKS models tuples as nodes in a graph, connected by links induced by foreign key and other relationships. Answers to a query are modeled as rooted trees connecting tuples that match individual keywords in the query. Answers are ranked using a notion of proximity coupled with a notion of prestige of nodes based on inlinks, similar to techniques developed for Web search. We present an efficient heuristic algorithm for finding and ranking query results. 1.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nBBQ: A Visual Interface for Integrated Browsing and Querying of XML In this paper we present BBQ (Blended Browsing and Querying), a graphic user interface for seamlessly browsing and querying XML data sources. BBQ displays the structure of multiple data sources using a paradigm that resembles drilling-down in Windows' directory structures. BBQ allows queries incorporating one or more of the sources. Queries are constructed in a query-by-example (QBE) manner, where DTDs play the role of schema. The queries are arbitrary conjunctive queries with GROUPBY, and their results can be subsequently used and refined. To support query refinement, BBQ introduces virtual result views: standalone virtual data sources that (i) are constructed by user queries, from elements in other data sources, and (ii) can be used in subsequent queries as first-class data sources themselves. Furthermore, BBQ allows users to query data sources with loose or incomplete schema, and can augment such schema with a DTD inference mechanism.\n\n1-Hop Neighbour:\nThe Anatomy of a Large-Scale Hypertextual Web Search Engine In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. 
This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.\n\n2-Hop Neighbour:\nWeb Genre Visualization Web users vary widely in terms of their expertise on the topics for which they search, the amount of detail they seek, etc. Unfortunately, today's one-size-fits-all Web search services do not cater to such individual preferences. For example, it is difficult to query for documents that give extensive detail but assume modest prior expertise. We describe how shallow text classification techniques can be used to sort the documents returned by Web search services according to genre dimensions such as level of expertise and amount of detail, and propose a simple visualization interface that helps users rapidly find appropriate documents. (Keywords: document genre; information retrieval, visualisation, text classification, shallow linguistic processing) Motivation Consider two users seeking information about Pearson' correlation coefficient. Alice is writing a data-analysis program and needs a web page to remind her of the equations. Bob, a teacher, wants to point his pupils to an overview that isn't bogged down in equations.\n\n2-Hop Neighbour:\nThe PageRank Citation Ranking: Bringing Order to the Web The importance of a Web page is an inherently subjective matter, which depends on the readers interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation.\n\n2-Hop Neighbour:\nDiscovering Informative Content Blocks from Web Documents In this paper, we propose a new approach to discover informative contents from a set of tabular documents (or Web pages) of a Web site. Our system, InfoDiscoverer, first partitions a page into several content blocks according to HTML tag <TABLE> in a Web page. Based on the occurrence of the features (terms) in the set of pages, it calculates entropy value of each feature. According to the entropy value of each feature in a content block, the entropy value of the block is defined. By analyzing the information measure, we propose a method to dynamically select the entropy-threshold that partitions blocks into either informative or redundant. Informative content blocks are distinguished parts of the page, whereas redundant content blocks are common parts. Based on the answer set generated from 13 manually tagged news Web sites with a total of 26,518 Web pages, experiments show that both recall and precision rates are greater than 0.956. 
That is, using the approach, informative blocks (news articles) of these sites can be automatically separated from semantically redundant contents such as advertisements, banners, navigation panels, news categories, etc. By adopting InfoDiscoverer as the preprocessor of information retrieval and extraction applications, the retrieval and extracting precision will be increased, and the indexing size and extracting complexity will also be reduced.\n\n2-Hop Neighbour:\nComparing Statistical and Content-Based Techniques for Answer Validation on the Web Answer Validation is an emerging topic in Question Answering, where open domain systems are often required to rank huge amounts of candidate answers. We present a novel approach to answer validation based on the intuition that the amount of implicit knowledge which connects an answer to a question can be estimated by exploiting the redundancy of Web information. Two techniques are considered in this paper: a statistical approach, which uses the Web to obtain a large amount of pages, and a content-based approach, which analyses text snippets retrieved by the search engine. Both the approaches do not require to download the documents. Experiments carried out on the TREC-2001 judged-answer collection show that a combination of the two approaches achieves a high level of performance (i.e. about 88% success rate).\n\n2-Hop Neighbour:\nWeb Interaction and the Navigation Problem in Hypertext written for Encyclopedia of Microcomputers The web has become a ubiquitous tool, used in day-to-day work, to find information and conduct business, and it is revolutionising the role and availability of information. One of the problems encountered in web interaction, which is still unsolved, is the navigation problem, whereby users can \"get lost in hyperspace\", meaning that when following a sequence of links, i.e. a trail of information, users tend to become disoriented in terms of the goal of their original query and the relevance to the query of the information they are currently browsing. Herein we build statistical foundations for tackling the navigation problem based on a formal model of the web in terms of a probabilistic automaton, which can also be viewed as a finite ergodic Markov chain. In our model of the web the probabilities attached to state transitions have two interpretations, namely, they can denote the proportion of times a user followed a link, and alternatively they can denote the expected utility of following a link. Using this approach we have developed two techniques for constructing a web view based on the two interpretations of the probabilities of links, where a web view is a collection of relevant trails. The first method we describe is concerned with finding frequent user behaviour patterns. A collection of trails is taken as input and an ergodic Markov chain is produced as output with the probabilities of transitions corresponding to the frequency the user traversed the associated links. The second method we describe is a reinforcement learning algorithm that attaches higher probabilities to links whose expected trail relevance is higher. The user's home page and a query are taken as input and an ergodic Markov chain is produced as output with the probabilities of...", "role": "user" }, { "content": "<answer>Information Retrieval</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Experiences Using Case-Based Reasoning to Predict Software Project Effort This paper explores some of the practical issues associated with the use of case-based reasoning (CBR) or estimation by analogy. We note that different research teams have reported widely differing results with this technology. Whilst we accept that underlying characteristics of the datasets being used play a major role we also argue that configuring a CBR system can also have an impact. We examine the impact of the choice of number of analogies when making predictions; we also look at different adaptation strategies. Our analysis is based on a dataset of software projects collected by a Canadian software house. Our results show that choosing analogies is important but adaptation strategy appears to be less so. These findings must be tempered, however, with the finding that it was difficult to show statistical significance for smaller datasets even when the accuracy indicators differed quite substantially. For this reason we urge some degree of caution when comparing competing predicti...' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n1-Hop Neighbour:\nAn Investigation of Machine Learning Based Prediction Systems Traditionally, researchers have used either o€-the-shelf models such as COCOMO, or developed local models using statistical techniques such as stepwise regression, to obtain software e€ort estimates. More recently, attention has turned to a variety of machine learning methods such as arti®cial neural networks (ANNs), case-based reasoning (CBR) and rule induction (RI). This paper outlines some comparative research into the use of these three machine learning methods to build software e€ort prediction systems. We brie¯y describe each method and then apply the techniques to a dataset of 81 software projects derived from a Canadian software house in the late 1980s. We compare the prediction systems in terms of three factors: accuracy, explanatory value and con®gurability. We show that ANN methods have superior accuracy and that RI methods are least accurate. However, this view is somewhat counteracted by problems with explanatory value and con®gurability. For example, we found that considerable e€ort was required to con®gure the ANN and that this compared very unfavourably with the other techniques, particularly CBR and least squares regression (LSR). We suggest that further work be carried out, both to further explore interaction between the enduser\n\n2-Hop Neighbour:\nCase-Based Learning Algorithms Abstract. Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. 
While the storage-reducing algorithm performs well on several real-world databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.", "role": "user" }, { "content": "<answer>Machine Learning</answer>", "role": "assistant" } ]
[ { "content": "Classify the node 'Using Constraint Satisfaction for View Update Translation View update is the problem of translating update requests against a view into update requests against the base data. In this paper, we present a novel approach to this problem in relational databases. Using conditional tables to represent relational views, we translate a view update problem into a disjunction of a number of constraint satisfaction problems (CSPs). Solutions to the CSPs correspond to possible translations of the view update. Semantic information to resolve ambiguity can be handled as additional constraints in the CSPs. This approach enables us to apply the rich results of the CSP research to analyze and solve an important problem in database management.' into one of the following categories:\n\nAgents\nMachine Learning\nInformation Retrieval\nDatabases\nHuman-Computer Interaction\nArtificial Intelligence.\n\nRefer to the neighbour nodes for context.\n\n", "role": "user" }, { "content": "<answer>Databases</answer>", "role": "assistant" } ]