| problem (string) | solution (string) | n_hop (int64) | dataset (string) | split (string) | __index_level_0__ (int64) |
|---|---|---|---|---|---|
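Each row below pairs a serialized Cora ego-graph prompt (`problem`) with the gold category id (`solution`), the neighborhood radius used to build the prompt (`n_hop`), and bookkeeping columns (`dataset`, `split`, `__index_level_0__`). A minimal loading sketch using the Hugging Face `datasets` library; the parquet file name is an assumption, not something this dump specifies:

```python
# Sketch only: "train.parquet" is a hypothetical file name for this dump.
from datasets import load_dataset

ds = load_dataset("parquet", data_files={"train": "train.parquet"})["train"]
for row in ds.select(range(3)):
    # `problem` is the long prompt text; `solution` is the category id stored as a string.
    print(row["__index_level_0__"], row["n_hop"], row["solution"], row["problem"][:80])
```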
The node content
null
1-hop neighbor's text information: Bits-back coding software guide: Abstract: In this document, I first review the theory behind bits-back coding (a.k.a. free energy coding) (Frey and Hinton 1996) and then describe the interface to C-language software that can be used for bits-back coding. This method is a new approach to the problem of optimal compression when a source code produces multiple codewords for a given symbol. It may seem that the most sensible codeword to use in this case is the shortest one. However, in the proposed bits-back approach, random codeword selection yields an effective codeword length that can be less than the shortest codeword length. If the random choices are Boltzmann distributed, the effective length is optimal for the given source code. The software which I describe in this guide is easy to use and the source code is only a few pages long. I illustrate the bits-back coding software on a simple quantized Gaussian mixture problem.
1-hop neighbor's text information: "A simple algorithm that discovers efficient perceptual codes," in Computational and Psychophysical Mechanisms of Visual Coding, : We describe the "wake-sleep" algorithm that allows a multilayer, unsupervised, neural network to build a hierarchy of representations of sensory input. The network has bottom-up "recognition" connections that are used to convert sensory input into underlying representations. Unlike most artificial neural networks, it also has top-down "generative" connections that can be used to reconstruct the sensory input from the representations. In the "wake" phase of the learning algorithm, the network is driven by the bottom-up recognition connections and the top-down generative connections are trained to be better at reconstructing the sensory input from the representation chosen by the recognition process. In the "sleep" phase, the network is driven top-down by the generative connections to produce a fantasized representation and a fantasized sensory input. The recognition connections are then trained to be better at recovering the fantasized representation from the fantasized sensory input. In both phases, the synaptic learning rule is simple and local. The combined effect of the two phases is to create representations of the sensory input that are efficient in the following sense: On average, it takes more bits to describe each sensory input vector directly than to first describe the representation of the sensory input chosen by the recognition process and then describe the difference between the sensory input and its reconstruction from the chosen representation.
1-hop neighbor's text information: A new view of the EM algorithm that justifies incremental and other variants. : The EM algorithm performs maximum likelihood estimation for data in which some variables are unobserved. We present a function that resembles negative free energy and show that the M step maximizes this function with respect to the model parameters and the E step maximizes it with respect to the distribution over the unobserved variables. From this perspective, it is easy to justify an incremental variant of the EM algorithm in which the distribution for only one of the unobserved variables is recalculated in each E step. This variant is shown empirically to give faster convergence in a mixture estimation problem. A variant of the algorithm that exploits sparse conditional distributions is also described, and a wide range of other variant algorithms are also seen to be possible.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 6 | 1 | cora | train | 2 |
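The record above expects the bare reply "6" (Probabilistic_Methods). A minimal sketch of grading a free-form model reply against the `solution` column; the helper name is illustrative, not part of the dataset:

```python
import re

def parse_category(reply: str):
    """Return the first standalone category id 0-6 in a model reply, or None."""
    m = re.search(r"\b([0-6])\b", reply)
    return int(m.group(1)) if m else None

# Usage against the record above, whose gold solution is 6.
assert parse_category("6") == 6
assert parse_category("Category: 6 (Probabilistic_Methods)") == 6
```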
The node content
null
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
2-hop neighbor's text information: Representation Issues in Neighborhood Search and Evolutionary Algorithms: Evolutionary Algorithms are often presented as general purpose search methods. Yet, we also know that no search method is better than another over all possible problems and that in fact there is often a good deal of problem specific information involved in the choice of problem representation and search operators. In this paper we explore some very general properties of representations as they relate to neighborhood search methods. In particular, we looked at the expected number of local optima under a neighborhood search operator when averaged over all possible representations. The number of local optima under a neighborhood search operator for standard Binary and standard binary reflected Gray codes is developed and explored as one measure of problem complexity. We also relate the number of local optima to another metric, , designed to provide one measure of complexity with respect to a simple genetic algorithm. Choosing a good representation is a vital component of solving any search problem. However, choosing a good representation for a problem is as difficult as choosing a good search algorithm for a problem. Wolpert and Macready's (1995) No Free Lunch (NFL) theorem proves that no search algorithm is better than any other over all possible discrete functions. Radcliffe and Surry (1995) extend these notions to also cover the idea that all representations are equivalent when their behavior is considered on average over all possible functions. To understand these results, we first outline some of the simple assumptions behind this theorem. First, assume the optimization problem is discrete; this describes all combinatorial optimization problems, and really all optimization problems being solved on computers since computers have finite precision. Second, we ignore the fact that we can resample points in the space. The "No Free Lunch" result can be stated as follows:
2-hop neighbor's text information: A Hybrid GP/GA Approach for Co-evolving Controllers and Robot Bodies to Achieve Fitness-Specified Tasks. : Evolutionary approaches have been advocated to automate robot design. Some research work has shown the success of evolving controllers for the robots by genetic approaches. As we can observe, however, not only the controller but also the robot body itself can affect the behavior of the robot in a robot system. In this paper, we develop a hybrid GP/GA approach to evolve both controllers and robot bodies to achieve behavior-specified tasks. In order to assess the performance of the developed approach, it is used to evolve a simulated agent, with its own controller and body, to do obstacle avoidance in the simulated environment. Experimental results show the promise of this work. In addition, the importance of co-evolving controllers and robot bodies is analyzed and discussed in this paper.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 3 | 2 | cora | train | 4 |
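The seven-way label set is repeated verbatim in every prompt; for downstream use it can be captured once as a constant (taken directly from the category list above):

```python
# Category ids as listed in each prompt of this dump.
CORA_LABELS = {
    0: "Rule_Learning",
    1: "Neural_Networks",
    2: "Case_Based",
    3: "Genetic_Algorithms",
    4: "Theory",
    5: "Reinforcement_Learning",
    6: "Probabilistic_Methods",
}
```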
The node content
null
1-hop neighbor's text information: "Measures for performance evaluation of genetic algorithms," : This paper proposes four performance measures of a genetic algorithm (GA) which enable us to compare different GAs for an op timization problem and different choices of their parameters' values. The performance measures are defined in terms of observations in simulation, such as the frequency of optimal solutions, fitness values, the frequency of evolution leaps, and the number of generations needed to reach an optimal solution. We present a case study in which parameters of a GA for robot path planning was tuned and its performance was optimized through performance evaluation by using the measures. Especially, one of the performance measures is used to demonstrate the adaptivity of the GA for robot path planning. We also propose a process of systematic tuning based on techniques for the design of experiments.
1-hop neighbor's text information: An overview of genetic algorithms: Part 1, fundamentals. :
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
2-hop neighbor's text information: Modeling Building-Block Interdependency Dynamical and Evolutionary Machine Organization Group: The Building-Block Hypothesis appeals to the notion of problem decomposition and the assembly of solutions from sub-solutions. Accordingly, there have been many varieties of GA test problems with a structure based on building-blocks. Many of these problems use deceptive fitness functions to model interdependency between the bits within a block. However, very few have any model of interdependency between building-blocks; those that do are not consistent in the type of interaction used intra-block and inter-block. This paper discusses the inadequacies of the various test problems in the literature and clarifies the concept of building-block interdependency. We formulate a principled model of hierarchical interdependency that can be applied through many levels in a consistent manner and introduce Hierarchical If-and-only-if (H-IFF) as a canonical example. We present some empirical results of GAs on H-IFF showing that if population diversity is maintained and linkage is tight then the GA is able to identify and manipulate building-blocks over many levels of assembly, as the Building-Block Hypothesis suggests.
2-hop neighbor's text information: Island Model Genetic Algorithms and Linearly Separable Problems: Parallel Genetic Algorithms have often been reported to yield better performance than Genetic Algorithms which use a single large panmictic population. In the case of the Island Model Genetic Algorithm, it has been informally argued that having multiple subpopulations helps to preserve genetic diversity, since each island can potentially follow a different search trajectory through the search space. On the other hand, linearly separable functions have often been used to test Island Model Genetic Algorithms; it is possible that Island Models are particularly well suited to separable problems. We look at how Island Models can track multiple search trajectories using the infinite population models of the simple genetic algorithm. We also introduce a simple model for better understanding when Island Model Genetic Algorithms may have an advantage when processing linearly separable problems.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 3 | 2 | cora | train | 5 |
The node content
null
1-hop neighbor's text information: Generalization in reinforcement learning: Successful examples using sparse coarse coding. : On large problems, reinforcement learning systems must use parameterized function approximators such as neural networks in order to generalize between similar situations and actions. In these cases there are no strong theoretical results on the accuracy of convergence, and computational results have been mixed. In particular, Boyan and Moore reported at last year's meeting a series of negative results in attempting to apply dynamic programming together with function approximation to simple control problems with continuous state spaces. In this paper, we present positive results for all the control tasks they attempted, and for one that is significantly larger. The most important differences are that we used sparse-coarse-coded function approximators (CMACs) whereas they used mostly global function approximators, and that we learned online whereas they learned offline. Boyan and Moore and others have suggested that the problems they encountered could be solved by using actual outcomes ("rollouts"), as in classical Monte Carlo methods, and as in the TD(λ) algorithm when λ = 1. However, in our experiments this always resulted in substantially poorer performance. We conclude that reinforcement learning can work robustly in conjunction with function approximators, and that there is little justification at present for avoiding the case of general λ.
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
1-hop neighbor's text information: Neuronlike adaptive elements that can solve difficult learning control problems. : Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63(2):81-97. Schmidhuber, J. (1990b). Towards compositional learning with dynamic neural networks. Technical Report FKI-129-90, Technische Universität München, Institut für Informatik. Servan-Schreiber, D., Cleeremans, A., and McClelland, J. (1988). Encoding sequential structure in simple recurrent networks. Technical Report CMU-CS-88-183, Carnegie Mellon University, Computer Science Department.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 5 | 1 | cora | train | 6 |
The node content
null
1-hop neighbor's text information: Grounding robotic control with genetic neural networks. : Technical Report AI94-223 May 1994 Abstract An important but often neglected problem in the field of Artificial Intelligence is that of grounding systems in their environment such that the representations they manipulate have inherent meaning for the system. Since humans rely so heavily on semantics, it seems likely that the grounding is crucial to the development of truly intelligent behavior. This study investigates the use of simulated robotic agents with neural network processors as part of a method to ensure grounding. Both the topology and weights of the neural networks are optimized through genetic algorithms. Although such comprehensive optimization is difficult, the empirical evidence gathered here shows that the method is not only tractable but quite fruitful. In the experiments, the agents evolved a wall-following control strategy and were able to transfer it to novel environments. Their behavior suggests that they were also learning to build cognitive maps.
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
1-hop neighbor's text information: Evolving graphs and networks with edge encoding: Preliminary report. : We present an alternative to the cellular encoding technique [Gruau 1992] for evolving graph and network structures via genetic programming. The new technique, called edge encoding, uses edge operators rather than the node operators of cellular encoding. While both cellular encoding and edge encoding can produce all possible graphs, the two encodings bias the genetic search process in different ways; each may therefore be most useful for a different set of problems. The problems for which these techniques may be used, and for which we think edge encoding may be particularly useful, include the evolution of recurrent neural networks, finite automata, and graph-based queries to symbolic knowledge bases. In this preliminary report we present a technical description of edge encoding and an initial comparison to cellular encoding. Experimental investigation of the relative merits of these encoding schemes is currently in progress.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 3 | 1 | cora | train | 7 |
The node content
null
1-hop neighbor's text information: A model for projection and action. : In designing autonomous agents that deal competently with issues involving time and space, there is a tradeoff to be made between guaranteed response-time reactions on the one hand, and flexibility and expressiveness on the other. We propose a model of action with probabilistic reasoning and decision analytic evaluation for use in a layered control architecture. Our model is well suited to tasks that require reasoning about the interaction of behaviors and events in a fixed temporal horizon. Decisions are continuously reevaluated, so that there is no problem with plans becoming obsolete as new information becomes available. In this paper, we are particularly interested in the tradeoffs required to guarantee a fixed response time in reasoning about nondeterministic cause-and-effect relationships. By exploiting approximate decision making processes, we are able to trade accuracy in our predictions for speed in decision making in order to improve expected performance in dynamic situations.
1-hop neighbor's text information: Dynamic Programming and Markov Processes. : The problem of maximizing the expected total discounted reward in a completely observable Markovian environment, i.e., a Markov decision process (mdp), models a particular class of sequential decision problems. Algorithms have been developed for making optimal decisions in mdps given either an mdp specification or the opportunity to interact with the mdp over time. Recently, other sequential decision-making problems have been studied prompting the development of new algorithms and analyses. We describe a new generalized model that subsumes mdps as well as many of the recent variations. We prove some basic results concerning this model and develop generalizations of value iteration, policy iteration, model-based reinforcement-learning, and Q-learning that can be used to make optimal decisions in the generalized model under various assumptions. Applications of the theory to particular models are described, including risk-averse mdps, exploration-sensitive mdps, sarsa, Q-learning with spreading, two-player games, and approximate max picking via sampling. Central to the results are the contraction property of the value operator and a stochastic-approximation theorem that reduces asynchronous convergence to synchronous convergence.
1-hop neighbor's text information: Integrated Architectures for Learning, Planning and Reacting Based on Approximating Dynamic Programming, : This paper extends previous work with Dyna, a class of architectures for intelligent systems based on approximating dynamic programming methods. Dyna architectures integrate trial-and-error (reinforcement) learning and execution-time planning into a single process operating alternately on the world and on a learned model of the world. In this paper, I present and show results for two Dyna architectures. The Dyna-PI architecture is based on dynamic programming's policy iteration method and can be related to existing AI ideas such as evaluation functions and universal plans (reactive systems). Using a navigation task, results are shown for a simple Dyna-PI system that simultaneously learns by trial and error, learns a world model, and plans optimal routes using the evolving world model. The Dyna-Q architecture is based on Watkins's Q-learning, a new kind of reinforcement learning. Dyna-Q uses a less familiar set of data structures than does Dyna-PI, but is arguably simpler to implement and use. We show that Dyna-Q architectures are easy to adapt for use in changing environments.
2-hop neighbor's text information: Learning to achieve goals. : Temporal difference methods solve the temporal credit assignment problem for reinforcement learning. An important subproblem of general reinforcement learning is learning to achieve dynamic goals. Although existing temporal difference methods, such as Q learning, can be applied to this problem, they do not take advantage of its special structure. This paper presents the DG-learning algorithm, which learns efficiently to achieve dynamically changing goals and exhibits good knowledge transfer between goals. In addition, this paper shows how traditional relaxation techniques can be applied to the problem. Finally, experimental results are given that demonstrate the superiority of DG learning over Q learning in a moderately large, synthetic, non-deterministic domain.
2-hop neighbor's text information: Markov decision processes in large state spaces. : In this paper we propose a new framework for studying Markov decision processes (MDPs), based on ideas from statistical mechanics. The goal of learning in MDPs is to find a policy that yields the maximum expected return over time. In choosing policies, agents must therefore weigh the prospects of short-term versus long-term gains. We study a simple MDP in which the agent must constantly decide between exploratory jumps and local reward mining in state space. The number of policies to choose from grows exponentially with the size of the state space, N. We view the expected returns as defining an energy landscape over policy space. Methods from statistical mechanics are used to analyze this landscape in the thermodynamic limit N → ∞. We calculate the overall distribution of expected returns, as well as the distribution of returns for policies at a fixed Hamming distance from the optimal one. We briefly discuss the problem of learning optimal policies from empirical estimates of the expected return. As a first step, we relate our findings for the entropy to the limit of high-temperature learning. Numerical simulations support the theoretical results.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 6 | 2 | cora | train | 8 |
The node content
null
1-hop neighbor's text information: Interpretable Neural Networks with BP-SOM: Back-propagation learning (BP) is known for its serious limitations in generalising knowledge from certain types of learning material. BP-SOM is an extension of BP which overcomes some of these limitations. BP-SOM is a combination of a multi-layered feed-forward network (MFN) trained with BP, and Kohonen's self-organising maps (SOMs). In earlier reports, it has been shown that BP-SOM improved generalisation performance while simultaneously decreasing the number of necessary hidden units without loss of generalisation performance. These are only two effects of the use of SOM learning during training of MFNs. In this paper we focus on two additional effects. First, we show that after BP-SOM training, activations of hidden units of MFNs tend to oscillate among a limited number of discrete values. Second, we identify SOM elements as adequate organisers of instances of the task at hand. We visualise both effects, and argue that they lead to intelligible neural networks and can be employed as a basis for automatic rule extraction.
1-hop neighbor's text information: Proben1: A set of neural network benchmark problems and benchmarking rules. : Proben1 is a collection of problems for neural network learning in the realm of pattern classification and function approximation plus a set of rules and conventions for carrying out benchmark tests with these or similar problems. Proben1 contains 15 data sets from 12 different domains. All datasets represent realistic problems which could be called diagnosis tasks and all but one consist of real world data. The datasets are all presented in the same simple format, using an attribute representation that can directly be used for neural network training. Along with the datasets, Proben1 defines a set of rules for how to conduct and how to document neural network benchmarking. The purpose of the problem and rule collection is to give researchers easy access to data for the evaluation of their algorithms and networks and to make direct comparison of the published results feasible. This report describes the datasets and the benchmarking rules. It also gives some basic performance measures indicating the difficulty of the various problems. These measures can be used as baselines for comparison.
1-hop neighbor's text information: Self-Organization and Associative Memory, : Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feed-forward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedforward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 1 | 1 | cora | train | 9 |
The node content
null
1-hop neighbor's text information: Generalization in reinforcement learning: Successful examples using sparse coarse coding. : On large problems, reinforcement learning systems must use parameterized function approximators such as neural networks in order to generalize between similar situations and actions. In these cases there are no strong theoretical results on the accuracy of convergence, and computational results have been mixed. In particular, Boyan and Moore reported at last year's meeting a series of negative results in attempting to apply dynamic programming together with function approximation to simple control problems with continuous state spaces. In this paper, we present positive results for all the control tasks they attempted, and for one that is significantly larger. The most important differences are that we used sparse-coarse-coded function approximators (CMACs) whereas they used mostly global function approximators, and that we learned online whereas they learned offline. Boyan and Moore and others have suggested that the problems they encountered could be solved by using actual outcomes ("rollouts"), as in classical Monte Carlo methods, and as in the TD(λ) algorithm when λ = 1. However, in our experiments this always resulted in substantially poorer performance. We conclude that reinforcement learning can work robustly in conjunction with function approximators, and that there is little justification at present for avoiding the case of general λ.
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
1-hop neighbor's text information: Neuronlike adaptive elements that can solve difficult learning control problems. : Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63(2):81-97. Schmidhuber, J. (1990b). Towards compositional learning with dynamic neural networks. Technical Report FKI-129-90, Technische Universität München, Institut für Informatik. Servan-Schreiber, D., Cleeremans, A., and McClelland, J. (1988). Encoding sequential structure in simple recurrent networks. Technical Report CMU-CS-88-183, Carnegie Mellon University, Computer Science Department.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 5 | 1 | cora | train | 10 |
The node content
null
1-hop neighbor's text information: Grounding robotic control with genetic neural networks. : Technical Report AI94-223 May 1994 Abstract An important but often neglected problem in the field of Artificial Intelligence is that of grounding systems in their environment such that the representations they manipulate have inherent meaning for the system. Since humans rely so heavily on semantics, it seems likely that the grounding is crucial to the development of truly intelligent behavior. This study investigates the use of simulated robotic agents with neural network processors as part of a method to ensure grounding. Both the topology and weights of the neural networks are optimized through genetic algorithms. Although such comprehensive optimization is difficult, the empirical evidence gathered here shows that the method is not only tractable but quite fruitful. In the experiments, the agents evolved a wall-following control strategy and were able to transfer it to novel environments. Their behavior suggests that they were also learning to build cognitive maps.
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
1-hop neighbor's text information: Evolving graphs and networks with edge encoding: Preliminary report. : We present an alternative to the cellular encoding technique [Gruau 1992] for evolving graph and network structures via genetic programming. The new technique, called edge encoding, uses edge operators rather than the node operators of cellular encoding. While both cellular encoding and edge encoding can produce all possible graphs, the two encodings bias the genetic search process in different ways; each may therefore be most useful for a different set of problems. The problems for which these techniques may be used, and for which we think edge encoding may be particularly useful, include the evolution of recurrent neural networks, finite automata, and graph-based queries to symbolic knowledge bases. In this preliminary report we present a technical description of edge encoding and an initial comparison to cellular encoding. Experimental investigation of the relative merits of these encoding schemes is currently in progress.
2-hop neighbor's text information: Towards automatic discovery of building blocks in genetic programming. : This paper presents an algorithm for the discovery of building blocks in genetic programming (GP) called adaptive representation through learning (ARL). The central idea of ARL is the adaptation of the problem representation, by extending the set of terminals and functions with a set of evolvable subroutines. The set of subroutines extracts common knowledge emerging during the evolutionary process and acquires the necessary structure for solving the problem. ARL supports subroutine creation and deletion. Subroutine creation or discovery is performed automatically based on the differential parent-offspring fitness and block activation. Subroutine deletion relies on a utility measure similar to schema fitness over a window of past generations. The technique described is tested on the problem of controlling an agent in a dynamic and non-deterministic environment. The automatic discovery of subroutines can help scale up the GP technique to complex problems.
2-hop neighbor's text information: Island Model Genetic Algorithms and Linearly Separable Problems: Parallel Genetic Algorithms have often been reported to yield better performance than Genetic Algorithms which use a single large panmictic population. In the case of the Island Model Genetic Algorithm, it has been informally argued that having multiple subpopulations helps to preserve genetic diversity, since each island can potentially follow a different search trajectory through the search space. On the other hand, linearly separable functions have often been used to test Island Model Genetic Algorithms; it is possible that Island Models are particularly well suited to separable problems. We look at how Island Models can track multiple search trajectories using the infinite population models of the simple genetic algorithm. We also introduce a simple model for better understanding when Island Model Genetic Algorithms may have an advantage when processing linearly separable problems.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 3 | 2 | cora | train | 11 |
The node content
null
1-hop neighbor's text information: Toward efficient agnostic learning. : In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.
1-hop neighbor's text information: Learning in the presence of malicious errors, : In this paper we study an extension of the distribution-free model of learning introduced by Valiant [23] (also known as the probably approximately correct or PAC model) that allows the presence of malicious errors in the examples given to a learning algorithm. Such errors are generated by an adversary with unbounded computational power and access to the entire history of the learning algorithm's computation. Thus, we study a worst-case model of errors. Our results include general methods for bounding the rate of error tolerable by any learning algorithm, efficient algorithms tolerating nontrivial rates of malicious errors, and equivalences between problems of learning with errors and standard combinatorial optimization problems.
2-hop neighbor's text information: Rationality and Intelligence: The long-term goal of our field is the creation and understanding of intelligence. Productive research in AI, both practical and theoretical, benefits from a notion of intelligence that is precise enough to allow the cumulative development of robust systems and general results. The concept of rational agency has long been considered a leading candidate to fulfill this role. This paper outlines a gradual evolution in the formal conception of rationality that brings it closer to our informal conception of intelligence and simultaneously reduces the gap between theory and practice. Some directions for future research are indicated.
2-hop neighbor's text information: Cognitive Computation (Extended Abstract): Cognitive computation is discussed as a discipline that links together neurobiology, cognitive psychology and artificial intelligence.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 4 | 2 | cora | train | 12 |
The node content
null
1-hop neighbor's text information: Maximizing the robustness of a linear threshold classifier with discrete weights. Network. : Quantization of the parameters of a Perceptron is a central problem in hardware implementation of neural networks using a numerical technology. An interesting property of neural networks used as classifiers is their ability to provide some robustness to input noise. This paper presents efficient learning algorithms for maximizing the robustness of a Perceptron, especially designed to tackle the combinatorial problem arising from the discrete weights.
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
1-hop neighbor's text information: Embedding of a sequential procedure within an evolutionary algorithm for coloring problems in graphs. :
2-hop neighbor's text information: "Staged hybrid genetic search for seismic data imaging," : Seismic data interpretation problems are typically solved using computationally intensive local search methods which often result in inferior solutions. Here, a traditional hybrid genetic algorithm is compared with different staged hybrid genetic algorithms on the geophysical imaging static corrections problem. The traditional hybrid genetic algorithm used here applied local search to every offspring produced by genetic search. The staged hybrid genetic algorithms were designed to temporally separate the local and genetic search components into distinct phases so as to minimize interference between the two search methods. The results show that some staged hybrid genetic algorithms produce higher quality solutions while using significantly less computational time for this problem.
2-hop neighbor's text information: "A genetic algorithm for the assembly line balancing problem", : Genetic algorithms are one example of the use of a random element within an algorithm for combinatorial optimization. We consider the application of the genetic algorithm to a particular problem, the Assembly Line Balancing Problem. A general description of genetic algorithms is given, and their specialized use on our test-bed problems is discussed. We carry out extensive computational testing to find appropriate values for the various parameters associated with this genetic algorithm. These experiments underscore the importance of the correct choice of a scaling parameter and mutation rate to ensure the good performance of a genetic algorithm. We also describe a parallel implementation of the genetic algorithm and give some comparisons between the parallel and serial implementations. Both versions of the algorithm are shown to be effective in producing good solutions for problems of this type (with appropriately chosen parameters).
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 3 | 2 | cora | train | 14 |
The node content
null
1-hop neighbor's text information: Learning to play the game of chess. : This paper presents NeuroChess, a program which learns to play chess from the final outcome of games. NeuroChess learns chess board evaluation functions, represented by artificial neural networks. It integrates inductive neural network learning, temporal differencing, and a variant of explanation-based learning. Performance results illustrate some of the strengths and weaknesses of this approach.
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
1-hop neighbor's text information: Neuro-dynamic Programming. :
2-hop neighbor's text information: Learning curve bounds for Markov decision processes with undiscounted rewards. : Markov decision processes (MDPs) with undiscounted rewards represent an important class of problems in decision and control. The goal of learning in these MDPs is to find a policy that yields the maximum expected return per unit time. In large state spaces, computing these averages directly is not feasible; instead, the agent must estimate them by stochastic exploration of the state space. In this case, longer exploration times enable more accurate estimates and more informed decision-making. The learning curve for an MDP measures how the agent's performance depends on the allowed exploration time, T. In this paper we analyze these learning curves for a simple control problem with undiscounted rewards. In particular, methods from statistical mechanics are used to calculate lower bounds on the agent's performance in the thermodynamic limit T → ∞, N → ∞, α = T/N (finite), where T is the number of time steps allotted per policy evaluation and N is the size of the state space. In this limit, we provide a lower bound on the return of policies that appear optimal based on imperfect statistics.
2-hop neighbor's text information: On the convergence of stochastic iterative dynamic programming algorithms. : This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, and by grant N00014-90-J-1942 from the Office of Naval Research. The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program. Michael I. Jordan is a NSF Presidential Young Investigator.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 5 | 2 | cora | train | 16 |
The node content
null
1-hop neighbor's text information: Learning one-dimensional geometric patterns under one-sided random misclassification noise. :
1-hop neighbor's text information: "A General Lower Bound on the Number of Examples Needed for Learning," : We prove a lower bound of ( 1 * ln 1 ffi + VCdim(C) * ) on the number of random examples required for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and * and ffi are the accuracy and confidence parameters. This improves the previous best lower bound of ( 1 * ln 1 ffi + VCdim(C)), and comes close to the known general upper bound of O( 1 ffi + VCdim(C) * ln 1 * ) for consistent algorithms. We show that for many interesting concept classes, including kCNF and kDNF, our bound is actually tight to within a constant factor.
2-hop neighbor's text information: Tracking drifting concepts by minimizing disagreements. : In this paper we consider the problem of tracking a subset of a domain (called the target) which changes gradually over time. A single (unknown) probability distribution over the domain is used to generate random examples for the learning algorithm and measure the speed at which the target changes. Clearly, the more rapidly the target moves, the harder it is for the algorithm to maintain a good approximation of the target. Therefore we evaluate algorithms based on how much movement of the target can be tolerated between examples while predicting with accuracy ε. Furthermore, the complexity of the class H of possible targets, as measured by d, its VC-dimension, also affects the difficulty of tracking the target concept. We show that if the problem of minimizing the number of disagreements with a sample from among concepts in a class H can be approximated to within a factor k, then there is a simple tracking algorithm for H which can achieve a probability ε of making a mistake if the target movement rate is at most a constant times ε²/(k(d + k) ln(1/ε)), where d is the Vapnik-Chervonenkis dimension of H. Also, we show that if H is properly PAC-learnable, then there is an efficient (randomized) algorithm that with high probability approximately minimizes disagreements to within a factor of 7d + 1, yielding an efficient tracking algorithm for H which tolerates drift rates up to a constant times ε²/(d² ln(1/ε)). In addition, we prove complementary results for the classes of halfspaces and axis-aligned hyperrectangles showing that the maximum rate of drift that any algorithm (even with unlimited
2-hop neighbor's text information: Learning in the presence of malicious errors, : In this paper we study an extension of the distribution-free model of learning introduced by Valiant [23] (also known as the probably approximately correct or PAC model) that allows the presence of malicious errors in the examples given to a learning algorithm. Such errors are generated by an adversary with unbounded computational power and access to the entire history of the learning algorithm's computation. Thus, we study a worst-case model of errors. Our results include general methods for bounding the rate of error tolerable by any learning algorithm, efficient algorithms tolerating nontrivial rates of malicious errors, and equivalences between problems of learning with errors and standard combinatorial optimization problems.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 4 | 2 | cora | train | 17 |
The node content
null
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 3 | 1 | cora | train | 18 |
The node content
null
1-hop neighbor's text information: (1995) Linear space induction in first order logic with RELIEFF, : Current ILP algorithms typically use variants and extensions of greedy search. This prevents them from detecting significant relationships between the training objects. Instead of myopic impurity functions, we propose the use of a heuristic based on RELIEF to guide ILP algorithms. At each step, in our ILP-R system, this heuristic is used to determine a beam of candidate literals. The beam is then used in an exhaustive search for a potentially good conjunction of literals. From the efficiency point of view, we introduce an interesting declarative bias which enables us to keep the growth of the training set, when introducing new variables, within linear bounds (linear with respect to the clause length). This bias prohibits cross-referencing of variables in the variable dependency tree. The resulting system has been tested on various artificial problems. The advantages and deficiencies of our approach are discussed.
1-hop neighbor's text information: (1995) Induction of decision trees using RELIEFF. : In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies between them. Greedy search prevents current inductive machine learning algorithms from detecting significant dependencies between the attributes. Recently, Kira and Rendell developed the RELIEF algorithm for estimating the quality of attributes that is able to detect dependencies between attributes. We show a strong relation between RELIEF's estimates and impurity functions, which are usually used for heuristic guidance of inductive learning algorithms. We propose to use RELIEFF, an extended version of RELIEF, instead of myopic impurity functions. We have reimplemented Assistant, a system for top-down induction of decision trees, using RELIEFF as an estimator of attributes at each selection step. The algorithm is tested on several artificial and several real-world problems. Results show the advantage of the presented approach to inductive learning and open a wide range of possibilities for using RELIEFF.
1-hop neighbor's text information: Estimating attributes: Analysis and extension of relief. : In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies among them. Kira and Rendell (1992a,b) developed an algorithm called RELIEF, which was shown to be very efficient in estimating attributes. Original RELIEF can deal with discrete and continuous attributes and is limited to only two-class problems. In this paper RELIEF is analysed and extended to deal with noisy, incomplete, and multi-class data sets. The extensions are verified on various artificial and one well known real-world problem.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 0 | 1 | cora | train | 19 |
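The RELIEF estimator quoted in the record above is described only at a high level. Below is a minimal Python sketch of the basic two-class RELIEF weight update, following the cited Kira and Rendell description; the range normalization of attribute differences and the use of summed normalized differences for the nearest-hit/nearest-miss search are assumptions, not details given in these abstracts, and each class is assumed to contain at least two instances.

    import random

    def diff(a, x, z, span):
        # Normalized per-attribute difference between instances x and z.
        return abs(x[a] - z[a]) / span[a] if span[a] else 0.0

    def relief(X, y, m):
        # X: list of numeric attribute vectors; y: binary labels; m: sample count.
        n_attr = len(X[0])
        span = [max(col) - min(col) for col in zip(*X)]
        dist = lambda i, j: sum(diff(a, X[i], X[j], span) for a in range(n_attr))
        W = [0.0] * n_attr
        for _ in range(m):
            i = random.randrange(len(X))
            hit = min((j for j in range(len(X)) if j != i and y[j] == y[i]),
                      key=lambda j: dist(i, j))
            miss = min((j for j in range(len(X)) if y[j] != y[i]),
                       key=lambda j: dist(i, j))
            for a in range(n_attr):
                # Reward attributes that separate the classes, punish
                # attributes that vary within a class.
                W[a] += (diff(a, X[i], X[miss], span) - diff(a, X[i], X[hit], span)) / m
        return W

    # Attribute 0 separates the two classes below; attribute 1 is noise.
    X = [[0.0, 1.0], [0.1, 0.0], [0.9, 1.0], [1.0, 0.0]]
    y = [0, 0, 1, 1]
    print(relief(X, y, m=20))  # W[0] clearly positive, W[1] not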
The node content
null
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
2-hop neighbor's text information: On the worst-case analysis of temporal-difference learning algorithms. : We study the worst-case behavior of a family of learning algorithms based on Sutton's method of temporal differences. In our on-line learning framework, learning takes place in a sequence of trials, and the goal of the learning algorithm is to estimate a discounted sum of all the reinforcements that will be received in the future. In this setting, we are able to prove general upper bounds on the performance of a slightly modified version of Sutton's so-called TD(λ) algorithm. These bounds are stated in terms of the performance of the best linear predictor on the given training sequence, and are proved without making any statistical assumptions of any kind about the process producing the learner's observed training sequence. We also prove lower bounds on the performance of any algorithm for this learning problem, and give a similar analysis of the closely related problem of learning to predict in a model in which the learner must produce predictions for a whole batch of observations before receiving reinforcement. A preliminary extended abstract of this paper appeared in Machine Learning: Proceedings of the Eleventh International Conference, 1994.
2-hop neighbor's text information: "Planning by Incremental Dynamic Programming." : This paper presents the basic results and ideas of dynamic programming as they relate most directly to the concerns of planning in AI. These form the theoretical basis for the incremental planning methods used in the integrated architecture Dyna. These incremental planning methods are based on continually updating an evaluation function and the situation-action mapping of a reactive system. Actions are generated by the reactive system and thus involve minimal delay, while the incremental planning process guarantees that the actions and evaluation function will eventually be optimal, no matter how extensive a search is required. These methods are well suited to stochastic tasks and to tasks in which a complete and accurate model is not available. For tasks too large to implement the situation-action mapping as a table, supervised-learning methods must be used, and their capabilities remain a significant limitation of the approach.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 5 | 2 | cora | train | 20 |
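The temporal-difference abstract quoted in the record above assigns credit through differences between temporally successive predictions. The following minimal tabular TD(0) sketch illustrates that idea; the episodic (state, reward, next_state) transition format, step size, and discount are illustrative assumptions, not details from the cited papers.

    def td0_predict(episodes, n_states, alpha=0.1, gamma=1.0):
        # Tabular TD(0): move each state's value toward the one-step
        # bootstrapped target r + gamma * V(next_state).
        V = [0.0] * n_states
        for trajectory in episodes:
            for state, reward, next_state in trajectory:
                target = reward + (gamma * V[next_state] if next_state is not None else 0.0)
                V[state] += alpha * (target - V[state])  # TD-error update
        return V

    # Two-state chain 0 -> 1 -> terminal, reward 1 on the final step.
    episodes = [[(0, 0.0, 1), (1, 1.0, None)]] * 200
    print(td0_predict(episodes, n_states=2))  # both values approach 1.0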
The node content
null
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
2-hop neighbor's text information: On genetic algorithms. : We analyze the performance of a Genetic Algorithm (GA) we call Culling and a variety of other algorithms on a problem we refer to as Additive Search Problem (ASP). ASP is closely related to several previously well studied problems, such as the game of Mastermind and additive fitness functions. We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Culling is efficient on ASP, highly noise tolerant, and the best known approach in some regimes. Noisy ASP is the first problem we are aware of where a Genetic Type Algorithm bests all known competitors. Standard GA's, by contrast, perform much more poorly on ASP than hillclimbing and other approaches even though the Schema theorem holds for ASP. We generalize ASP to k-ASP to study whether GA's will achieve `implicit parallelism' in a problem with many more schemata. GA's fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a Mean Field Theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GA's can beat competing methods.
2-hop neighbor's text information: A Genetic Algorithm for File and Task Placement in a Distributed System: In this paper we explore the distributed file and task placement problem, which is intractable. We also discuss genetic algorithms and how they have been used successfully to solve combinatorial problems. Our experimental results show the GA to be far superior to the greedy heuristic in obtaining optimal and near-optimal file and task placements for the problem with various data sets.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 5 | 2 | cora | train | 22 |
The node content
null
1-hop neighbor's text information: Learning to play the game of chess. : This paper presents NeuroChess, a program which learns to play chess from the final outcome of games. NeuroChess learns chess board evaluation functions, represented by artificial neural networks. It integrates inductive neural network learning, temporal differencing, and a variant of explanation-based learning. Performance results illustrate some of the strengths and weaknesses of this approach.
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
1-hop neighbor's text information: Neuro-dynamic Programming. :
2-hop neighbor's text information: Analytical mean squared error curves in temporal difference learning. : We have calculated analytical expressions for how the bias and variance of the estimators provided by various temporal difference value estimation algorithms change with offline updates over trials in absorbing Markov chains using lookup table representations. We illustrate classes of learning curve behavior in various chains, and show the manner in which TD is sensitive to the choice of its step size and eligibility trace parameters.
2-hop neighbor's text information: Fast Online Q(λ): Q(λ)-learning uses TD(λ)-methods to accelerate Q-learning. The update complexity of previous online Q(λ) implementations based on lookup tables is bounded by the size of the state/action space. Our faster algorithm's update complexity is bounded by the number of actions. The method is based on the observation that Q-value updates may be postponed until they are needed.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 5 | 2 | cora | train | 23 |
The node content
null
1-hop neighbor's text information: "Measures for performance evaluation of genetic algorithms," : This paper proposes four performance measures of a genetic algorithm (GA) which enable us to compare different GAs for an optimization problem and different choices of their parameters' values. The performance measures are defined in terms of observations in simulation, such as the frequency of optimal solutions, fitness values, the frequency of evolution leaps, and the number of generations needed to reach an optimal solution. We present a case study in which the parameters of a GA for robot path planning were tuned and its performance was optimized through performance evaluation by using the measures. In particular, one of the performance measures is used to demonstrate the adaptivity of the GA for robot path planning. We also propose a process of systematic tuning based on techniques for the design of experiments.
1-hop neighbor's text information: An overview of genetic algorithms: Part 1, fundamentals. :
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
2-hop neighbor's text information: A Stochastic Search Approach to Grammar Induction: This paper describes a new sampling-based heuristic for tree search named SAGE and presents an analysis of its performance on the problem of grammar induction. This work was inspired by the Abbadingo DFA learning competition [14], which took place between March and November 1997. SAGE ended up as one of the two winners in that competition. The second winning algorithm, first proposed by Rodney Price, implements a new evidence-driven heuristic for state merging. Our own version of this heuristic is also described in this paper and compared to SAGE.
2-hop neighbor's text information: "A Package of Domain Independent Subroutines for Implementing Classifier Systems in Arbitrary, User-Defined Environments." Logic of Computers Group, :
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 3 | 2 | cora | train | 24 |
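The GA performance-measure abstract in the record above defines measures in terms of run statistics. Here is a minimal sketch under assumptions: a generational bit-string GA on the OneMax test function (the test function and all parameter values are illustrative choices, not taken from the paper), estimating two of the four quoted measures, the frequency of optimal solutions and the number of generations needed to reach one.

    import random

    def run_ga(n_bits=20, pop_size=30, generations=100, p_mut=0.02):
        # Minimal generational GA on OneMax (fitness = number of 1-bits).
        # Returns the generation at which an optimum first appears, or None.
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for gen in range(generations):
            if any(sum(ind) == n_bits for ind in pop):
                return gen
            def pick():  # binary tournament selection
                a, b = random.sample(pop, 2)
                return a if sum(a) >= sum(b) else b
            nxt = []
            while len(nxt) < pop_size:
                p1, p2 = pick(), pick()
                cut = random.randrange(1, n_bits)        # one-point crossover
                child = p1[:cut] + p2[cut:]
                child = [b ^ (random.random() < p_mut) for b in child]  # bit-flip mutation
                nxt.append(child)
            pop = nxt
        return None

    # Two of the quoted performance measures, estimated over repeated runs.
    runs = [run_ga() for _ in range(50)]
    hits = [g for g in runs if g is not None]
    print("frequency of optimal solutions:", len(hits) / len(runs))
    print("mean generations to optimum:", sum(hits) / len(hits) if hits else "n/a")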
The node content
null
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
1-hop neighbor's text information: Transfer of Learning by Composing Solutions of Elemental Sequential Tasks, : Although building sophisticated learning agents that operate in complex environments will require learning to perform multiple tasks, most applications of reinforcement learning have focussed on single tasks. In this paper I consider a class of sequential decision tasks (SDTs), called composite sequential decision tasks, formed by temporally concatenating a number of elemental sequential decision tasks. Elemental SDTs cannot be decomposed into simpler SDTs. I consider a learning agent that has to learn to solve a set of elemental and composite SDTs. I assume that the structure of the composite tasks is unknown to the learning agent. The straightforward application of reinforcement learning to multiple tasks requires learning the tasks separately, which can waste computational resources, both memory and time. I present a new learning algorithm and a modular architecture that learns the decomposition of composite SDTs, and achieves transfer of learning by sharing the solutions of elemental SDTs across multiple composite SDTs. The solution of a composite SDT is constructed by computationally inexpensive modifications of the solutions of its constituent elemental SDTs. I provide a proof of one aspect of the learning algorithm.
1-hop neighbor's text information: Finding Promising Exploration Regions by Weighting Expected Navigation Costs: In many learning tasks, data-query is neither free nor of constant cost. Often the cost of a query depends on the distance from the current location in state space to the desired query point. This is easiest to visualize in robotics environments where a robot must physically move to a location in order to learn something there. The cost of this learning is the time and effort it takes to reach the new location. Furthermore, this cost is characterized by a distance relationship: When the robot moves as directly as possible from a source state to a destination state, the states through which it passes are closer (i.e., cheaper to reach) than is the destination state. Distance relationships hold in many real-world non-robotics tasks also: any environment where states are not immediately accessible. Optimizing the performance of a chemical plant, for example, requires the adjustment of analog controls which have a continuum of intermediate states. Querying possibly optimal regions of state space in these environments is inadvisable if the path to the query point intersects a region of known volatility. In continuous environments, some first-order approximations to this bookkeeping are needed. In discrete environments with small numbers of states, it's possible to keep track of precisely where and to what degree learning has already been done sufficiently and where it still needs to be done. It is also possible to keep best known estimates of the distances from each state to each other (see Kaelbling, 1993). Kaelbling's DG-learning algorithm is based on Floyd's all-pairs shortest-path algorithm (Aho, Hopcroft, & Ullman 1983) and is just slightly different from that used here. These "all-goals" algorithms (after Kaelbling) can provide a highly satisfying representation of the distance/benefit tradeoff, where E_x is the exploration value of state x (the potential benefit of exploring state x), D_xy is the distance to state y, and A_xy is the action to take in state x to move most cheaply to state y. This information can be learned incrementally and completely: That is, it can be guaranteed that if a path from any state x to any state y is deducible from the state transitions seen so far, then (1) the algorithm will have a non-null entry for S_xy (i.e., the algorithm will know a path from x to y), and (2) the current value for D_xy will be the best deducible value from all data seen so far. With this information, decisions about which areas to explore next can be based on not just the amount to be gained from such exploration but also on the cost of reaching each area together with the benefit of incidental exploration done on the way. Though optimal exploration is NP-hard (i.e., it's at least as difficult as TSP), good approximations are easily computable. One such good approximation is to take the action at each state that leads in the direction of greatest accumulated exploration benefit.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 5 | 1 | cora | train | 25 |
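The exploration-cost abstract in the record above maintains, for every pair of states, a best known distance D_xy and the first action A_xy of a cheapest known path, then steers toward valuable unexplored regions. The sketch below is written under stated assumptions: distances are recomputed in batch with Floyd-Warshall rather than incrementally, and the benefit/cost tradeoff is scored as E[y] - D[x][y], which is one plausible reading of "greatest accumulated exploration benefit", not the paper's own rule.

    INF = float("inf")

    def all_pairs_distances(n_states, transitions):
        # transitions: dict (x, action) -> (y, cost).
        # Returns D[x][y] (best known distance) and A[x][y] (first action).
        D = [[INF] * n_states for _ in range(n_states)]
        A = [[None] * n_states for _ in range(n_states)]
        for x in range(n_states):
            D[x][x] = 0.0
        for (x, action), (y, cost) in transitions.items():
            if cost < D[x][y]:
                D[x][y], A[x][y] = cost, action
        for k in range(n_states):          # Floyd-Warshall relaxation
            for x in range(n_states):
                for y in range(n_states):
                    if D[x][k] + D[k][y] < D[x][y]:
                        D[x][y] = D[x][k] + D[k][y]
                        A[x][y] = A[x][k]
        return D, A

    def next_exploration_action(x, E, D, A):
        # Head toward the reachable state with the best benefit/cost score.
        best = max((y for y in range(len(E)) if y != x and D[x][y] < INF),
                   key=lambda y: E[y] - D[x][y], default=None)
        return A[x][best] if best is not None else None

    # Three states in a line: 0 <-> 1 <-> 2, unit costs; state 2 is unexplored.
    transitions = {(0, "right"): (1, 1.0), (1, "right"): (2, 1.0),
                   (1, "left"): (0, 1.0), (2, "left"): (1, 1.0)}
    D, A = all_pairs_distances(3, transitions)
    print(next_exploration_action(0, E=[0.0, 0.1, 5.0], D=D, A=A))  # -> "right"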
The node content
null
1-hop neighbor's text information: Solving Combinatorial Optimization Tasks by Reinforcement Learning: A General Methodology Applied to Resource-Constrained Scheduling: This paper introduces a methodology for solving combinatorial optimization problems through the application of reinforcement learning methods. The approach can be applied in cases where several similar instances of a combinatorial optimization problem must be solved. The key idea is to analyze a set of "training" problem instances and learn a search control policy for solving new problem instances. The search control policy has the twin goals of finding high-quality solutions and finding them quickly. Results of applying this methodology to a NASA scheduling problem show that the learned search control policy is much more effective than the best known non-learning search procedure, a method based on simulated annealing.
1-hop neighbor's text information: Learning to Predict User Operations for Adaptive Scheduling. : Mixed-initiative systems present the challenge of finding an effective level of interaction between humans and computers. Machine learning presents a promising approach to this problem in the form of systems that automatically adapt their behavior to accommodate different users. In this paper, we present an empirical study of learning user models in an adaptive assistant for crisis scheduling. We describe the problem domain and the scheduling assistant, then present an initial formulation of the adaptive assistant's learning task and the results of a baseline study. After this, we report the results of three subsequent experiments that investigate the effects of problem reformulation and representation augmentation. The results suggest that problem reformulation leads to significantly better accuracy without sacrificing the usefulness of the learned behavior. The studies also raise several interesting issues in adaptive assistance for scheduling.
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
2-hop neighbor's text information: Learning from undiscounted delayed rewards: The general framework of reinforcement learning has been proposed by several researchers for both the solution of optimization problems and the realization of adaptive control schemes. To allow for an efficient application of reinforcement learning in either of these areas, it is necessary to solve both the structural and the temporal credit assignment problem. In this paper, we concentrate on the latter, which is usually tackled through the use of learning algorithms that employ discounted rewards. We argue that for realistic problems this kind of solution is not satisfactory, since it does not address the effect of noise originating from different experiences and does not allow for an easy explanation of the parameters involved in the learning process. As a possible solution, we propose to keep the delayed reward undiscounted, but to discount the actual adaptation rate. Empirical results show that, depending on the kind of discount used, a more stable convergence and even an increase in performance can be obtained.
2-hop neighbor's text information: Issues in using function approximation for reinforcement learning. : Reinforcement learning techniques address the problem of learning to select actions in unknown, dynamic environments. It is widely acknowledged that to be of use in complex domains, reinforcement learning techniques must be combined with generalizing function approximation methods such as artificial neural networks. Little, however, is understood about the theoretical properties of such combinations, and many researchers have encountered failures in practice. In this paper we identify a prime source of such failures, namely a systematic overestimation of utility values. Using Watkins' Q-Learning [18] as an example, we give a theoretical account of the phenomenon, deriving conditions under which one may expect it to cause learning to fail. Employing some of the most popular function approximators, we present experimental results which support the theoretical findings.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 5 | 2 | cora | train | 26 |
The node content
null
1-hop neighbor's text information: Why experimentation can be better than "perfect guidance". : Many problems correspond to the classical control task of determining the appropriate control action to take, given some (sequence of) observations. One standard approach to learning these control rules, called behavior cloning, involves watching a perfect operator operate a plant, and then trying to emulate its behavior. In the experimental learning approach, by contrast, the learner first guesses an initial operation-to-action policy and tries it out. If this policy performs sub-optimally, the learner can modify it to produce a new policy, and recur. This paper discusses the relative effectiveness of these two approaches, especially in the presence of perceptual aliasing, showing in particular that the experimental learner can often learn more effectively than the cloning one.
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 5 | 1 | cora | train | 27 |
The node content
null
1-hop neighbor's text information: Maximizing the robustness of a linear threshold classifier with discrete weights. Network: : Quantization of the parameters of a Perceptron is a central problem in hardware implementation of neural networks using a numerical technology. An interesting property of neural networks used as classifiers is their ability to provide some robustness on input noise. This paper presents efficient learning algorithms for the maximization of the robustness of a Perceptron and especially designed to tackle the combinatorial problem arising from the discrete weights.
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
1-hop neighbor's text information: Embedding of a sequential procedure within an evolutionary algorithm for coloring problems in graphs. :
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 3 | 1 | cora | train | 28 |
The node content
null
1-hop neighbor's text information: The Structure-Mapping Engine: Algorithms and Examples. : This paper describes the Structure-Mapping Engine (SME), a program for studying analogical processing. SME has been built to explore Gentner's Structure-mapping theory of analogy, and provides a "tool kit" for constructing matching algorithms consistent with this theory. Its flexibility enhances cognitive simulation studies by simplifying experimentation. Furthermore, SME is very efficient, making it a useful component in machine learning systems as well. We review the Structure-mapping theory and describe the design of the engine. We analyze the complexity of the algorithm, and demonstrate that most of the steps are polynomial, typically bounded by O(N^2). Next we demonstrate some examples of its operation taken from our cognitive simulation studies and work in machine learning. Finally, we compare SME to other analogy programs and discuss several areas for future work. This paper appeared in Artificial Intelligence, 41, 1989, pp 1-63. For more information, please contact [email protected]
2-hop neighbor's text information: Utilizing prior concepts for learning. : The inductive learning problem consists of learning a concept given examples and non-examples of the concept. To perform this learning task, inductive learning algorithms bias their learning method. Here we discuss biasing the learning method to use previously learned concepts from the same domain. These learned concepts highlight useful information for other concepts in the domain. We describe a transference bias and present M-FOCL, a Horn clause relational learning algorithm, that utilizes this bias to learn multiple concepts. We provide preliminary empirical evaluation to show the effects of biasing previous information on noise-free and noisy data.
2-hop neighbor's text information: Analogical Problem Solving by Adaptation of Schemes: We present a computational approach to the acquisition of problem schemes by learning by doing and to their application in analogical problem solving. Our work has its background in automatic program construction and relies on the concept of recursive program schemes. In contrast to the usual approach to cognitive modelling, where computational models are designed to fit specific data, we propose a framework to describe certain empirically established characteristics of human problem solving and learning in a uniform and formally sound way.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 2 | 2 | cora | train | 29 |
The node content
null
1-hop neighbor's text information: Using Markov chains to analyze GAFOs. : Our theoretical understanding of the properties of genetic algorithms (GAs) being used for function optimization (GAFOs) is not as strong as we would like. Traditional schema analysis provides some first order insights, but doesn't capture the non-linear dynamics of the GA search process very well. Markov chain theory has been used primarily for steady state analysis of GAs. In this paper we explore the use of transient Markov chain analysis to model and understand the behavior of finite population GAFOs observed while in transition to steady states. This approach appears to provide new insights into the circumstances under which GAFOs will (will not) perform well. Some preliminary results are presented and an initial evaluation of the merits of this approach is provided.
1-hop neighbor's text information: Modeling Hybrid Genetic Algorithms. : An exact model of a simple genetic algorithm is developed for permutation based representations. Permutation based representations are used for scheduling problems and combinatorial problems such as the Traveling Salesman Problem. A remapping function is developed to remap the model to all permutations in the search space. The mixing matrices for various permutation based operators are also developed.
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
2-hop neighbor's text information: Adaptation in constant utility nonstationary environments. : Environments that vary over time present a fundamental problem to adaptive systems. Although in the worst case there is no hope of effective adaptation, some forms of environmental variability do provide adaptive opportunities. We consider a broad class of non-stationary environments, those which combine a variable result function with an invariant utility function, and demonstrate via simulation that an adaptive strategy employing both evolution and learning can tolerate a much higher rate of environmental variation than an evolution-only strategy. We suggest that in many cases where stability has previously been assumed, the constant utility non-stationary environment may in fact be a more powerful viewpoint.
2-hop neighbor's text information: Genetic programming with user-driven selection: Experiments on the evolution of algorithms for image enhancement. : This paper describes an approach to using GP for image analysis based on the idea that image enhancement, feature detection and image segmentation can be re-framed as image filtering problems. GP can be used to discover efficient optimal filters which solve such problems. However, in order to make the search feasible and effective, terminal sets, function sets and fitness functions have to meet some requirements. In the paper these requirements are described and terminals, functions and fitness functions that satisfy them are proposed. Some preliminary experiments are also reported in which GP (with the mentioned characteristics) is applied to the segmentation of the brain in magnetic resonance images (an extremely difficult problem for which no simple solution is known) and compared with artificial neural nets.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 3 | 2 | cora | train | 30 |
The node content
null
1-hop neighbor's text information: Introduction to the Theory of Neural Computation. : Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines, such as statistics or control theory. The Introduction to the Theory of Neural Computation by Hertz, Krogh and Palmer (subsequently referred to as HKP) is written from the perspective of physics, the home discipline of the authors. The book fulfills its mission as an introduction for neural network novices, provided that they have some background in calculus, linear algebra, and statistics. It covers a number of models that are often viewed as disjoint. Critical analyses and fruitful comparisons between these models
1-hop neighbor's text information: "Introduction to radial basis function networks", : This document is an introduction to radial basis function (RBF) networks, a type of artificial neural network for application to problems of supervised learning (e.g. regression, classification and time series prediction). It is available in either PostScript or hypertext.
1-hop neighbor's text information: Neural network implementation in SAS software. : The estimation or training methods in the neural network literature are usually some simple form of gradient descent algorithm suitable for implementation in hardware using massively parallel computations. For ordinary computers that are not massively parallel, optimization algorithms such as those in several SAS procedures are usually far more efficient. This talk shows how to fit neural networks using SAS/OR®, SAS/ETS®, and SAS/STAT® software.
2-hop neighbor's text information: Application of neural networks for the classification of diffuse liver disease by quantitative echography. Ultrasonic Imaging, : Three different methods were investigated to determine their ability to detect and classify various categories of diffuse liver disease. A statistical method, i.e., discriminant analysis, a supervised neural network called backpropagation and a nonsupervised, self-organizing feature map were examined. The investigation was performed on the basis of a previously selected set of acoustic and image texture parameters. The limited number of patients was successfully extended by generating additional but independent data with identical statistical properties. The generated data were used for training and test sets. The final test was made with the original patient data as a validation set. It is concluded that neural networks are an attractive alternative to traditional statistical techniques when dealing with medical detection and classification tasks. Moreover, the use of generated data for training the networks and the discriminant classifier has been shown to be justified and profitable.
2-hop neighbor's text information: LU TP 93-24 Predicting System Loads with Artificial Neural Networks: Methods and Results: We devise a feed-forward Artificial Neural Network (ANN) procedure for predicting utility loads and present the resulting predictions for two test problems given by "The Great Energy Predictor Shootout, The First Building Data Analysis and Prediction Competition" [1]. Key ingredients in our approach are a method (the δ-test) for determining relevant inputs and the Multilayer Perceptron. These methods are briefly reviewed together with comments on alternative schemes like fitting to polynomials and the use of recurrent networks.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 1 | 2 | cora | train | 31 |
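The RBF abstract in the record above describes radial basis function networks for supervised learning only in general terms. The following minimal regression sketch assumes Gaussian basis functions, fixed grid centers, a shared width, and output weights fit by linear least squares; all of these are common design choices rather than details from the cited document.

    import numpy as np

    def rbf_design(X, centers, width):
        # Gaussian basis activations for 1-D inputs: shape (len(X), len(centers)).
        return np.exp(-((X[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

    def rbf_fit(X, y, centers, width):
        # Fit the output weights of the RBF network by linear least squares.
        w, *_ = np.linalg.lstsq(rbf_design(X, centers, width), y, rcond=None)
        return w

    def rbf_predict(X, centers, width, w):
        return rbf_design(X, centers, width) @ w

    # 1-D regression example: noisy sine wave, centers on a uniform grid.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 2 * np.pi, 80)
    y = np.sin(X) + 0.1 * rng.standard_normal(80)
    centers = np.linspace(0, 2 * np.pi, 10)
    w = rbf_fit(X, y, centers, width=0.7)
    print(rbf_predict(np.array([np.pi / 2]), centers, 0.7, w))  # close to 1.0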
The node content
null
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
2-hop neighbor's text information: Case-Based Probability Factoring in Bayesian Belief Networks: Bayesian network inference can be formulated as a combinatorial optimization problem, concerning the computation of an optimal factoring of the distribution represented in the net. Since the determination of an optimal factoring is a computationally hard problem, heuristic greedy strategies able to find approximations of the optimal factoring are usually adopted. In the present paper we investigate an alternative approach based on a combination of genetic algorithms (GA) and case-based reasoning (CBR). We show how the use of genetic algorithms can improve the quality of the computed factoring when a static strategy is used (as for the MPE computation), while the combination of GA and CBR can still provide advantages in the case of dynamic strategies. Some preliminary results on different kinds of nets are then reported.
2-hop neighbor's text information: Learning Concept Classification Rules Using Genetic Algorithms. : In this paper, we explore the use of genetic algorithms (GAs) as a key element in the design and implementation of robust concept learning systems. We describe and evaluate a GA-based system called GABIL that continually learns and refines concept classification rules from its interaction with the environment. The use of GAs is motivated by recent studies showing the effects of various forms of bias built into different concept learning systems, resulting in systems that perform well on certain concept classes (generally, those well matched to the biases) and poorly on others. By incorporating a GA as the underlying adaptive search mechanism, we are able to construct a concept learning system that has a simple, unified architecture with several important features. First, the system is surprisingly robust even with minimal bias. Second, the system can be easily extended to incorporate traditional forms of bias found in other concept learning systems. Finally, the architecture of the system encourages explicit representation of such biases and, as a result, provides for an important additional feature: the ability to dynamically adjust system bias. The viability of this approach is illustrated by comparing the performance of GABIL with that of four other more traditional concept learners (AQ14, C4.5, ID5R, and IACL) on a variety of target concepts. We conclude with some observations about the merits of this approach and about possible extensions.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 3 | 2 | cora | train | 32 |
The node content
null
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
2-hop neighbor's text information: A Genetic Algorithm for the Topological Optimization of Neural Networks. :
2-hop neighbor's text information: Genetic-based machine learning and behavior based robotics: a new synthesis. : We face this problem using an architecture based on learning classifier systems and on structural properties of animal behavioural organization, as proposed by ethologists. After a description of the learning technique used and of the organizational structure proposed, we present experiments that show how behaviour acquisition can be achieved by our simulated robot.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 3 | 2 | cora | train | 34 |
The node content
null
1-hop neighbor's text information: Using Markov chains to analyze GAFOs. : Our theoretical understanding of the properties of genetic algorithms (GAs) being used for function optimization (GAFOs) is not as strong as we would like. Traditional schema analysis provides some first order insights, but doesn't capture the non-linear dynamics of the GA search process very well. Markov chain theory has been used primarily for steady state analysis of GAs. In this paper we explore the use of transient Markov chain analysis to model and understand the behavior of finite population GAFOs observed while in transition to steady states. This approach appears to provide new insights into the circumstances under which GAFOs will (will not) perform well. Some preliminary results are presented and an initial evaluation of the merits of this approach is provided.
1-hop neighbor's text information: Modeling Hybrid Genetic Algorithms. : An exact model of a simple genetic algorithm is developed for permutation based representations. Permutation based representations are used for scheduling problems and combinatorial problems such as the Traveling Salesman Problem. A remapping function is developed to remap the model to all permutations in the search space. The mixing matrices for various permutation based operators are also developed.
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 3 | 1 | cora | train | 35 |
The node content
null
1-hop neighbor's text information: Grounding robotic control with genetic neural networks. : Technical Report AI94-223, May 1994. Abstract: An important but often neglected problem in the field of Artificial Intelligence is that of grounding systems in their environment such that the representations they manipulate have inherent meaning for the system. Since humans rely so heavily on semantics, it seems likely that the grounding is crucial to the development of truly intelligent behavior. This study investigates the use of simulated robotic agents with neural network processors as part of a method to ensure grounding. Both the topology and weights of the neural networks are optimized through genetic algorithms. Although such comprehensive optimization is difficult, the empirical evidence gathered here shows that the method is not only tractable but quite fruitful. In the experiments, the agents evolved a wall-following control strategy and were able to transfer it to novel environments. Their behavior suggests that they were also learning to build cognitive maps.
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
1-hop neighbor's text information: Evolving graphs and networks with edge encoding: Preliminary report. : We present an alternative to the cellular encoding technique [Gruau 1992] for evolving graph and network structures via genetic programming. The new technique, called edge encoding, uses edge operators rather than the node operators of cellular encoding. While both cellular encoding and edge encoding can produce all possible graphs, the two encodings bias the genetic search process in different ways; each may therefore be most useful for a different set of problems. The problems for which these techniques may be used, and for which we think edge encoding may be particularly useful, include the evolution of recurrent neural networks, finite automata, and graph-based queries to symbolic knowledge bases. In this preliminary report we present a technical description of edge encoding and an initial comparison to cellular encoding. Experimental investigation of the relative merits of these encoding schemes is currently in progress.
2-hop neighbor's text information: "Using case based learning to improve genetic algorithm based design optimization", : In this paper we describe a method for improving genetic-algorithm-based optimization using case-based learning. The idea is to utilize the sequence of points explored during a search to guide further exploration. The proposed method is particularly suitable for continuous spaces with expensive evaluation functions, such as arise in engineering design. Empirical results in two engineering design domains and across different representations demonstrate that the proposed method can significantly improve the efficiency and reliability of the GA optimizer. Moreover, the results suggest that the modification makes the genetic algorithm less sensitive to poor choices of tuning parameters such as mutation rate.
2-hop neighbor's text information: Using real-valued genetic algorithms to evolve rule sets for classification. : In this paper, we use a genetic algorithm to evolve a set of classification rules with real-valued attributes. We show how real-valued attribute ranges can be encoded with real-valued genes and present a new uniform method for representing don't cares in the rules. We view supervised classification as an optimization problem, and evolve rule sets that maximize the number of correct classifications of input instances. We use a variant of the Pitt approach to genetic-based machine learning system with a novel conflict resolution mechanism between competing rules within the same rule set. Experimental results demonstrate the effectiveness of our proposed approach on a benchmark wine classifier system.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 3 | 2 | cora | train | 36 |
The node content
null
1-hop neighbor's text information: Using Markov chains to analyze GAFOs. : Our theoretical understanding of the properties of genetic algorithms (GAs) being used for function optimization (GAFOs) is not as strong as we would like. Traditional schema analysis provides some first order insights, but doesn't capture the non-linear dynamics of the GA search process very well. Markov chain theory has been used primarily for steady state analysis of GAs. In this paper we explore the use of transient Markov chain analysis to model and understand the behavior of finite population GAFOs observed while in transition to steady states. This approach appears to provide new insights into the circumstances under which GAFOs will (will not) perform well. Some preliminary results are presented and an initial evaluation of the merits of this approach is provided.
1-hop neighbor's text information: Modeling Hybrid Genetic Algorithms. : An exact model of a simple genetic algorithm is developed for permutation based representations. Permutation based representations are used for scheduling problems and combinatorial problems such as the Traveling Salesman Problem. A remapping function is developed to remap the model to all permutations in the search space. The mixing matrices for various permutation based operators are also developed.
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
2-hop neighbor's text information: Genetic-based machine learning and behavior based robotics: a new synthesis. : We face this problem using an architecture based on learning classifier systems and on structural properties of animal behavioural organization, as proposed by ethologists. After a description of the learning technique used and of the organizational structure proposed, we present experiments that show how behaviour acquisition can be achieved by our simulated robot.
2-hop neighbor's text information:Symbolic and Subsymbolic Learning for Vision: Some Possibilities: Robust, flexible and sufficiently general vision systems such as those for recognition and description of complex 3-dimensional objects require an adequate armamentarium of representations and learning mechanisms. This paper briefly analyzes the strengths and weaknesses of different learning paradigms such as symbol processing systems, connectionist networks, and statistical and syntactic pattern recognition systems as possible candidates for providing such capabilities and points out several promising directions for integrating multiple such paradigms in a synergistic fashion towards that goal.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
|
3
| 2
|
cora
|
train
| 38
|
The node content
null
1-hop neighbor's text information: The Use of Explicit Goals for Knowledge to Guide Inference and Learning. : Combinatorial explosion of inferences has always been a central problem in artificial intelligence. Although the set of inferences that can be drawn from a reasoner's knowledge and from available inputs is very large (potentially infinite), the inferential resources available to any reasoning system are limited. With limited inferential capacity and very many potential inferences, reasoners must somehow control the process of inference. Not all inferences are equally useful to a given reasoning system. Any reasoning system that has goals (or any form of a utility function) and acts based on its beliefs indirectly assigns utility to its beliefs. Given limits on the process of inference, and variation in the utility of inferences, it is clear that a reasoner ought to draw the inferences that will be most valuable to it. This paper presents an approach to this problem that makes the utility of a (potential) belief an explicit part of the inference process. The method is to generate explicit desires for knowledge. The question of focus of attention is thereby transformed into two related problems: How can explicit desires for knowledge be used to control inference and facilitate resource-constrained goal pursuit in general? And where do these desires for knowledge come from? We present a theory of knowledge goals, or desires for knowledge, and their use in the processes of understanding and learning. The theory is illustrated using two case studies, a natural language understanding program that learns by reading novel or unusual newspaper stories, and a differential diagnosis program that improves its accuracy with experience.
1-hop neighbor's text information:Modeling Invention by Analogy in ACT-R: We investigate some aspects of cognition involved in invention, more precisely in the invention of the telephone by Alexander Graham Bell. We propose the use of the Structure-Behavior-Function (SBF) language for the representation of invention knowledge; we claim that because SBF has been shown to support a wide range of reasoning about physical devices, it constitutes a plausible account of how an inventor might represent knowledge of an invention. We further propose the use of the ACT-R architecture for the implementation of this model. ACT-R has been shown to very precisely model a wide range of human cognition. We draw upon the architecture for execution of productions and matching of declarative knowledge through spreading activation. Thus we present a model which combines the well-established cognitive validity of ACT-R with the powerful, specialized model-based reasoning methods facilitated by SBF.
1-hop neighbor's text information: Explaining Serendipitous Recognition in Design, : Creative designers often see solutions to pending design problems in the everyday objects surrounding them. This can often lead to innovation and insight, sometimes revealing new functions and purposes for common design pieces in the process. We are interested in modeling serendipitous recognition of solutions to pending problems in the context of creative mechanical design. This paper characterizes this ability, analyzing observations we have made of it, and placing it in the context of other forms of recognition. We propose a computational model to capture and explore serendipitous recognition which is based on ideas from reconstructive dynamic memory and situation assessment in case-based reasoning.
2-hop neighbor's text information: A comparative utility analysis of case-based reasoning and control-rule learning systems. : The utility problem in learning systems occurs when knowledge learned in an attempt to improve a system's performance degrades performance instead. We present a methodology for the analysis of utility problems which uses computational models of problem solving systems to isolate the root causes of a utility problem, to detect the threshold conditions under which the problem will arise, and to design strategies to eliminate it. We present models of case-based reasoning and control-rule learning systems and compare their performance with respect to the swamping utility problem. Our analysis suggests that case-based reasoning systems are more resistant to the utility problem than control-rule learning systems. 1
2-hop neighbor's text information: Kritik: An early case-based design system. In Maher, M.L. & Pu, : In the late 1980s, we developed one of the early case-based design systems called Kritik. Kritik autonomously generated preliminary (conceptual, qualitative) designs for physical devices by retrieving and adapting past designs stored in its case memory. Each case in the system had an associated structure-behavior-function (SBF) device model that explained how the structure of the device accomplished its functions. These casespecific device models guided the process of modifying a past design to meet the functional specification of a new design problem. The device models also enabled verification of the design modifications. Kritik2 is a new and more complete implementation of Kritik. In this paper, we take a retrospective view on Kritik. In early papers, we had described Kritik as integrating case-based and model-based reasoning. In this integration, Kritik also grounds the computational process of case-based reasoning in the SBF content theory of device comprehension. The SBF models not only provide methods for many specific tasks in case-based design such as design adaptation and verification, but they also provide the vocabulary for the whole process of case-based design, from retrieval of old cases to storage of new ones. This grounding, we believe, is essential for building well-constrained theories of case-based design.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
|
2
| 2
|
cora
|
train
| 40
|
The node content
null
1-hop neighbor's text information: Learning to play the game of chess. : This paper presents NeuroChess, a program which learns to play chess from the final outcome of games. NeuroChess learns chess board evaluation functions, represented by artificial neural networks. It integrates inductive neural network learning, temporal differencing, and a variant of explanation-based learning. Performance results illustrate some of the strengths and weaknesses of this approach.
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction|that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
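To make the temporal-difference idea above concrete, here is a minimal tabular TD(0) sketch on a small random-walk prediction task of the kind the article studies. The 5-state walk, step size, and episode count are illustrative choices, not values from the article.

```python
import random

# Random walk over states 1..5; 0 and 6 are terminal with rewards 0 and 1.
V = {s: 0.5 for s in range(1, 6)}       # value estimates (true values: s/6)
alpha = 0.1

for episode in range(2000):
    s = 3                               # every episode starts in the middle
    while 1 <= s <= 5:
        s2 = s + random.choice((-1, 1))
        r = 1.0 if s2 == 6 else 0.0
        v_next = V.get(s2, 0.0)         # terminal states have value 0
        # Credit assigned by the difference between successive predictions,
        # rather than by the final outcome alone:
        V[s] += alpha * (r + v_next - V[s])
        s = s2

print({s: round(v, 2) for s, v in sorted(V.items())})
```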
1-hop neighbor's text information: Neuro-dynamic Programming. :
2-hop neighbor's text information:Optimal Navigation in a Probibalistic World: In this paper, we define and examine two versions of the bridge problem. The first variant of the bridge problem is a determistic model where the agent knows a superset of the transitions and a priori probabilities that those transitions are intact. In the second variant, transitions can break or be fixed with some probability at each time step. These problems are applicable to planning in uncertain domains as well as packet routing in a computer network. We show how an agent can act optimally in these models by reduction to Markov decision processes. We describe methods of solving them but note that these methods are intractable for reasonably sized problems. Finally, we suggest neuro-dynamic programming as a method of value function approximation for these types of models.
2-hop neighbor's text information: Using genetic programming to evolve board evaluation functions. : In this paper, we employ the genetic programming paradigm to enable a computer to learn to play strategies for the ancient Egyptian boardgame Senet by evolving board evaluation functions. Formulating the problem in terms of board evaluation functions made it feasible to evaluate the fitness of game playing strategies by using tournament-style fitness evaluation. The game has elements of both strategy and chance. Our approach learns strategies which enable the computer to play consistently at a reasonably skillful level.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
|
5
| 2
|
cora
|
train
| 41
|
The node content
null
1-hop neighbor's text information: Maximizing the robustness of a linear threshold classifier with discrete weights. Network: : Quantization of the parameters of a Perceptron is a central problem in hardware implementation of neural networks using a numerical technology. An interesting property of neural networks used as classifiers is their ability to provide some robustness on input noise. This paper presents efficient learning algorithms for the maximization of the robustness of a Perceptron and especially designed to tackle the combinatorial problem arising from the discrete weights.
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
1-hop neighbor's text information: Embedding of a sequential procedure within an evolutionary algorithm for coloring problems in graphs. :
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
|
3
| 1
|
cora
|
train
| 42
|
The node content
null
1-hop neighbor's text information: A Comparison of Full and Partial Predicated Execution Support for ILP Processors. : One can effectively utilize predicated execution to improve branch handling in instruction-level parallel processors. Although the potential benefits of predicated execution are high, the tradeoffs involved in the design of an instruction set to support predicated execution can be difficult. On one end of the design spectrum, architectural support for full predicated execution requires increasing the number of source operands for all instructions. Full predicate support provides for the most flexibility and the largest potential performance improvements. On the other end, partial predicated execution support, such as conditional moves, requires very little change to existing architectures. This paper presents a preliminary study to qualitatively and quantitatively address the benefit of full and partial predicated execution support. With our current compiler technology, we show that the compiler can use both partial and full predication to achieve speedup in large control-intensive programs. Some details of the code generation techniques are shown to provide insight into the benefit of going from partial to full predication. Preliminary experimental results are very encouraging: partial predication provides an average of 33% performance improvement for an 8-issue processor with no predicate support while full predication provides an additional 30% improvement.
1-hop neighbor's text information: The Expandable Split Window Paradigm for Exploiting Fine-Grain Parallelism, : We propose a new processing paradigm, called the Expandable Split Window (ESW) paradigm, for exploiting fine-grain parallelism. This paradigm considers a window of instructions (possibly having dependencies) as a single unit, and exploits fine-grain parallelism by overlapping the execution of multiple windows. The basic idea is to connect multiple sequential processors, in a decoupled and decentralized manner, to achieve overall multiple issue. This processing paradigm shares a number of properties of the restricted dataflow machines, but was derived from the sequential von Neumann architecture. We also present an implementation of the Expandable Split Window execution model, and preliminary performance results.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
|
0
| 1
|
cora
|
train
| 44
|
The node content
null
1-hop neighbor's text information: (1995) Linear space induction in first order logic with RELIEFF, : Current ILP algorithms typically use variants and extensions of greedy search. This prevents them from detecting significant relationships between the training objects. Instead of myopic impurity functions, we propose the use of the heuristic based on RELIEF for guidance of ILP algorithms. At each step, in our ILP-R system, this heuristic is used to determine a beam of candidate literals. The beam is then used in an exhaustive search for a potentially good conjunction of literals. From the efficiency point of view, we introduce an interesting declarative bias which enables us to keep the growth of the training set, when introducing new variables, within linear bounds (linear with respect to the clause length). This bias prohibits cross-referencing of variables in the variable dependency tree. The resulting system has been tested on various artificial problems. The advantages and deficiencies of our approach are discussed.
1-hop neighbor's text information: (1995) Induction of decision trees using RELIEFF. : In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies between them. Greedy search prevents current inductive machine learning algorithms from detecting significant dependencies between the attributes. Recently, Kira and Rendell developed the RELIEF algorithm for estimating the quality of attributes that is able to detect dependencies between attributes. We show a strong relation between RELIEF's estimates and impurity functions, that are usually used for heuristic guidance of inductive learning algorithms. We propose to use RELIEFF, an extended version of RELIEF, instead of myopic impurity functions. We have reimplemented Assistant, a system for top down induction of decision trees, using RELIEFF as an estimator of attributes at each selection step. The algorithm is tested on several artificial and several real world problems. Results show the advantage of the presented approach to inductive learning and open a wide range of possibilities for using RELIEFF.
1-hop neighbor's text information: Estimating attributes: Analysis and extension of relief. : In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies among them. Kira and Rendell (1992a,b) developed an algorithm called RELIEF, which was shown to be very efficient in estimating attributes. Original RELIEF can deal with discrete and continuous attributes and is limited to only two-class problems. In this paper RELIEF is analysed and extended to deal with noisy, incomplete, and multi-class data sets. The extensions are verified on various artificial and one well known real-world problem.
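For readers who want the core procedure in code, here is a minimal two-class RELIEF sketch of the kind this line of work extends (the multi-class, noise, and missing-value extensions the abstract describes are omitted). The toy XOR-style data, sample count m, and all names are illustrative assumptions.

```python
import random

def diff(a, x, y):
    # Attribute difference; the toy attributes below already lie in [0, 1].
    return abs(x[a] - y[a])

def dist(x, y, n_attrs):
    return sum(diff(a, x, y) for a in range(n_attrs))

def relief(data, m=200):
    # Reward attributes that separate the nearest miss (different class)
    # and penalize those that separate the nearest hit (same class).
    n_attrs = len(data[0][0])
    w = [0.0] * n_attrs
    for _ in range(m):
        x, y = random.choice(data)
        hits = [u for u, v in data if v == y and u is not x]
        misses = [u for u, v in data if v != y]
        hit = min(hits, key=lambda u: dist(x, u, n_attrs))
        miss = min(misses, key=lambda u: dist(x, u, n_attrs))
        for a in range(n_attrs):
            w[a] += (diff(a, x, miss) - diff(a, x, hit)) / m
    return w

# XOR-like data: attributes 0 and 1 interact to define the class,
# attribute 2 is pure noise -- the case where myopic measures fail.
data = [((i, j, random.random()), i ^ j)
        for i in (0, 1) for j in (0, 1) for _ in range(25)]
print([round(v, 2) for v in relief(data)])
```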
2-hop neighbor's text information: Machine learning applied to diagnosis of sport injuries. : Machine learning techniques can be used to extract knowledge from data stored in medical databases. In our application, various machine learning algorithms were used to extract diagnostic knowledge to support the diagnosis of sport injuries. The applied methods include variants of the Assistant algorithm for top-down induction of decision trees, and variants of the Bayesian classifier. The available dataset was insufficient for reliable diagnosis of all sport injuries considered by the system. Consequently, expert-defined diagnostic rules were added and used as pre-classifiers or as generators of additional training instances for injuries with few training examples. Experimental results show that the classification accuracy and the explanation capability of the naive Bayesian classifier with the fuzzy discretization of numerical attributes was superior to other methods and was estimated as the most appropriate for practical use.
2-hop neighbor's text information: Feature subset selection as search with probabilistic estimates. : Irrelevant features and weakly relevant features may reduce the comprehensibility and accuracy of concepts induced by supervised learning algorithms. We formulate the search for a feature subset as an abstract search problem with probabilistic estimates. Searching a space using an evaluation function that is a random variable requires trading off accuracy of estimates for increased state exploration. We show how recent feature subset selection algorithms in the machine learning literature fit into this search problem as simple hill climbing approaches, and conduct a small experiment using a best-first search technique.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
|
0
| 2
|
cora
|
train
| 45
|
The node content
null
1-hop neighbor's text information: (1992) Generic Teleological Mechanisms and their Use in Case Adaptation, : In experience-based (or case-based) reasoning, new problems are solved by retrieving and adapting the solutions to similar problems encountered in the past. An important issue in experience-based reasoning is to identify different types of knowledge and reasoning useful for different classes of case-adaptation tasks. In this paper, we examine a class of non-routine case-adaptation tasks that involve patterned insertions of new elements in old solutions. We describe a model-based method for solving this task in the context of the design of physical devices. The method uses knowledge of generic teleological mechanisms (GTMs) such as cascading. Old designs are adapted to meet new functional specifications by accessing and instantiating the appropriate GTM. The Kritik2 system evaluates the computational feasibility and sufficiency of this method for design adaptation.
1-hop neighbor's text information: Some studies in machine learning using the game of Checkers. :
1-hop neighbor's text information:Meta-Cases: Explaining Case-Based Reasoning: AI research on case-based reasoning has led to the development of many laboratory case-based systems. As we move towards introducing these systems into work environments, explaining the processes of case-based reasoning is becoming an increasingly important issue. In this paper we describe the notion of a meta-case for illustrating, explaining and justifying case-based reasoning. A meta-case contains a trace of the processing in a problem-solving episode, and provides an explanation of the problem-solving decisions and a (partial) justification for the solution. The language for representing the problem-solving trace depends on the model of problem solving. We describe a task-method- knowledge (TMK) model of problem-solving and describe the representation of meta-cases in the TMK language. We illustrate this explanatory scheme with examples from Interactive Kritik, a computer-based de
2-hop neighbor's text information: A competitive approach to game learning. : Machine learning of game strategies has often depended on competitive methods that continually develop new strategies capable of defeating previous ones. We use a very inclusive definition of game and consider a framework within which a competitive algorithm makes repeated use of a strategy learning component that can learn strategies which defeat a given set of opponents. We describe game learning in terms of sets H and X of first and second player strategies, and connect the model with more familiar models of concept learning. We show the importance of the ideas of teaching set [20] and specification number [19] k in this new context. The performance of several competitive algorithms is investigated, using both worst-case and randomized strategy learning algorithms. Our central result (Theorem 4) is a competitive algorithm that solves games in a total number of strategies polynomial in lg(|H|), lg(|X|), and k. Its use is demonstrated, including an application in concept learning with a new kind of counterexample oracle. We conclude with a complexity analysis of game learning, and list a number of new questions arising from this work.
2-hop neighbor's text information: Learning to Play Games from Experience: An Application of Artificial Neural Networks and Temporal Difference Learning. :
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
|
2
| 2
|
cora
|
train
| 46
|
The node content
null
1-hop neighbor's text information: The Estimation of Probabilities in Attribute Selection Measures for Decision Structure Induction in Proceedings of the European Summer School on Machine Learning, : In this paper we analyze two well-known measures for attribute selection in decision tree induction, informativity and the Gini index. In particular, we are interested in the influence of different methods for estimating probabilities on these two measures. The results of experiments show that measures obtained by different probability estimation methods can determine a different preferential order of the attributes in a given node, and therefore a different structure of the constructed decision tree. This feature can be very beneficial, especially in real-world applications where several different trees are often required.
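The interaction this abstract describes is easy to see in code: the same split scored with different probability estimates yields different informativity values, which can reorder candidate attributes in small nodes. The estimators and toy counts below are standard textbook forms chosen for illustration, not taken from the paper.

```python
from math import log2

def entropy(ps):
    return -sum(p * log2(p) for p in ps if p > 0)

def relative(counts):
    n = sum(counts)
    return [c / n for c in counts]

def laplace(counts):
    # Laplace estimate: (c + 1) / (n + k) for k classes.
    n, k = sum(counts), len(counts)
    return [(c + 1) / (n + k) for c in counts]

def m_estimate(counts, m=2.0):
    # m-estimate with a uniform prior, weight m.
    n, k = sum(counts), len(counts)
    return [(c + m / k) / (n + m) for c in counts]

def informativity(partition, estimate):
    # Weighted class entropy after a split; lower = better attribute.
    n = sum(sum(c) for c in partition)
    return sum(sum(c) / n * entropy(estimate(c)) for c in partition)

split = [(3, 0), (1, 2)]       # class counts (pos, neg) in each branch
for est in (relative, laplace, m_estimate):
    print(est.__name__, round(informativity(split, est), 3))
```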
1-hop neighbor's text information: An empirical comparison of selection measures for decision-tree induction. : [Ourston and Mooney, 1990b] D. Ourston and R. J. Mooney. Improving shared rules in multiple category domain theories. Technical Report AI90-150, Artificial Intelligence Laboratory, University of Texas, Austin, TX, December 1990.
1-hop neighbor's text information: R.S. and Imam, I.F. On Learning Decision Structures. : A decision structure is an acyclic graph that specifies an order of tests to be applied to an object (or a situation) to arrive at a decision about that object, and serves as a simple and powerful tool for organizing a decision process. This paper proposes a methodology for learning decision structures that are oriented toward specific decision making situations. The methodology consists of two phases: (1) determining and storing declarative rules describing the decision process, and (2) deriving online a decision structure from the rules. The first step is performed by an expert or by an AQ-based inductive learning program that learns decision rules from examples of decisions (AQ15 or AQ17). The second step transforms the decision rules into a decision structure that is most suitable for the given decision making situation. The system AQDT-2, implementing the second step, has been applied to a problem in construction engineering. In the experiments, AQDT-2 outperformed all other programs applied to the same problem in terms of the accuracy and the simplicity of the generated decision structures. Key words: machine learning, inductive learning, decision structures, decision rules, attribute selection.
2-hop neighbor's text information:Incremental Reduced Error Pruning: This paper outlines some problems that may occur with Reduced Error Pruning in relational learning algorithms, most notably efficiency. Thereafter a new method, Incremental Reduced Error Pruning, is proposed that attempts to address all of these problems. Experiments show that in many noisy domains this method is much more efficient than alternative algorithms, along with a slight gain in accuracy. However, the experiments show as well that the use of the algorithm cannot be recommended for domains which require a very specific concept description.
2-hop neighbor's text information:Multivariate Decision Trees: COINS Technical Report 92-82 December 1992 Abstract Multivariate decision trees overcome a representational limitation of univariate decision trees: univariate decision trees are restricted to splits of the instance space that are orthogonal to the feature's axis. This paper discusses the following issues for constructing multivariate decision trees: representing a multivariate test, including symbolic and numeric features, learning the coefficients of a multivariate test, selecting the features to include in a test, and pruning of multivariate decision trees. We present some new and review some well-known methods for forming multivariate decision trees. The methods are compared across a variety of learning tasks to assess each method's ability to find concise, accurate decision trees. The results demonstrate that some multivariate methods are more effective than others. In addition, the experiments confirm that allowing multivariate tests improves the accuracy of the resulting decision tree over univariate trees.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
|
0
| 2
|
cora
|
train
| 47
|
The node content
null
1-hop neighbor's text information: Generalization in reinforcement learning: Successful examples using sparse coarse coding. : On large problems, reinforcement learning systems must use parameterized function approximators such as neural networks in order to generalize between similar situations and actions. In these cases there are no strong theoretical results on the accuracy of convergence, and computational results have been mixed. In particular, Boyan and Moore reported at last year's meeting a series of negative results in attempting to apply dynamic programming together with function approximation to simple control problems with continuous state spaces. In this paper, we present positive results for all the control tasks they attempted, and for one that is significantly larger. The most important differences are that we used sparse-coarse-coded function approximators (CMACs) whereas they used mostly global function approximators, and that we learned online whereas they learned offline. Boyan and Moore and others have suggested that the problems they encountered could be solved by using actual outcomes ("rollouts"), as in classical Monte Carlo methods, and as in the TD(λ) algorithm when λ = 1. However, in our experiments this always resulted in substantially poorer performance. We conclude that reinforcement learning can work robustly in conjunction with function approximators, and that there is little justification at present for avoiding the case of general λ.
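Since the abstract's key ingredient is the sparse coarse coding itself, here is a minimal 1-D tile-coding (CMAC-style) value approximator with a linear TD(0) update. Unit-width tiles, eight tilings with small random offsets, and the omission of hashing are all simplifying assumptions of this sketch.

```python
import random

n_tilings, n_tiles = 8, 12
alpha = 0.1 / n_tilings                  # step size shared across tilings
offsets = [random.uniform(0, 1) for _ in range(n_tilings)]
w = [[0.0] * n_tiles for _ in range(n_tilings)]

def active_tiles(x):
    # x in [0, 10): exactly one active tile per tiling (sparse, binary).
    return [min(int(x + offsets[t]), n_tiles - 1) for t in range(n_tilings)]

def value(x):
    return sum(w[t][i] for t, i in enumerate(active_tiles(x)))

def td0_update(x, reward, x_next, gamma=1.0, terminal=False):
    target = reward + (0.0 if terminal else gamma * value(x_next))
    delta = target - value(x)
    for t, i in enumerate(active_tiles(x)):
        w[t][i] += alpha * delta         # only the few active weights move

# e.g. one observed transition:
td0_update(x=4.3, reward=0.0, x_next=5.1)
```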
1-hop neighbor's text information: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction|that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
1-hop neighbor's text information: Neuronlike adaptive elements that can solve difficult learning control problems. : Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63(2):81-97. Schmidhuber, J. (1990b). Towards compositional learning with dynamic neural networks. Technical Report FKI-129-90, Technische Universität München, Institut für Informatik. Servan-Schreiber, D., Cleermans, A., and McClelland, J. (1988). Encoding sequential structure in simple recurrent networks. Technical Report CMU-CS-88-183, Carnegie Mellon University, Computer Science Department.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
|
5
| 1
|
cora
|
train
| 48
|
The node content
null
1-hop neighbor's text information: A case study in dynamic belief networks: monitoring walking, fall prediction and detection. :
1-hop neighbor's text information: A theory of inferred causation. : This paper concerns the empirical basis of causation, and addresses the following issues: We propose a minimal-model semantics of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covariations following standard norms of inductive reasoning. We also establish a sound characterization of the conditions under which such a distinction is possible. We provide an effective algorithm for inferred causation and show that, for a large class of data, the algorithm can uncover the direction of causal influences as defined above. Finally, we address the issue of non-temporal causation.
2-hop neighbor's text information: From Bayesian networks to causal networks. : This paper demonstrates the use of graphs as a mathematical tool for expressing independencies, and as a formal language for communicating and processing causal information for decision analysis. We show how complex information about external interventions can be organized and represented graphically and, conversely, how the graphical representation can be used to facilitate quantitative predictions of the effects of interventions. We first review the theory of Bayesian networks and show that directed acyclic graphs (DAGs) offer an economical scheme for representing conditional independence assumptions and for deducing and displaying all the logical consequences of such assumptions. We then introduce the manipulative account of causation and show that any DAG defines a simple transformation which tells us how the probability distribution will change as a result of external interventions in the system. Using this transformation it is possible to quantify, from non-experimental data, the effects of external interventions and to specify conditions under which randomized experiments are not necessary. As an example, we show how the effect of smoking on lung cancer can be quantified from non-experimental data, using a minimal set of qualitative assumptions. Finally, the paper offers a graphical interpretation for Rubin's model of causal effects, and demonstrates its equivalence to the manipulative account of causation. We exemplify the tradeoffs between the two approaches by deriving nonparametric bounds on treatment effects under conditions of imperfect compliance. * Portions of this paper were presented at the 49th Session of the International Statistical Institute, Florence, Italy, August 25 - September 3, 1993.
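The "simple transformation" of the distribution under intervention can be illustrated on a three-node DAG Z -> X, Z -> Y, X -> Y. The sketch below contrasts P(y | do(x)), obtained by averaging over the marginal of Z (the back-door adjustment), with ordinary conditioning P(y | x). All probability tables are invented toy numbers, not figures from the paper.

```python
P_z = {0: 0.6, 1: 0.4}                                       # P(z)
P_x = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}             # P(x | z)
P_y = {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.9}   # P(y=1 | x, z)

def p_y1_do_x(x):
    # Intervening on X severs the Z -> X edge, so Z keeps its marginal:
    return sum(P_z[z] * P_y[(x, z)] for z in P_z)

def p_y1_given_x(x):
    # Ordinary conditioning, for contrast: weight by P(z | x) via Bayes.
    joint = {z: P_z[z] * P_x[z][x] for z in P_z}
    norm = sum(joint.values())
    return sum(joint[z] / norm * P_y[(x, z)] for z in P_z)

for x in (0, 1):
    print(x, round(p_y1_do_x(x), 3), round(p_y1_given_x(x), 3))
```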
2-hop neighbor's text information: Decision-theoretic foundations for causal reasoning. : We present a definition of cause and effect in terms of decision-theoretic primitives and thereby provide a principled foundation for causal reasoning. Our definition departs from the traditional view of causation in that causal assertions may vary with the set of decisions available. We argue that this approach provides added clarity to the notion of cause. Also in this paper, we examine the encoding of causal relationships in directed acyclic graphs. We describe a special class of influence diagrams, those in canonical form, and show its relationship to Pearl's representation of cause and effect. Finally, we show how canonical form facilitates counterfactual reasoning.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
|
6
| 2
|
cora
|
train
| 49
|
The node content
null
1-hop neighbor's text information: Neuro-dynamic Programming. :
1-hop neighbor's text information: Dynamic Programming and Markov Processes. : The problem of maximizing the expected total discounted reward in a completely observable Markovian environment, i.e., a Markov decision process (MDP), models a particular class of sequential decision problems. Algorithms have been developed for making optimal decisions in MDPs given either an MDP specification or the opportunity to interact with the MDP over time. Recently, other sequential decision-making problems have been studied prompting the development of new algorithms and analyses. We describe a new generalized model that subsumes MDPs as well as many of the recent variations. We prove some basic results concerning this model and develop generalizations of value iteration, policy iteration, model-based reinforcement-learning, and Q-learning that can be used to make optimal decisions in the generalized model under various assumptions. Applications of the theory to particular models are described, including risk-averse MDPs, exploration-sensitive MDPs, Sarsa, Q-learning with spreading, two-player games, and approximate max picking via sampling. Central to the results are the contraction property of the value operator and a stochastic-approximation theorem that reduces asynchronous convergence to synchronous convergence.
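For reference, here is plain value iteration on a two-state toy MDP, the baseline whose value operator the abstract generalizes; the contraction property it mentions is what makes the repeated backup below converge. The transition table, discount, and iteration count are illustrative.

```python
# P[s][a] = list of (prob, next_state, reward); gamma is the discount.
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9
V = {s: 0.0 for s in P}

for _ in range(200):
    # One synchronous application of the Bellman (value) operator.
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in P[s].values())
         for s in P}

policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                         for p, s2, r in P[s][a]))
          for s in P}
print({s: round(v, 2) for s, v in V.items()}, policy)
```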
2-hop neighbor's text information:TDLeaf(λ): Combining Temporal Difference learning with game-tree search.: In this paper we present TDLeaf(λ), a variation on the TD(λ) algorithm that enables it to be used in conjunction with minimax search. We present some experiments in both chess and backgammon which demonstrate its utility and provide comparisons with TD(λ) and another less radical variant, TD-directed(λ). In particular, our chess program, KnightCap, used TDLeaf(λ) to learn its evaluation function while playing on the Free Internet Chess Server (FICS, fics.onenet.net). It improved from a 1650 rating to a 2100 rating in just 308 games. We discuss some of the reasons for this success and the relationship between our results and Tesauro's results in backgammon.
2-hop neighbor's text information: Asynchronous modified policy iteration with single-sided updates. : We present a new algorithm for solving Markov decision problems that extends the modified policy iteration algorithm of Puterman and Shin [6] in two important ways: 1) The new algorithm is asynchronous in that it allows the values of states to be updated in arbitrary order, and it does not need to consider all actions in each state while updating the policy. 2) The new algorithm converges under more general initial conditions than those required by modified policy iteration. Specifically, the set of initial policy-value function pairs for which our algorithm guarantees convergence is a strict superset of the set for which modified policy iteration converges. This generalization was obtained by making a simple and easily implementable change to the policy evaluation operator used in updating the value function. Both the asynchronous nature of our algorithm and its convergence under more general conditions expand the range of problems to which our algorithm can be applied.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
|
6
| 2
|
cora
|
train
| 50
|
The node content
null
1-hop neighbor's text information: A practical Bayesian framework for backpropagation networks. : A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible: (1) objective comparisons between solutions using alternative network architectures; (2) objective stopping rules for network pruning or growing procedures; (3) objective choice of magnitude and type of weight decay terms or additive regularisers (for penalising large weights, etc.); (4) a measure of the effective number of well-determined parameters in a model; (5) quantified estimates of the error bars on network parameters and on network output; (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian `evidence' automatically embodies `Occam's razor,' penalising over-flexible and over-complex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalisation ability and the Bayesian evidence is obtained. This paper makes use of the Bayesian framework for regularisation and model comparison described in the companion paper `Bayesian interpolation' (MacKay, 1991a); this framework is due to Gull and Skilling (Gull, 1989a).
2-hop neighbor's text information:The Effective Size of a Neural Network: A Principal Component Approach: Often when learning from data, one attaches a penalty term to a standard error term in an attempt to prefer simple models and prevent overfitting. Current penalty terms for neural networks, however, often do not take into account weight interaction. This is a critical drawback since the effective number of parameters in a network usually differs dramatically from the total number of possible parameters. In this paper, we present a penalty term that uses Principal Component Analysis to help detect functional redundancy in a neural network. Results show that our new algorithm gives a much more accurate estimate of network complexity than do standard approaches. As a result, our new term should be able to improve techniques that make use of a penalty term, such as weight decay, weight pruning, feature selection, Bayesian, and prediction-risk techniques.
2-hop neighbor's text information: Bayesian nonlinear modelling for the prediction competition. : The 1993 energy prediction competition involved the prediction of a series of building energy loads from a series of environmental input variables. Non-linear regression using `neural networks' is a popular technique for such modeling tasks. Since it is not obvious how large a time-window of inputs is appropriate, or what preprocessing of inputs is best, this can be viewed as a regression problem in which there are many possible input variables, some of which may actually be irrelevant to the prediction of the output variable. Because a finite data set will show random correlations between the irrelevant inputs and the output, any conventional neural network (even with regularisation or `weight decay') will not set the coefficients for these junk inputs to zero. Thus the irrelevant variables will hurt the model's performance. The Automatic Relevance Determination (ARD) model puts a prior over the regression parameters which embodies the concept of relevance. This is done in a simple and `soft' way by introducing multiple regularisation constants, one associated with each input. Using Bayesian methods, the regularisation constants for junk inputs are automatically inferred to be large, preventing those inputs from causing significant overfitting.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
|
1
| 2
|
cora
|
train
| 51
|
The node content
null
1-hop neighbor's text information:Fast Online Q(λ): Q(λ)-learning uses TD(λ)-methods to accelerate Q-learning. The update complexity of previous online Q(λ) implementations based on lookup-tables is bounded by the size of the state/action space. Our faster algorithm's update complexity is bounded by the number of actions. The method is based on the observation that Q-value updates may be postponed until they are needed.
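For orientation, the sketch below is the naive tabular Q(λ) baseline whose per-step sweep over all state-action pairs the abstract's lazy-update method avoids. The corridor task, the Watkins-style trace cutting on exploratory actions, and all constants are illustrative assumptions.

```python
import random

N, actions = 6, (-1, +1)                 # corridor: reach state N-1
Q = {(s, a): 0.0 for s in range(N) for a in actions}
alpha, gamma, lam, eps = 0.1, 0.95, 0.8, 0.1

for episode in range(300):
    e = {sa: 0.0 for sa in Q}            # eligibility traces
    s = 0
    while s < N - 1:
        greedy_a = max(actions, key=lambda a_: Q[(s, a_)])
        a = random.choice(actions) if random.random() < eps else greedy_a
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0
        best = max(Q[(s2, a_)] for a_ in actions) if s2 < N - 1 else 0.0
        delta = r + gamma * best - Q[(s, a)]
        e[(s, a)] += 1.0
        for sa in Q:                     # the O(|S||A|) sweep per step
            Q[sa] += alpha * delta * e[sa]
            # Watkins-style: traces survive only after greedy actions.
            e[sa] = gamma * lam * e[sa] if a == greedy_a else 0.0
        s = s2
```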
1-hop neighbor's text information: Applying online-search to reinforcement learning. : In reinforcement learning it is frequently necessary to resort to an approximation to the true optimal value function. Here we investigate the benefits of online search in such cases. We examine "local" searches, where the agent performs a finite-depth lookahead search, and "global" searches, where the agent performs a search for a trajectory all the way from the current state to a goal state. The key to the success of these methods lies in taking a value function, which gives a rough solution to the hard problem of finding good trajectories from every single state, and combining that with online search, which then gives an accurate solution to the easier problem of finding a good trajectory specifically from the current state.
1-hop neighbor's text information:Modeling the Student with Reinforcement Learning: We describe a methodology for enabling an intelligent teaching system to make high level strategy decisions on the basis of low level student modeling information. This framework is less costly to construct, and superior to hand coding teaching strategies as it is more responsive to the learner's needs. In order to accomplish this, reinforcement learning is used to learn to associate superior teaching actions with certain states of the student's knowledge. Reinforcement learning (RL) has been shown to be flexible in handling noisy data, and does not need expert domain knowledge. A drawback of RL is that it often needs a significant number of trials for learning. We propose an off-line learning methodology using sample data, simulated students, and small amounts of expert knowledge to bypass this problem.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
|
5
| 1
|
cora
|
train
| 52
|
The node content
null
1-hop neighbor's text information: Neuronlike adaptive elements that can solve difficult learning control problems. : Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63(2):81-97. Schmidhuber, J. (1990b). Towards compositional learning with dynamic neural networks. Technical Report FKI-129-90, Technische Universität München, Institut für Informatik. Servan-Schreiber, D., Cleermans, A., and McClelland, J. (1988). Encoding sequential structure in simple recurrent networks. Technical Report CMU-CS-88-183, Carnegie Mellon University, Computer Science Department.
1-hop neighbor's text information: Discovering complex Othello strategies through evolutionary neural networks. : An approach to develop new game playing strategies based on artificial evolution of neural networks is presented. Evolution was directed to discover strategies in Othello against a random-moving opponent and later against an alpha-beta search program. The networks discovered first a standard positional strategy, and subsequently a mobility strategy, an advanced strategy rarely seen outside of tournaments. The latter discovery demonstrates how evolutionary neural networks can develop novel solutions by turning an initial disadvantage into an advantage in a changed environment.
2-hop neighbor's text information: "Forward Models: Supervised Learning with a Distal Teacher," : Internal models of the environment have an important role to play in adaptive systems in general and are of particular importance for the supervised learning paradigm. In this paper we demonstrate that certain classical problems associated with the notion of the "teacher" in supervised learning can be solved by judicious use of learned internal models as components of the adaptive system. In particular, we show how supervised learning algorithms can be utilized in cases in which an unknown dynamical system intervenes between actions and desired outcomes. Our approach applies to any supervised learning algorithm that is capable of learning in multi-layer networks. *This paper is a revised version of MIT Center for Cognitive Science Occasional Paper #40. We wish to thank Michael Mozer, Andrew Barto, Robert Jacobs, Eric Loeb, and James McClelland for helpful comments on the manuscript. This project was supported in part by BRSG 2 S07 RR07047-23 awarded by the Biomedical Research Support Grant Program, Division of Research Resources, National Institutes of Health, by a grant from ATR Auditory and Visual Perception Research Laboratories, by a grant from Siemens Corporation, by a grant from the Human Frontier Science Program, and by grant N00014-90-J-1942 awarded by the Office of Naval Research.
2-hop neighbor's text information: "The Parti-game Algorithm for Variable Resolution Reinforcement Learning in Multidimensional State Spaces," : Parti-game is a new algorithm for learning feasible trajectories to goal regions in high dimensional continuous state-spaces. In high dimensions it is essential that learning does not plan uniformly over a state-space. Parti-game maintains a decision-tree partitioning of state-space and applies techniques from game-theory and computational geometry to efficiently and adaptively concentrate high resolution only on critical areas. The current version of the algorithm is designed to find feasible paths or trajectories to goal regions in high dimensional spaces. Future versions will be designed to find a solution that optimizes a real-valued criterion. Many simulated problems have been tested, ranging from two-dimensional to nine-dimensional state-spaces, including mazes, path planning, non-linear dynamics, and planar snake robots in restricted spaces. In all cases, a good solution is found in less than ten trials and a few minutes.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
|
3
| 2
|
cora
|
train
| 53
|
The node content
null
1-hop neighbor's text information: Using Markov chains to analyze GAFOs. : Our theoretical understanding of the properties of genetic algorithms (GAs) being used for function optimization (GAFOs) is not as strong as we would like. Traditional schema analysis provides some first order insights, but doesn't capture the non-linear dynamics of the GA search process very well. Markov chain theory has been used primarily for steady state analysis of GAs. In this paper we explore the use of transient Markov chain analysis to model and understand the behavior of finite population GAFOs observed while in transition to steady states. This approach appears to provide new insights into the circumstances under which GAFOs will (will not) perform well. Some preliminary results are presented and an initial evaluation of the merits of this approach is provided.
1-hop neighbor's text information: Modeling Hybrid Genetic Algorithms. : An exact model of a simple genetic algorithm is developed for permutation based representations. Permutation based representations are used for scheduling problems and combinatorial problems such as the Traveling Salesman Problem. A remapping function is developed to remap the model to all permutations in the search space. The mixing matrices for various permutation based operators are also developed.
1-hop neighbor's text information: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
|
3
| 1
|
cora
|
train
| 54
|
The node content
null
1-hop neighbor's text information:Robust Value Function Approximation by Working Backwards: Computing an accurate value function is the key: In this paper, we examine the intuition that TD(λ) is meant to operate by approximating asynchronous value iteration. We note that on the important class of discrete acyclic stochastic tasks, value iteration is inefficient compared with the DAG-SP algorithm, which essentially performs only one sweep instead of many by working backwards from the goal. The question we address in this paper is whether there is an analogous algorithm that can be used in large stochastic state spaces requiring function approximation. We present such an algorithm, analyze it, and give comparative results to TD on several domains. Using VI to solve MDPs belonging to either of these special classes can be quite inefficient, since VI performs backups over the entire space, whereas the only backups useful for improving V* are those on the "frontier" between already-correct and not-yet-correct V* values. In fact, there are classical algorithms for both problem classes which compute V* more efficiently by explicitly working backwards: for the deterministic class, Dijkstra's shortest-path algorithm; and for the acyclic class, Directed-Acyclic-Graph-Shortest-Paths (DAG-SP) [6].¹ DAG-SP first topologically sorts the MDP, producing a linear ordering of the states in which every state x precedes all states reachable from x. Then, it runs through that list in reverse, performing one backup per state. Worst-case bounds for VI, Dijkstra, and DAG-SP in deterministic domains with X states and A actions/state are ... ¹ Although [6] presents DAG-SP only for deterministic acyclic problems, it applies straightforwardly to the
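The one-sweep idea is easy to state in code: topologically order an acyclic task so that successors come first, then back values up with a single backup per state. This sketch is a generic reconstruction of that scheme on an invented four-state task, not the paper's algorithm or its function-approximation extension.

```python
from graphlib import TopologicalSorter

# P[s][a] = list of (prob, next_state, reward); state 3 is the goal.
P = {
    0: {"a": [(0.5, 1, 0.0), (0.5, 2, 0.0)]},
    1: {"a": [(1.0, 3, 1.0)], "b": [(1.0, 2, 0.5)]},
    2: {"a": [(1.0, 3, 0.2)]},
    3: {},
}

# Listing each state's successors as "prerequisites" makes static_order()
# emit successors first, so the goal comes out at the front of the list.
succ = {s: {s2 for outs in P[s].values() for _, s2, _ in outs} for s in P}
order = list(TopologicalSorter(succ).static_order())

V = {}
for s in order:          # each state backed up once, after its successors
    V[s] = max((sum(p * (r + V[s2]) for p, s2, r in outs)
                for outs in P[s].values()), default=0.0)
print(V)                 # e.g. {3: 0.0, 2: 0.2, 1: 1.0, 0: 0.6}
```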
2-hop neighbor's text information: Learning to Act using Real-Time Dynamic Programming. : The authors thank Rich Yee, Vijay Gullapalli, Brian Pinette, and Jonathan Bachrach for helping to clarify the relationships between heuristic search and control. We thank Rich Sutton, Chris Watkins, Paul Werbos, and Ron Williams for sharing their fundamental insights into this subject through numerous discussions, and we further thank Rich Sutton for first making us aware of Korf's research and for his very thoughtful comments on the manuscript. We are very grateful to Dimitri Bertsekas and Steven Sullivan for independently pointing out an error in an earlier version of this article. Finally, we thank Harry Klopf, whose insight and persistence encouraged our interest in this class of learning problems. This research was supported by grants to A.G. Barto from the National Science Foundation (ECS-8912623 and ECS-9214866) and the Air Force Office of Scientific Research, Bolling AFB (AFOSR-89-0526).
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 5 | 2 | cora | train | 55 |
The node content
null
1-hop neighbor's text information: Adapting the evaluation space to improve global learning. :
1-hop neighbor's text information: Adaptation in constant utility nonstationary environments. : Environments that vary over time present a fundamental problem to adaptive systems. Although in the worst case there is no hope of effective adaptation, some forms of environmental variability do provide adaptive opportunities. We consider a broad class of non-stationary environments, those which combine a variable result function with an invariant utility function, and demonstrate via simulation that an adaptive strategy employing both evolution and learning can tolerate a much higher rate of environmental variation than an evolution-only strategy. We suggest that in many cases where stability has previously been assumed, the constant utility non-stationary environment may in fact be a more powerful viewpoint.
1-hop neighbor's text information: Evolution of mapmaking ability: Strategies for the evolution of learning, planning, and memory using genetic programming. : An essential component of an intelligent agent is the ability to observe, encode, and use information about its environment. Traditional approaches to Genetic Programming have focused on evolving functional or reactive programs with only a minimal use of state. This paper presents an approach for investigating the evolution of learning, planning, and memory using Genetic Programming. The approach uses a multi-phasic fitness environment that enforces the use of memory and allows fairly straightforward comprehension of the evolved representations. An illustrative problem of 'gold' collection is used to demonstrate the usefulness of the approach. The results indicate that the approach can evolve programs that store simple representations of their environments and use these representations to produce simple plans.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 3 | 1 | cora | train | 56 |
The node content
null
1-hop neighbor's text information: Graphical Models in Applied Multivariate Statistics. :
1-hop neighbor's text information: Using path diagrams as a structural equation modeling tool. :
1-hop neighbor's text information: A theory of inferred causation. : This paper concerns the empirical basis of causation, and addresses the following issues: We propose a minimal-model semantics of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covariations following standard norms of inductive reasoning. We also establish a sound characterization of the conditions under which such a distinction is possible. We provide an effective algorithm for inferred causation and show that, for a large class of data, the algorithm can uncover the direction of causal influences as defined above. Finally, we address the issue of non-temporal causation.
2-hop neighbor's text information: Bayesian Networks:
2-hop neighbor's text information: A Parallel Learning Algorithm for Bayesian Inference Networks: We present a new parallel algorithm for learning Bayesian inference networks from data. Our learning algorithm exploits both properties of the MDL-based score metric and a distributed, asynchronous, adaptive search technique called nagging. Nagging is intrinsically fault tolerant, has dynamic load balancing features, and scales well. We demonstrate the viability, effectiveness, and scalability of our approach empirically with several experiments using on the order of 20 machines. More specifically, we show that our distributed algorithm can provide optimal solutions for larger problems as well as good solutions for Bayesian networks of up to 150 variables.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 6 | 2 | cora | train | 59 |
The node content
null
1-hop neighbor's text information: Introduction to the Theory of Neural Computation. : Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines, such as statistics or control theory. The Introduction to the Theory of Neural Computation by Hertz, Krogh and Palmer (subsequently referred to as HKP) is written from the perspective of physics, the home discipline of the authors. The book fulfills its mission as an introduction for neural network novices, provided that they have some background in calculus, linear algebra, and statistics. It covers a number of models that are often viewed as disjoint. Critical analyses and fruitful comparisons between these models
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 3 | 1 | cora | train | 61 |
The node content
null
1-hop neighbor's text information: Computational complexity reduction for BN2O networks using similarity of states: Although probabilistic inference in a general Bayesian belief network is an NP-hard problem, inference computation time can be reduced in most practical cases by exploiting domain knowledge and by making appropriate approximations in the knowledge representation. In this paper we introduce the property of similarity of states and a new method for approximate knowledge representation which is based on this property. We define two or more states of a node to be similar when the likelihood ratio of their probabilities does not depend on the instantiations of the other nodes in the network. We show that the similarity of states exposes redundancies in the joint probability distribution which can be exploited to reduce the computational complexity of probabilistic inference in networks with multiple similar states. For example, we show that a BN2O network (a two-layer network often used in diagnostic problems) can be reduced to a very close network with multiple similar states. Probabilistic inference in the new network can be done in only polynomial time with respect to the size of the network, and the results for queries of practical importance are very close to the results that can be obtained in exponential time with the original network. The error introduced by our reduction converges to zero faster than exponentially with respect to the degree of the polynomial describing the resulting computational complexity.
1-hop neighbor's text information: Efficient Inference in Bayes Nets as a Combinatorial Optimization Problem, : A number of exact algorithms have been developed to perform probabilistic inference in Bayesian belief networks in recent years. The techniques used in these algorithms are closely related to network structures and some of them are not easy to understand and implement. In this paper, we consider the problem from the combinatorial optimization point of view and state that efficient probabilistic inference in a belief network is a problem of finding an optimal factoring given a set of probability distributions. From this viewpoint, previously developed algorithms can be seen as alternate factoring strategies. In this paper, we define a combinatorial optimization problem, the optimal factoring problem, and discuss application of this problem in belief networks. We show that optimal factoring provides insight into the key elements of efficient probabilistic inference, and demonstrate simple, easily implemented algorithms with excellent performance.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 6 | 1 | cora | train | 63 |
The node content
null
1-hop neighbor's text information: Learning in the presence of malicious errors, : In this paper we study an extension of the distribution-free model of learning introduced by Valiant [23] (also known as the probably approximately correct or PAC model) that allows the presence of malicious errors in the examples given to a learning algorithm. Such errors are generated by an adversary with unbounded computational power and access to the entire history of the learning algorithm's computation. Thus, we study a worst-case model of errors. Our results include general methods for bounding the rate of error tolerable by any learning algorithm, efficient algorithms tolerating nontrivial rates of malicious errors, and equivalences between problems of learning with errors and standard combinatorial optimization problems.
1-hop neighbor's text information: Efficient learning of typical finite automata from random walks. : This paper describes new and efficient algorithms for learning deterministic finite automata. Our approach is primarily distinguished by two features: (1) the adoption of an average-case setting to model the "typical" labeling of a finite automaton, while retaining a worst-case model for the underlying graph of the automaton, along with (2) a learning model in which the learner is not provided with the means to experiment with the machine, but rather must learn solely by observing the automaton's output behavior on a random input sequence. The main contribution of this paper is in presenting the first efficient algorithms for learning non-trivial classes of automata in an entirely passive learning model. We adopt an on-line learning model in which the learner is asked to predict the output of the next state, given the next symbol of the random input sequence; the goal of the learner is to make as few prediction mistakes as possible. Assuming the learner has a means of resetting the target machine to a fixed start state, we first present an efficient algorithm that makes an expected polynomial number of mistakes in this model. Next, we show how this first algorithm can be used as a subroutine by a second algorithm that also makes a polynomial number of mistakes even in the absence of a reset. Along the way, we prove a number of combinatorial results for randomly labeled automata. We also show that the labeling of the states and the bits of the input sequence need not be truly random, but merely semi-random. Finally, we discuss an extension of our results to a model in which automata are used to represent distributions over binary strings.
1-hop neighbor's text information: Learning Markov chains with variable memory length from noisy output: The problem of modeling complicated data sequences, such as DNA or speech, often arises in practice. Most of the algorithms select a hypothesis from within a model class assuming that the observed sequence is the direct output of the underlying generation process. In this paper we consider the case when the output passes through a memoryless noisy channel before observation. In particular, we show that in the class of Markov chains with variable memory length, learning is affected by factors, which, despite being super-polynomial, are still small in some practical cases. Markov models with variable memory length, or probabilistic finite suffix automata, were introduced in learning theory by Ron, Singer and Tishby, who also described a polynomial time learning algorithm [11, 12]. We present a modification of the algorithm which uses a noise-corrupted sample and has knowledge of the noise structure. The same algorithm is still viable if the noise is not known exactly but a good estimation is available. Finally, some experimental results are presented for removing noise from corrupted English text, and to measure how the performance of the learning algorithm is affected by the size of the noisy sample and the noise rate.
2-hop neighbor's text information: Statistical queries and faulty PAC oracles. : In this paper we study learning in the PAC model of Valiant [18] in which the example oracle used for learning may be faulty in one of two ways: either by misclassifying the example or by distorting the distribution of examples. We first consider models in which examples are misclassified. Kearns [12] recently showed that efficient learning in a new model using statistical queries is a sufficient condition for PAC learning with classification noise. We show that efficient learning with statistical queries is sufficient for learning in the PAC model with malicious error rate proportional to the required statistical query accuracy. One application of this result is a new lower bound for tolerable malicious error in learning monomials of k literals. This is the first such bound which is independent of the number of irrelevant attributes n. We also use the statistical query model to give sufficient conditions for using distribution specific algorithms on distributions outside their prescribed domains. A corollary of this result expands the class of distributions on which we can weakly learn monotone Boolean formulae. We also consider new models of learning in which examples are not chosen according to the distribution on which the learner will be tested. We examine three variations of distribution noise and give necessary and sufficient conditions for polynomial time learning with such noise. We show containments and separations between the various models of faulty oracles. Finally, we examine hypothesis boosting algorithms in the context of learning with distribution noise, and show that Schapire's result regarding the strength of weak learnability [17] is in some sense tight in requiring the weak learner to be nearly distribution free.
2-hop neighbor's text information: The Power of a Pebble: Exploring and Mapping Directed Graphs: Exploring and mapping an unknown environment is a fundamental problem, which is studied in a variety of contexts. Many works have focused on finding efficient solutions to restricted versions of the problem. In this paper, we consider a model that makes very limited assumptions on the environment and solve the mapping problem in this general setting. We model the environment by an unknown directed graph G, and consider the problem of a robot exploring and mapping G. We do not assume that the vertices of G are labeled, and thus the robot has no hope of succeeding unless it is given some means of distinguishing between vertices. For this reason we provide the robot with a pebble: a device that it can place on a vertex and use to identify the vertex later. In this paper we show: (1) If the robot knows an upper bound on the number of vertices then it can learn the graph efficiently with only one pebble. (2) If the robot does not know an upper bound on the number of vertices n, then Θ(log log n) pebbles are both necessary and sufficient. In both cases our algorithms are deterministic.
Here I give you the content of the node itself and the information of its neighbors. The relation between the node and its 1-hop neighbors is 'citation', while the relation between its 1-hop neighbors and 2-hop neighbors is also 'citation'. Question: Which of the following topics does this scientific publication discuss? Here are the 7 categories:
0: Rule_Learning
1: Neural_Networks
2: Case_Based
3: Genetic_Algorithms
4: Theory
5: Reinforcement_Learning
6: Probabilistic_Methods
Reply only one category id from 0 to 6 that you think this scientific publication might belong to.
| 4 | 2 | cora | train | 64 |
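The rows above suggest a direct way to consume this dataset programmatically. Below is a minimal sketch using the Hugging Face `datasets` library; the repository id `user/cora-hop-classification` is a hypothetical placeholder (the real id is not shown in this preview), and the column names assume the layout shown in the preview header (problem, solution, n_hop, dataset, split).

```python
# Minimal sketch: load the dataset shown in this preview and inspect one row.
# NOTE: "user/cora-hop-classification" is a hypothetical placeholder repo id;
# substitute the actual repository id for this dataset.
from datasets import load_dataset

ds = load_dataset("user/cora-hop-classification", split="train")

row = ds[0]
print(row["problem"][:300])  # classification prompt: node text plus neighbor abstracts
print(row["solution"])       # gold category id (0-6), e.g. 3 for Genetic_Algorithms
print(row["n_hop"])          # neighborhood radius used to build the prompt (1 or 2)
```

Each example pairs a Cora citation-graph prompt (`problem`) with a single category id (`solution`), so evaluating a model on this data amounts to comparing its one-token reply against `solution`.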