Dataset schema (recovered from the viewer's column summary):
- problem: string, lengths 219 to 3.68k characters
- solution: string, 7 distinct values (the category labels)
- dataset: string, 1 distinct value
- split: string, 1 distinct value
- __index_level_0__: int64, values 12 to 2.7k

Each record below lists its fields in this order: problem, solution, dataset, split, __index_level_0__.
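A minimal loading sketch using the Hugging Face datasets library, assuming this table is hosted as a dataset repository; the repository path "your-namespace/cora-node-classification" is a placeholder, not the actual name, which the viewer output does not show:

    # Load the table and inspect the schema summarized above.
    from collections import Counter
    from datasets import load_dataset

    # Placeholder repository path; substitute the real one.
    ds = load_dataset("your-namespace/cora-node-classification", split="train")

    print(ds.column_names)    # ['problem', 'solution', 'dataset', 'split', '__index_level_0__']
    print(ds[0]["solution"])  # one of the 7 category labels, e.g. 'Theory'
    print(Counter(ds["solution"]))  # row count per category label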
Classify the node ' Mistake-driven learning in text categorization. : Learning problems in the text processing domain often map the text to a space whose dimensions are the measured features of the text, e.g., its words. Three characteristic properties of this domain are (a) very high dimensionality, (b) both the learned concepts and the instances reside very sparsely in the feature space, and (c) a high variation in the number of active features in an instance. In this work we study three mistake-driven learning algorithms for a typical task of this nature - text categorization. We argue that these algorithms which categorize documents by learning a linear separator in the feature space have a few properties that make them ideal for this domain. We then show that a quantum leap in performance is achieved when we further modify the algorithms to better address some of the specific characteristics of the domain. In particular, we demonstrate (1) how variation in document length can be tolerated by either normalizing feature weights or by using negative weights, (2) the positive effect of applying a threshold range in training, (3) alternatives in considering feature frequency, and (4) the benefits of discarding features while training. Overall, we present an algorithm, a variation of Littlestone's Winnow, which performs significantly better than any other algorithm tested on this task using a similar feature set.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Theory
cora
train
1,801
Classify the node ' Issues in goal-driven explanation. : When a reasoner explains surprising events for its internal use, a key motivation for explaining is to perform learning that will facilitate the achievement of its goals. Human explainers use a range of strategies to build explanations, including both internal reasoning and external information search, and goal-based considerations have a profound effect on their choices of when and how to pursue explanations. However, standard AI models of explanation rely on goal-neutral use of a single fixed strategy, generally backwards chaining, to build their explanations. This paper argues that explanation should be modeled as a goal-driven learning process for gathering and transforming information, and discusses the issues involved in developing an active multi-strategy process for goal-driven explanation.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Case Based
cora
train
1,863
Classify the node ' Preventing "overfitting" of Cross-Validation data. : Suppose that, for a learning task, we have to select one hypothesis out of a set of hypotheses (that may, for example, have been generated by multiple applications of a randomized learning algorithm). A common approach is to evaluate each hypothesis in the set on some previously unseen cross-validation data, and then to select the hypothesis that had the lowest cross-validation error. But when the cross-validation data is partially corrupted, such as by noise, and if the set of hypotheses we are selecting from is large, then "folklore" also warns about "overfitting" the cross-validation data [Klockars and Sax, 1986, Tukey, 1949, Tukey, 1953]. In this paper, we explain how this "overfitting" really occurs, and show the surprising result that it can be overcome by selecting a hypothesis with a higher cross-validation error, over others with lower cross-validation errors. We give reasons for not selecting the hypothesis with the lowest cross-validation error, and propose a new algorithm, LOOCVCV, that uses a computationally efficient form of leave-one-out cross-validation to select such a hypothesis. Finally, we present experimental results for one domain that show LOOCVCV consistently beating the selection of the hypothesis with the lowest cross-validation error, even when using reasonably large cross-validation sets.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Theory
cora
train
1,864
Classify the node 'Technical Diagnosis: Fallexperte-D: Case based reasoning (CBR) uses the knowledge from former experiences ("known cases"). Since the special knowledge of an expert is mainly a matter of his experiences, CBR techniques are a good base for the development of expert systems. We investigate the problem for technical diagnosis; the use of further knowledge sources (domain knowledge, common knowledge) is investigated as well. Diagnosis is not considered as a classification task, but as a process to be guided by computer-assisted experience. This corresponds to the flexible "case completion" approach. Flexibility is also needed for the expert view, with predominant interest in the unexpected, unpredictable cases.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Case Based
cora
train
1,866
Classify the node ' Techniques for extracting instruction level parallelism on MIMD architectures. : Extensive research has been done on extracting parallelism from single instruction stream processors. This paper presents some results of our investigation into ways to modify MIMD architectures to allow them to extract the instruction level parallelism achieved by current superscalar and VLIW machines. A new architecture is proposed which utilizes the advantages of a multiple instruction stream design while addressing some of the limitations that have prevented MIMD architectures from performing ILP operation. A new code scheduling mechanism is described to support this new architecture by partitioning instructions across multiple processing elements in order to exploit this level of parallelism.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Rule Learning
cora
train
1,881
Classify the node ' Partition-based uniform error bounds. : This paper develops probabilistic bounds on out-of-sample error rates for several classifiers using a single set of in-sample data. The bounds are based on probabilities over partitions of the union of in-sample and out-of-sample data into in-sample and out-of-sample data sets. The bounds apply when in-sample and out-of-sample data are drawn from the same distribution. Partition-based bounds are stronger than VC-type bounds, but they require more computation.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Theory
cora
train
1,886
Classify the node ' Task-oriented Knowledge Acquisition and Reasoning for Design Support Systems. : We present a framework for task-driven knowledge acquisition in the development of design support systems. Different types of knowledge that enter the knowledge base of a design support system are defined and illustrated both from a formal and from a knowledge acquisition vantage point. Special emphasis is placed on the task-structure, which is used to guide both acquisition and application of knowledge. Starting with knowledge for planning steps in design and augmenting this with problem-solving knowledge that supports design, a formal integrated model of knowledge for design is constructed. Based on the notion of knowledge acquisition as an incremental process we give an account of possibilities for problem solving depending on the knowledge that is at the disposal of the system. Finally, we depict how different kinds of knowledge interact in a design support system. (Footnote: This research was supported by the German Ministry for Research and Technology (BMFT) within the joint project FABEL under contract no. 413-4001-01IW104. Project partners in FABEL are German National Research Center of Computer Science (GMD), Sankt Augustin, BSR Consulting GmbH, Munchen, Technical University of Dresden, HTWK Leipzig, University of Freiburg, and University of Karlsruhe.)' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Case Based
cora
train
1,897
Classify the node ' "Staged hybrid genetic search for seismic data imaging," : Seismic data interpretation problems are typically solved using computationally intensive local search methods which often result in inferior solutions. Here, a traditional hybrid genetic algorithm is compared with different staged hybrid genetic algorithms on the geophysical imaging static corrections problem. The traditional hybrid genetic algorithm used here applied local search to every offspring produced by genetic search. The staged hybrid genetic algorithms were designed to temporally separate the local and genetic search components into distinct phases so as to minimize interference between the two search methods. The results show that some staged hybrid genetic algorithms produce higher quality solutions while using significantly less computational time for this problem.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Genetic Algorithms
cora
train
1,922
Classify the node 'Knowledge Acquisition with a Knowledge-Intensive Machine Learning System: In this paper, we investigate the integration of knowledge acquisition and machine learning techniques. We argue that existing machine learning techniques can be made more useful as knowledge acquisition tools by allowing the expert to have greater control over and interaction with the learning process. We describe a number of extensions to FOCL (a multistrategy Horn-clause learning program) that have greatly enhanced its power as a knowledge acquisition tool, paying particular attention to the utility of maintaining a connection between a rule and the set of examples explained by the rule. The objective of this research is to make the modification of a domain theory analogous to the use of a spreadsheet. A prototype knowledge acquisition tool, FOCL-1-2-3, has been constructed in order to evaluate the strengths and weaknesses of this approach.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Rule Learning
cora
train
1,923
Classify the node ' Fitness causes bloat in variable size representations. : We argue, based upon the numbers of representations of a given length, that an increase in representation length is inherent in using a fixed evaluation function with a discrete but variable length representation. Two examples of this are analysed, including the use of Price's Theorem. Both examples confirm that the tendency for solutions to grow in size is caused by fitness-based selection.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Genetic Algorithms
cora
train
2,009
Classify the node ' Density estimation by wavelet thresholding. : Density estimation is a commonly used test case for non-parametric estimation methods. We explore the asymptotic properties of estimators based on thresholding of empirical wavelet coefficients. Minimax rates of convergence are studied over a large range of Besov function classes B_{s,p,q} and for a range of global L_{p′} error measures, 1 ≤ p′ < ∞. A single wavelet threshold estimator is asymptotically minimax within logarithmic terms simultaneously over a range of spaces and error measures. In particular, when p′ > p, some form of non-linearity is essential, since the minimax linear estimators are suboptimal by polynomial powers of n. A second approach, using an approximation of a Gaussian white noise model in a Mallows metric, is used. Acknowledgements: We thank Alexandr Sakhanenko for helpful discussions and references to his work on Berry Esseen theorems used in Section 5. This work was supported in part by NSF DMS 92-09130. The second author would like to thank Universite de' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Probabilistic Methods
cora
train
2,013
Classify the node ' Incremental reduced error pruning. : This paper outlines some problems that may occur with Reduced Error Pruning in Inductive Logic Programming, most notably efficiency. Thereafter a new method, Incremental Reduced Error Pruning, is proposed that attempts to address all of these problems. Experiments show that in many noisy domains this method is much more efficient than alternative algorithms, along with a slight gain in accuracy. However, the experiments show as well that the use of this algorithm cannot be recommended for domains with a very specific concept description.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Rule Learning
cora
train
2,049
Classify the node 'An Efficient Method To Estimate Bagging's Generalization Error: In bagging [Bre94a] one uses bootstrap replicates of the training set [Efr79, ET93] to try to improve a learning algorithm's performance. The computational requirements for estimating the resultant generalization error on a test set by means of cross-validation are often prohibitive; for leave-one-out cross-validation one needs to train the underlying algorithm on the order of m × k times, where m is the size of the training set and k is the number of replicates. This paper presents several techniques for exploiting the bias-variance decomposition [GBD92, Wol96] to estimate the generalization error of a bagged learning algorithm without invoking yet more training of the underlying learning algorithm. The best of our estimators exploits stacking [Wol92]. In a set of experiments reported here, it was found to be more accurate than both the alternative cross-validation-based estimator of the bagged algorithm's error and the cross-validation-based estimator of the underlying algorithm's error. This improvement was particularly pronounced for small test sets. This suggests a novel justification for using bagging: improved estimation of generalization error.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Theory
cora
train
2,106
Classify the node ' Automated decomposition of model-based learning problems. : A new generation of sensor rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional, model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Probabilistic Methods
cora
train
2,117
Classify the node ' Extraction of meta-knowledge to restrict the hypothesis space for ILP systems. : Many ILP systems, such as GOLEM, FOIL, and MIS, take advantage of user supplied meta-knowledge to restrict the hypothesis space. This meta-knowledge can be in the form of type information about arguments in the predicate being learned, or it can be information about whether a certain argument in the predicate is functionally dependent on the other arguments (supplied as mode information). This meta-knowledge is explicitly supplied to an ILP system in addition to the data. The present paper argues that in many cases the meta-knowledge can be extracted directly from the raw data. Three algorithms are presented that learn type, mode, and symmetric meta-knowledge from data. These algorithms can be incorporated in existing ILP systems in the form of a preprocessor that obviates the need for a user to explicitly provide this information. In many cases, the algorithms can extract meta-knowledge that the user is unaware of, but which can be used by the ILP system to restrict the hypothesis space.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Rule Learning
cora
train
2,119
Classify the node 'Program Optimization for Faster Genetic Programming: We have used genetic programming to develop efficient image processing software. The ultimate goal of our work is to detect certain signs of breast cancer that cannot be detected with current segmentation and classification methods. Traditional techniques do a relatively good job of segmenting and classifying small-scale features of mammograms, such as micro-calcification clusters. Our strongly-typed genetic programs work on a multi-resolution representation of the mammogram, and they are aimed at handling features at medium and large scales, such as stellated lesions and architectural distortions. The main problem is efficiency. We employ program optimizations that speed up the evolution process by more than a factor of ten. In this paper we present our genetic programming system, and we describe our optimization techniques.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Genetic Algorithms
cora
train
2,132
Classify the node ' Bias, variance, and error correcting output codes for local learners. : This paper focuses on a bias variance decomposition analysis of a local learning algorithm, the nearest neighbor classifier, that has been extended with error correcting output codes. This extended algorithm often considerably reduces the 0-1 (i.e., classification) error in comparison with nearest neighbor (Ricci & Aha, 1997). The analysis presented here reveals that this performance improvement is obtained by drastically reducing bias at the cost of increasing variance. We also show that, even in classification problems with few classes (m ≤ 5), extending the codeword length beyond the limit that assures column separation yields an error reduction. This error reduction is not only in the variance, which is due to the voting mechanism used for error-correcting output codes, but also in the bias.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Theory
cora
train
2,133
Classify the node ' On the convergence of stochastic iterative dynamic programming algorithms. : This project was supported in part by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, and by grant N00014-90-J-1942 from the Office of Naval Research. The project was also supported by NSF grant ASC-9217041 in support of the Center for Biological and Computational Learning at MIT, including funds provided by DARPA under the HPCC program. Michael I. Jordan is a NSF Presidential Young Investigator.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Reinforcement Learning
cora
train
2,201
Classify the node ' Bayesian training of backpropagation networks by the hybrid monte carlo method. : It is shown that Bayesian training of backpropagation neural networks can feasibly be performed by the "Hybrid Monte Carlo" method. This approach allows the true predictive distribution for a test case given a set of training cases to be approximated arbitrarily closely, in contrast to previous approaches which approximate the posterior weight distribution by a Gaussian. In this work, the Hybrid Monte Carlo method is implemented in conjunction with simulated annealing, in order to speed relaxation to a good region of parameter space. The method has been applied to a test problem, demonstrating that it can produce good predictions, as well as an indication of the uncertainty of these predictions. Appropriate weight scaling factors are found automatically. By applying known techniques for calculation of "free energy" differences, it should also be possible to compare the merits of different network architectures. The work described here should also be applicable to a wide variety of statistical models other than neural networks.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Neural Networks
cora
train
2,217
Classify the node 'The Expectation-Maximization Algorithm for MAP Estimation: The Expectation-Maximization algorithm given by Dempster et al. (1977) has enjoyed considerable popularity for solving MAP estimation problems. This note gives a simple derivation of the algorithm, due to Luttrell (1994), that better illustrates the convergence properties of the algorithm and its variants. The algorithm is illustrated with two examples: pooling data from multiple noisy sources and fitting a mixture density.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Probabilistic Methods
cora
train
2,257
Classify the node 'Bottom-up induction of logic programs with more than one recursive clause: In this paper we present a bottom-up algorithm called MRI to induce logic programs from their examples. This method can induce programs with a base clause and more than one recursive clause from a very small number of examples. MRI is based on the analysis of saturations of examples. It first generates a path structure, which is an expression of a stream of values processed by predicates. The concept of path structure was originally introduced by Idestam-Almquist and used in TIM [Idestam-Almquist, 1996]. In this paper, we introduce the concepts of extension and difference of path structure. Recursive clauses can be expressed as a difference between a path structure and its extension. The paper presents the algorithm and shows experimental results obtained by the method.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Rule Learning
cora
train
2,258
Classify the node ' Weakly Learning DNF and Characterizing Statistical Query Learning Using Fourier Analysis, : We present new results, both positive and negative, on the well-studied problem of learning disjunctive normal form (DNF) expressions. We first prove that an algorithm due to Kushilevitz and Mansour [16] can be used to weakly learn DNF using membership queries in polynomial time, with respect to the uniform distribution on the inputs. This is the first positive result for learning unrestricted DNF expressions in polynomial time in any nontrivial formal model of learning. It provides a sharp contrast with the results of Kharitonov [15], who proved that AC^0 is not efficiently learnable in the same model (given certain plausible cryptographic assumptions). We also present efficient learning algorithms in various models for the read-k and SAT-k subclasses of DNF. For our negative results, we turn our attention to the recently introduced statistical query model of learning [11]. This model is a restricted version of the popular Probably Approximately Correct (PAC) model [23], and practically every class known to be efficiently learnable in the PAC model is in fact learnable in the statistical query model [11]. Here we give a general characterization of the complexity of statistical query learning in terms of the number of uncorrelated functions in the concept class. This is a distribution-dependent quantity yielding upper and lower bounds on the number of statistical queries required for learning on any input distribution. As a corollary, we obtain that DNF expressions and decision trees are not even weakly learnable with respect to the uniform input distribution in polynomial time in the statistical query model. This result is information-theoretic and therefore does not rely on any unproven assumptions. It demonstrates that no simple modification of the existing algorithms in the computational learning theory literature for learning various restricted forms of DNF and decision trees from passive random examples (and also several algorithms proposed in the experimental machine learning communities, such as the ID3 algorithm for decision trees [22] and its variants) will solve the general problem. The unifying tool for all of our results is the Fourier analysis of a finite class of boolean functions on the hypercube. (Footnote: This research is sponsored in part by the Wright Laboratory, Aeronautical Systems Center, Air Force Materiel Command, USAF, and the Advanced Research Projects Agency (ARPA) under grant number F33615-93-1-1330. Support also is sponsored by the National Science Foundation under Grant No. CC-9119319. Blum also supported in part by NSF National Young Investigator grant CCR-9357793. Views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing official policies or endorsements, either expressed or implied, of Wright Laboratory or the United States Government, or NSF.)' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Theory
cora
train
2,275
Classify the node ' Inverse Entailment and Progol. : This paper firstly provides a re-appraisal of the development of techniques for inverting deduction, secondly introduces Mode-Directed Inverse Entailment (MDIE) as a generalisation and enhancement of previous approaches and thirdly describes an implementation of MDIE in the Progol system. Progol is implemented in C and available by anonymous ftp. The re-assessment of previous techniques in terms of inverse entailment leads to new results for learning from positive data and inverting implication between pairs of clauses.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Rule Learning
cora
train
2,308
Classify the node ' On-line adaptive critic for changing systems. : In this paper we propose a reactive critic that is able to respond to changing situations. We will explain why this is useful in reinforcement learning, where the critic is used to improve the control strategy. We take a problem for which we can derive the solution analytically. This enables us to investigate the relation between the parameters and the resulting approximations of the critic. We will also demonstrate how the reactive critic responds to changing situations.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Reinforcement Learning
cora
train
2,413
Classify the node 'Problem Solving for Redesign: A knowledge-level analysis of complex tasks like diagnosis and design can give us a better understanding of these tasks in terms of the goals they aim to achieve and the different ways to achieve these goals. In this paper we present a knowledge-level analysis of redesign. Redesign is viewed as a family of methods based on some common principles, and a number of dimensions along which redesign problem solving methods can vary are distinguished. By examining the problem-solving behavior of a number of existing redesign systems and approaches, we came up with a collection of problem-solving methods for redesign and developed a task-method structure for redesign. In constructing a system for redesign a large number of knowledge-related choices and decisions are made. In order to describe all relevant choices in redesign problem solving, we have to extend the current notion of possible relations between tasks and methods in a PSM architecture. The realization of a task by a PSM, and the decomposition of a PSM into subtasks are the most common relations in a PSM architecture. However, we suggest to extend these relations with the notions of task refinement and method refinement. These notions represent intermediate decisions in a task-method structure, in which the competence of a task or method is refined without immediately paying attention to the operationalization in terms of subtasks. Explicit representation of this kind of intermediate decisions helps to make and represent decisions in a more piecemeal fashion. (Footnote: This work has been funded by NWO/SION within project 612-322-316, Evolutionary design in knowledge-based systems (the REVISE-project). Participants in the REVISE-project are: the TWIST group at the University of Twente, the SWI department of the University of Amsterdam, the AI department of the Vrije Universiteit van Amsterdam and the STEVIN group at the University of Twente.)' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Case Based
cora
train
2,418
Classify the node ' (1995) "Suppressing random walks in Markov chain Monte Carlo using ordered overrelaxation", : Technical Report No. 9508, Department of Statistics, University of Toronto Markov chain Monte Carlo methods such as Gibbs sampling and simple forms of the Metropolis algorithm typically move about the distribution being sampled via a random walk. For the complex, high-dimensional distributions commonly encountered in Bayesian inference and statistical physics, the distance moved in each iteration of these algorithms will usually be small, because it is difficult or impossible to transform the problem to eliminate dependencies between variables. The inefficiency inherent in taking such small steps is greatly exacerbated when the algorithm operates via a random walk, as in such a case moving to a point n steps away will typically take around n 2 iterations. Such random walks can sometimes be suppressed using "overrelaxed" variants of Gibbs sampling (a.k.a. the heatbath algorithm), but such methods have hitherto been largely restricted to problems where all the full conditional distributions are Gaussian. I present an overrelaxed Markov chain Monte Carlo algorithm based on order statistics that is more widely applicable. In particular, the algorithm can be applied whenever the full conditional distributions are such that their cumulative distribution functions and inverse cumulative distribution functions can be efficiently computed. The method is demonstrated on an inference problem for a simple hierarchical Bayesian model.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Probabilistic Methods
cora
train
2,422
Classify the node 'Balls and Urns: We use a simple and illustrative example to expose some of the main ideas of Evidential Probability. Specifically, we show how the use of an acceptance rule naturally leads to the use of intervals to represent probabilities, how change of opinion due to experience can be facilitated, and how probabilities concerning compound experiments or events can be computed given the proper knowledge of the underlying distributions.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Probabilistic Methods
cora
train
2,424
Classify the node ' (1997) MCMC Convergence Diagnostic via the Central Limit Theorem. : Markov Chain Monte Carlo (MCMC) methods, as introduced by Gelfand and Smith (1990), provide a simulation based strategy for statistical inference. The application fields related to these methods, as well as theoretical convergence properties, have been intensively studied in the recent literature. However, many improvements are still expected to provide workable and theoretically well-grounded solutions to the problem of monitoring the convergence of actual outputs from MCMC algorithms (i.e. the convergence assessment problem). In this paper, we introduce and discuss a methodology based on the Central Limit Theorem for Markov chains to assess convergence of MCMC algorithms. Instead of searching for approximate stationarity, we primarily intend to control the precision of estimates of the invariant probability measure, or of integrals of functions with respect to this measure, through confidence regions based on normal approximation. The first proposed control method tests the normality hypothesis for normalized averages of functions of the Markov chain over independent parallel chains. This normality control provides good guarantees that the whole state space has been explored, even in multimodal situations. It can lead to automated stopping rules. A second tool connected with the normality control is based on graphical monitoring of the stabilization of the variance after n iterations near the limiting variance appearing in the CLT. Both methods require no knowledge of the sampler driving the chain. In this paper, we mainly focus on finite state Markov chains, since this setting allows us to derive consistent estimates of both the limiting variance and the variance after n iterations. Heuristic procedures based on Berry-Esseen bounds are investigated. An extension to the continuous case is also proposed. Numerical simulations illustrating the performance of these methods are given for several examples: a finite chain with multimodal invariant probability, a finite state random walk for which the theoretical rate of convergence to stationarity is known, and a continuous state chain with multimodal invariant probability issued from a Gibbs sampler.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Probabilistic Methods
cora
train
2,429
Classify the node ' a platform for emergencies management systems. : This paper describes the functional architecture of CHARADE, a software platform devoted to the development of a new generation of intelligent environmental decision support systems. The CHARADE platform is based on a task-oriented approach to system design and on the exploitation of a new architecture for problem solving that integrates case-based reasoning and constraint reasoning. The platform is developed in an object-oriented environment, upon which a demonstrator will be developed for managing first-intervention attacks on forest fires.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Case Based
cora
train
2,434
Classify the node 'Toward Rational Planning and Replanning: Rational Reason Maintenance, Reasoning Economies, and Qualitative Preferences: Efficiency dictates that plans for large-scale distributed activities be revised incrementally, with parts of plans being revised only if the expected utility of identifying and revising the subplans improves on the expected utility of using the original plan. The problems of identifying and reconsidering the subplans affected by changed circumstances or goals are closely related to the problems of revising beliefs as new or changed information is gained. But traditional techniques of reason maintenance, the standard method for belief revision, choose revisions arbitrarily and enforce global notions of consistency and groundedness which may mean reconsidering all beliefs or plan elements at each step. To address these problems, we developed (1) revision methods aimed at revising only those beliefs and plans worth revising, and tolerating incoherence and ungroundedness when these are judged less detrimental than a costly revision effort, (2) an artificial market economy in planning and revision tasks for arriving at overall judgments of worth, and (3) a representation for qualitative preferences that permits capture of common forms of dominance information. We view the activities of intelligent agents as stemming from interleaved or simultaneous planning, replanning, execution, and observation subactivities. In this model of the plan construction process, the agents continually evaluate and revise their plans in light of what happens in the world. Planning is necessary for the organization of large-scale activities because decisions about actions to be taken in the future have direct impact on what should be done in the shorter term. But even if well-constructed, the value of a plan decays as changing circumstances, resources, information, or objectives render the original course of action inappropriate. When changes occur before or during execution of the plan, it may be necessary to construct a new plan by starting from scratch or by revising a previous plan to change only the portions of the plan actually affected by the changes. Given the information accrued during plan execution, which remaining parts of the original plan should be salvaged and in what ways should other parts be changed? Incremental replanning first involves localizing the potential changes or conflicts by identifying the subset of the extant beliefs and plans in which they occur. It then involves choosing which of the identified beliefs and plans to keep and which to change. For greatest efficiency, the choices of what portion of the plan to revise and how to revise it should be based on coherent expectations about and preferences among the consequences of different alternatives so as to be rational in the sense of decision theory (Savage 1972). Our work toward mechanizing rational planning and replanning has focussed on four main issues: This paper focusses on the latter three issues; for our approach to the first, see (Doyle 1988; 1992). Replanning in an incremental and local manner requires that the planning procedures routinely identify the assumptions made during planning and connect plan elements with these assumptions, so that replanning may seek to change only those portions of a plan dependent upon assumptions brought into question by new information. Consequently, the problem of revising plans to account for changed conditions has much' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Probabilistic Methods
cora
train
2,447
Classify the node ' An empirical comparison of selection measures for decision-tree induction. : [Ourston and Mooney, 1990b] D. Ourston and R. J. Mooney. Improving shared rules in multiple category domain theories. Technical Report AI90-150, Artificial Intelligence Laboratory, University of Texas, Austin, TX, December 1990.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Theory
cora
train
2,456
Classify the node ' A Model-Based Approach to Blame Assignment in Design. : We analyze the blame-assignment task in the context of experience-based design and redesign of physical devices. We identify three types of blame-assignment tasks that differ in the types of information they take as input: the design does not achieve a desired behavior of the device; the design results in an undesirable behavior; or a specific structural element in the design misbehaves. We then describe a model-based approach for solving the blame-assignment task. This approach uses structure-behavior-function models that capture a designer's comprehension of the way a device works in terms of causal explanations of how its structure results in its behaviors. We also address the issue of indexing the models in memory. We discuss how the three types of blame-assignment tasks require different types of indices for accessing the models. Finally we describe the KRITIK2 system that implements and evaluates this model-based approach to blame assignment.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Case Based
cora
train
2,464
Classify the node ' Free energy coding. : In this paper, we introduce a new approach to the problem of optimal compression when a source code produces multiple codewords for a given symbol. It may seem that the most sensible codeword to use in this case is the shortest one. However, in the proposed free energy approach, random codeword selection yields an effective codeword length that can be less than the shortest codeword length. If the random choices are Boltzmann distributed, the effective length is optimal for the given source code. The expectation-maximization parameter estimation algorithms minimize this effective codeword length. We illustrate the performance of free energy coding on a simple problem where a compression factor of two is gained by using the new method.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Probabilistic Methods
cora
train
2,507
Classify the node ' Neural networks and statistical models. : ' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Neural Networks
cora
train
2,531
Classify the node ' Algorithms for partially observable markov decision processes. : Most exact algorithms for general POMDPs use a form of dynamic programming in which a piecewise-linear and convex representation of one value function is transformed into another. We examine variations of the "incremental pruning" approach for solving this problem and compare them to earlier algorithms from theoretical and empirical perspectives. We find that incremental pruning is presently the most efficient algorithm for solving POMDPs.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Reinforcement Learning
cora
train
2,597
Classify the node ' Opportunistic Reasoning: A Design Perspective. : An essential component of opportunistic behavior is opportunity recognition, the recognition of those conditions that facilitate the pursuit of some suspended goal. Opportunity recognition is a special case of situation assessment, the process of sizing up a novel situation. The ability to recognize opportunities for reinstating suspended problem contexts (one way in which goals manifest themselves in design) is crucial to creative design. In order to deal with real world opportunity recognition, we attribute limited inferential power to relevant suspended goals. We propose that goals suspended in the working memory monitor the internal (hidden) representations of the currently recognized objects. A suspended goal is satisfied when the current internal representation and a suspended goal match. We propose a computational model for working memory and we compare it with other relevant theories of opportunistic planning. This working memory model is implemented as part of our IMPROVISER system.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Case Based
cora
train
2,600
Classify the node ' Automatic Parameter Selection by Minimizing Estimated Error. : We address the problem of finding the parameter settings that will result in optimal performance of a given learning algorithm using a particular dataset as training data. We describe a "wrapper" method, considering determination of the best parameters as a discrete function optimization problem. The method uses best-first search and cross-validation to wrap around the basic induction algorithm: the search explores the space of parameter values, running the basic algorithm many times on training and holdout sets produced by cross-validation to get an estimate of the expected error of each parameter setting. Thus, the final selected parameter settings are tuned for the specific induction algorithm and dataset being studied. We report experiments with this method on 33 datasets selected from the UCI and StatLog collections using C4.5 as the basic induction algorithm. At a 90% confidence level, our method improves the performance of C4.5 on nine domains, degrades performance on one, and is statistically indistinguishable from C4.5 on the rest. On the sample of datasets used for comparison, our method yields an average 13% relative decrease in error rate. We expect to see similar performance improvements when using our method with other machine learning al gorithms.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Theory
cora
train
2,634
Classify the node ' A VLIW/SIMD Microprocessor for Artificial Neural Network Computations. : SPERT (Synthetic PERceptron Testbed) is a fully programmable single chip microprocessor designed for efficient execution of artificial neural network algorithms. The first implementation will be in a 1.2 µm CMOS technology with a 50 MHz clock rate, and a prototype system is being designed to occupy a double SBus slot within a Sun Sparcstation. SPERT will sustain over 300 × 10^6 connections per second during pattern classification, and around 100 × 10^6 connection updates per second while running the popular error backpropagation training algorithm. This represents a speedup of around two orders of magnitude over a Sparcstation-2 for algorithms of interest. An earlier system produced by our group, the Ring Array Processor (RAP), used commercial DSP chips. Compared with a RAP multiprocessor of similar performance, SPERT represents over an order of magnitude reduction in cost for problems where fixed-point arithmetic is satisfactory. (International Computer Science Institute, 1947 Center Street, Berkeley, CA 94704)' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Neural Networks
cora
train
2,648
Classify the node ' On the learnability of discrete distributions. : ' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Theory
cora
train
2,665
Classify the node 'Exploration in Machine Learning: Most researchers in machine learning have built their learning systems under the assumption that some external entity would do all the work of furnishing the learning experiences. Recently, however, investigators in several subfields of machine learning have designed systems that play an active role in choosing the situations from which they will learn. Such activity is generally called exploration. This paper describes a few of these exploratory learning projects, as reported in the literature, and attempts to extract a general account of the issues involved in exploration.' into one of the following categories: Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Theory
cora
train
2,699
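Every problem value above wraps a paper title and abstract in single quotes and appends the same fixed category list, so the raw text can be recovered for downstream experiments. A minimal sketch under that assumption (the wrapper pattern holds for the rows shown here but is not guaranteed for the full table; extract_abstract is an illustrative helper, not part of the dataset):

    import re

    # Fixed label set, copied from the prompts above.
    CATEGORIES = ["Rule Learning", "Neural Networks", "Case Based", "Genetic Algorithms",
                  "Theory", "Reinforcement Learning", "Probabilistic Methods"]

    def extract_abstract(problem: str) -> str:
        # Assumes the wrapper "Classify the node '<text>' into one of the
        # following categories: ..." observed in every row above.
        m = re.match(r"Classify the node '(.*)' into one of the following categories:",
                     problem, flags=re.DOTALL)
        return m.group(1).strip() if m else problem

    # Usage: pair abstracts with their labels for a text classifier,
    # given a loaded dataset `ds` as in the earlier sketch.
    # pairs = [(extract_abstract(r["problem"]), r["solution"]) for r in ds]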