Allen-UQ/Qwen2.5-7B-Instruct-GRPO-LoRA-Nei-Think
| problem (string, lengths 919–42.1k chars) | solution (string, 16 distinct values) | dataset (string, 3 distinct values) | split (string, 1 distinct value) |
|---|---|---|---|
Classify the node 'Reinforcement Learning for Job-Shop Scheduling: We apply reinforcement learning methods to learn domain-specific heuristics for job shop scheduling. A repair-based scheduler starts with a critical-path schedule and incrementally repairs constraint violations with the goal of finding a short conflict-free schedule. The temporal difference algorithm TD(λ) is applied to train a neural network to learn a heuristic evaluation function over states. This evaluation function is used by a one-step lookahead search procedure to find good solutions to new scheduling problems. We evaluate this approach on synthetic problems and on problems from a NASA space shuttle payload processing task. The evaluation function is trained on problems involving a small number of jobs and then tested on larger problems. The TD scheduler performs better than the best known existing algorithm for this task, Zweben's iterative repair method based on simulated annealing. The results suggest that reinforcement learning can provide a new method for constructing high-performance scheduling systems.' into one of the following categories:
Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Refer to neighbour nodes:
Neighbour node 0: Neuro-dynamic Programming.
Neighbour node 1: Value Function Based Production Scheduling: Production scheduling, the problem of sequentially configuring a factory to meet forecasted demands, is a critical problem throughout the manufacturing industry. The requirement of maintaining product inventories in the face of unpredictable demand and stochastic factory output makes standard scheduling models, such as job-shop, inadequate. Currently applied algorithms, such as simulated annealing and constraint propagation, must employ ad-hoc methods such as frequent replanning to cope with uncertainty. In this paper, we describe a Markov Decision Process (MDP) formulation of production scheduling which captures stochasticity in both production and demands. The solution to this MDP is a value function which can be used to generate optimal scheduling decisions online. A simple example illustrates the theoretical superiority of this approach over replanning-based methods. We then describe an industrial application and two reinforcement learning methods for generating an approximate value function on this domain. Our results demonstrate that in both deterministic and noisy scenarios, value function approximation is an effective technique.
Neighbour node 2: Generalization in reinforcement learning: Safely approximating the value function. : To appear in: G. Tesauro, D. S. Touretzky and T. K. Leen, eds., Advances in Neural Information Processing Systems 7, MIT Press, Cambridge MA, 1995. A straightforward approach to the curse of dimensionality in reinforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neural net. Although this has been successful in the domain of backgammon, there is no guarantee of convergence. In this paper, we show that the combination of dynamic programming and function approximation is not robust, and in even very benign cases, may produce an entirely wrong policy. We then introduce Grow-Support, a new algorithm which is safe from divergence yet can still reap the benefits of successful generalization.
Neighbour node 3: Learning to Predict User Operations for Adaptive Scheduling. : Mixed-initiative systems present the challenge of finding an effective level of interaction between humans and computers. Machine learning presents a promising approach to this problem in the form of systems that automatically adapt their behavior to accommodate different users. In this paper, we present an empirical study of learning user models in an adaptive assistant for crisis scheduling. We describe the problem domain and the scheduling assistant, then present an initial formulation of the adaptive assistant's learning task and the results of a baseline study. After this, we report the results of three subsequent experiments that investigate the effects of problem reformulation and representation augmentation. The results suggest that problem reformulation leads to significantly better accuracy without sacrificing the usefulness of the learned behavior. The studies also raise several interesting issues in adaptive assistance for scheduling.
Neighbour node 4: Robust Value Function Approximation by Working Backwards: Computing an accurate value function is the key: In this paper, we examine the intuition that TD(λ) is meant to operate by approximating asynchronous value iteration. We note that on the important class of discrete acyclic stochastic tasks, value iteration is inefficient compared with the DAG-SP algorithm, which essentially performs only one sweep instead of many by working backwards from the goal. The question we address in this paper is whether there is an analogous algorithm that can be used in large stochastic state spaces requiring function approximation. We present such an algorithm, analyze it, and give comparative results to TD on several domains. [...] the state). Using VI to solve MDPs belonging to either of these special classes can be quite inefficient, since VI performs backups over the entire space, whereas the only backups useful for improving V* are those on the "frontier" between already-correct and not-yet-correct V* values. In fact, there are classical algorithms for both problem classes which compute V* more efficiently by explicitly working backwards: for the deterministic class, Dijkstra's shortest-path algorithm; and for the acyclic class, Directed-Acyclic-Graph-Shortest-Paths (DAG-SP) [6]. DAG-SP first topologically sorts the MDP, producing a linear ordering of the states in which every state x precedes all states reachable from x. Then, it runs through that list in reverse, performing one backup per state. Worst-case bounds for VI, Dijkstra, and DAG-SP in deterministic domains with X states and A actions/state are (truncated). Footnote 1: Although [6] presents DAG-SP only for deterministic acyclic problems, it applies straightforwardly to the
Neighbour node 5: Value Function Approximations and Job-Shop Scheduling: We report a successful application of TD(λ) with value function approximation to the task of job-shop scheduling. Our scheduling problems are based on the problem of scheduling payload processing steps for the NASA space shuttle program. The value function is approximated by a 2-layer feedforward network of sigmoid units. A one-step lookahead greedy algorithm using the learned evaluation function outperforms the best existing algorithm for this task, which is an iterative repair method incorporating simulated annealing. To understand the reasons for this performance improvement, this paper introduces several measurements of the learning process and discusses several hypotheses suggested by these measurements. We conclude that the use of value function approximation is not a source of difficulty for our method, and in fact, it may explain the success of the method independent of the use of value iteration. Additional experiments are required to discriminate among our hypotheses.
Neighbour node 6: Finding structure in reinforcement learning. : Reinforcement learning addresses the problem of learning to select actions in order to maximize one's performance in unknown environments. To scale reinforcement learning to complex real-world tasks, such as typically studied in AI, one must ultimately be able to discover the structure in the world, in order to abstract away the myriad of details and to operate in more tractable problem spaces. This paper presents the SKILLS algorithm. SKILLS discovers skills, which are partially defined action policies that arise in the context of multiple, related tasks. Skills collapse whole action sequences into single operators. They are learned by minimizing the compactness of action policies, using a description length argument on their representation. Empirical results in simple grid navigation tasks illustrate the successful discovery of structure in reinforcement learning.
Neighbour node 7: Solving Combinatorial Optimization Tasks by Reinforcement Learning: A General Methodology Applied to Resource-Constrained Scheduling: This paper introduces a methodology for solving combinatorial optimization problems through the application of reinforcement learning methods. The approach can be applied in cases where several similar instances of a combinatorial optimization problem must be solved. The key idea is to analyze a set of "training" problem instances and learn a search control policy for solving new problem instances. The search control policy has the twin goals of finding high-quality solutions and finding them quickly. Results of applying this methodology to a NASA scheduling problem show that the learned search control policy is much more effective than the best known non-learning search procedure, a method based on simulated annealing.
Neighbour node 8: Learning to predict by the methods of temporal differences. : This article introduces a class of incremental learning procedures specialized for prediction, that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.
Neighbour node 9: High-Performance Job-Shop Scheduling With A Time-Delay TD(λ) Network: Job-shop scheduling is an important task for manufacturing industries. We are interested in the particular task of scheduling payload processing for NASA's space shuttle program. This paper summarizes our previous work on formulating this task for solution by the reinforcement learning algorithm TD(λ). A shortcoming of this previous work was its reliance on hand-engineered input features. This paper shows how to extend the time-delay neural network (TDNN) architecture to apply it to irregular-length schedules. Experimental tests show that this TDNN-TD(λ) network can match the performance of our previous hand-engineered system. The tests also show that both neural network approaches significantly outperform the best previous (non-learning) solution to this problem in terms of the quality of the resulting schedules and the number of search steps required to construct them.
| Reinforcement Learning | cora | train |
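The first record above centers on training a TD(λ) evaluation function over schedule states and then searching with one-step lookahead. As a minimal sketch of the flavor of algorithm the record names (not the paper's actual system), the following shows a TD(λ) update with eligibility traces for a linear value function; the feature map, step size, trace decay, and toy episode are assumptions made here for illustration.

```python
import numpy as np

def td_lambda_episode(features, rewards, w, alpha=0.01, gamma=1.0, lam=0.7):
    """One episode of TD(lambda) with a linear value function V(s) = w . phi(s).

    features: feature vectors phi(s_0), ..., phi(s_T) along the episode
    rewards:  rewards r_1, ..., r_T observed on each transition
    Returns the updated weight vector.
    """
    e = np.zeros_like(w)                          # eligibility trace
    for t, r in enumerate(rewards):
        v_t = w @ features[t]                     # current estimate V(s_t)
        v_next = w @ features[t + 1] if t + 1 < len(rewards) else 0.0
        delta = r + gamma * v_next - v_t          # TD error
        e = gamma * lam * e + features[t]         # decay and accumulate trace
        w = w + alpha * delta * e                 # move weights along the trace
    return w

# Toy usage: a 5-step episode with 3 features per state and a terminal cost.
rng = np.random.default_rng(0)
phis = [rng.normal(size=3) for _ in range(5)]
w = td_lambda_episode(phis, [0.0, 0.0, 0.0, 0.0, -1.0], np.zeros(3))
```

In the papers this record cites, the value function is a feedforward network over hand-engineered schedule features rather than this linear form; the update structure is the same.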
Classify the node 'Title: Impaired reductive regeneration of ascorbic acid in the Goto-Kakizaki diabetic rat.
Abstract: Ascorbic acid (AA) is a naturally occurring major antioxidant that is essential for the scavenging of toxic free radicals in both plasma and tissues. AA levels in plasma and tissues have been reported to be significantly lower than normal in diabetic animals and humans, and might contribute to the complications found at the late stages of diabetes. In this study, plasma and hepatic AA levels and AA regeneration were studied in the Goto-Kakizaki diabetic rat (GK rat) to elucidate the mechanism of decreasing plasma and hepatic AA levels in diabetes. AA concentrations in the plasma and liver were significantly lower in GK than in control rats. AA levels in primary cultured hepatocytes derived from GK rats were lower than those derived from control Wistar rats with or without dehydroascorbic acid (DHA) in the medium. Among various enzyme activities that reduce DHA to AA, the NADPH-dependent regeneration of AA in the liver was significantly suppressed in GK rats. Northern blot analysis revealed that only the expression of 3-alpha-hydroxysteroid dehydrogenase (AKR) was significantly suppressed in these rats. These results suggest that decreased AA-regenerating activity, probably through decreased expression of AKR, contributes to the decreased AA levels and increased oxidative stress in GK rats.' into one of the following categories:
Diabetes Mellitus, Experimental; Diabetes Mellitus Type 1; Diabetes Mellitus Type 2.
Refer to neighbour nodes:
Neighbour node 0: Title: Significance of glutathione-dependent antioxidant system in diabetes-induced embryonic malformations.
Abstract: Hyperglycemia-induced embryonic malformations may be due to an increase in radical formation and depletion of intracellular glutathione (GSH) in embryonic tissues. In the past, we have investigated the role of the glutathione-dependent antioxidant system and GSH on diabetes-related embryonic malformations. Embryos from streptozotocin-induced diabetic rats on gestational day 11 showed a significantly higher frequency of embryonic malformations (neural lesions 21.5 vs. 2.8%, P<0.001; nonneural lesions 47.4 vs. 6.4%, P<0.001) and growth retardation than those of normal mothers. The formation of intracellular reactive oxygen species (ROS), estimated by flow cytometry, was increased in isolated embryonic cells of diabetic rats on gestational day 11. The concentration of intracellular GSH in embryonic tissues of diabetic pregnant rats on day 11 was significantly lower than that of normal rats. The activity of gamma-glutamylcysteine synthetase (gamma-GCS), the rate-limiting GSH synthesizing enzyme, in embryos of diabetic rats was significantly low, associated with reduced expression of gamma-GCS mRNA. Administration of buthionine sulfoximine (BSO), a specific inhibitor of gamma-GCS, to diabetic rats during the period of maximal teratogenic susceptibility (days 6-11 of gestation) reduced GSH by 46.7% and increased the frequency of neural lesions (62.1 vs. 21.5%, P<0.01) and nonneural lesions (79.3 vs. 47.4%, P<0.01). Administration of GSH ester to diabetic rats restored GSH concentration in the embryos and reduced the formation of ROS, leading to normalization of neural lesions (1.9 vs. 21.5%) and improvement in nonneural lesions (26.7 vs. 47.4%) and growth retardation. Administration of insulin in another group of pregnant rats during the same period resulted in complete normalization of neural lesions (4.3 vs. 21.5%), nonneural lesions (4.3 vs. 47.4%), and growth retardation with the restoration of GSH contents. Our results indicate that GSH depletion and impaired responsiveness of GSH-synthesizing enzyme to oxidative stress during organogenesis may have important roles in the development of embryonic malformations in diabetes.
Neighbour node 1: Title: Enzymatic basis for altered ascorbic acid and dehydroascorbic acid levels in diabetes.
Abstract: Abnormal plasma ascorbic acid (AA) and dehydroascorbic acid (DHAA) levels observed in diabetes may be correlated to a deficiency in the recycling of AA. Ascorbic acid and DHAA levels are altered in diabetic liver in the present study. In addition, a coupling of the hexose monophosphate (HMP) shunt by way of NADPH to glutathione reductase and subsequent DHAA reduction is demonstrated. Ascorbic acid production was assayed directly and by way of the HMPS pathway. Results indicate that AA production from DHAA via the HMPS pathway occurs, and is significantly decreased in diabetic liver. Glucose-6-phosphate dehydrogenase (G6PDH) activity is shown to be decreased in diabetic liver. Since G6PDH is essential in providing NADPH for the reduction of glutathione required for subsequent DHAA reduction, its decreased activity is consistent with altered levels of AA and DHAA observed in diabetic tissues.
Neighbour node 2: Title: Vitamin C: an aldose reductase inhibitor that normalizes erythrocyte sorbitol in insulin-dependent diabetes mellitus.
Abstract: OBJECTIVE: Diabetic hyperglycemia promotes sorbitol production from glucose via aldose reductase. Since the intracellular accumulation of sorbitol, or its sequelae, are postulated to contribute to the progression of chronic diabetic complications, aldose reductase inhibitors (ARI) offer therapeutic promise. Others have shown that vitamin C at pharmacologic doses decreases erythrocyte (RBC) sorbitol. We examined whether smaller, physiologic doses of vitamin C were also effective in individuals with insulin-dependent diabetes mellitus (IDDM) and whether vitamin C was an ARI in vitro. METHODS: Vitamin C supplements (100 or 600 mg) were taken daily for 58 days by young adults with IDDM and nondiabetic adults in an otherwise free-living design. Diabetic control was monitored by fasting plasma glucose, glycosylated hemoglobin, and glycosuria and was moderate to poor throughout the study. RBC sorbitol was measured at baseline and again at 30 and 58 days. Three-day dietary records and 24-hour urine collections were performed for each sampling day. RESULTS: RBC sorbitol levels were significantly elevated in IDDMs, on average doubled, despite their more than adequate dietary intakes of vitamin C and normal plasma concentrations. Vitamin C supplementation at either dose normalized the RBC sorbitol in IDDMs within 30 days. This correction of sorbitol accumulation was independent of changes in diabetic control. Furthermore, our in vitro studies show that ascorbic acid inhibited aldose reductase activity. CONCLUSIONS: Vitamin C supplementation is effective in reducing sorbitol accumulation in the erythrocytes of diabetics. Given its tissue distribution and low toxicity, we suggest a superiority for vitamin C over pharmaceutic ARIs.
Neighbour node 3: Title: Abnormal insulin secretion and glucose metabolism in pancreatic islets from the spontaneously diabetic GK rat.
Abstract: Insulin secretion and islet glucose metabolism were compared in pancreatic islets isolated from GK/Wistar (GK) rats with spontaneous Type 2 (non-insulin-dependent) diabetes mellitus and control Wistar rats. Islet insulin content was 24.5 +/- 3.1 microU/ng islet DNA in GK rats and 28.8 +/- 2.5 microU/ng islet DNA in control rats, with a mean (+/- SEM) islet DNA content of 17.3 +/- 1.7 and 26.5 +/- 3.4 ng (p < 0.05), respectively. Basal insulin secretion at 3.3 mmol/l glucose was 0.19 +/- 0.03 microU.ng islet DNA-1.h-1 in GK rat islets and 0.04 +/- 0.07 in control islets. Glucose (16.7 mmol/l) stimulated insulin release in GK rat islets only two-fold while in control islets five-fold. Glucose utilization at 16.7 mmol/l glucose, as measured by the formation of 3H2O from [5-3H]glucose, was 2.4 times higher in GK rat islets (3.1 +/- 0.7 pmol.ng islet DNA-1.h-1) than in control islets (1.3 +/- 0.1 pmol.ng islet DNA-1.h-1; p < 0.05). In contrast, glucose oxidation, estimated as the production of 14CO2 from [U-14C]glucose, was similar in both types of islets and corresponded to 15 +/- 2 and 30 +/- 3% (p < 0.001) of total glucose phosphorylated in GK and control islets, respectively. Glucose cycling, i.e. the rate of dephosphorylation of the total amount of glucose phosphorylated, (determined as production of labelled glucose from islets incubated with 3H2O) was 16.4 +/- 3.4% in GK rat and 6.4 +/- 1.0% in control islets, respectively (p < 0.01). We conclude that insulin secretion stimulated by glucose is markedly impaired in GK rat islets.(ABSTRACT TRUNCATED AT 250 WORDS)
Neighbour node 4: Title: Hyperglycemia causes oxidative stress in pancreatic beta-cells of GK rats, a model of type 2 diabetes.
Abstract: Reactive oxygen species are involved in a diversity of biological phenomena such as inflammation, carcinogenesis, aging, and atherosclerosis. We and other investigators have shown that the level of 8-hydroxy-2'-deoxyguanosine (8-OHdG), a marker for oxidative stress, is increased in either the urine or the mononuclear cells of the blood of type 2 diabetic patients. However, the association between type 2 diabetes and oxidative stress in the pancreatic beta-cells has not been previously described. We measured the levels of 8-OHdG and 4-hydroxy-2-nonenal (HNE)-modified proteins in the pancreatic beta-cells of GK rats, a model of nonobese type 2 diabetes. Quantitative immunohistochemical analyses with specific antibodies revealed higher levels of 8-OHdG and HNE-modified proteins in the pancreatic beta-cells of GK rats than in the control Wistar rats, with the levels increasing proportionally with age and fibrosis of the pancreatic islets. We further investigated whether the levels of 8-OHdG and HNE-modified proteins would be modified in the pancreatic beta-cells of GK rats fed with 30% sucrose solution or 50 ppm of voglibose (alpha-glucosidase inhibitor). In the GK rats, the levels of 8-OHdG and HNE-modified proteins, as well as islet fibrosis, were increased by sucrose treatment but reduced by voglibose treatment. These results indicate that the pancreatic beta-cells of GK rats are oxidatively stressed, and that chronic hyperglycemia might be responsible for the oxidative stress observed in the pancreatic beta-cells.
Neighbour node 5: Title: Change in tissue concentrations of lipid hydroperoxides, vitamin C and vitamin E in rats with streptozotocin-induced diabetes.
Abstract: The tissue concentration of lipid hydroperoxides, which was determined by a specific method involving chemical derivatization and HPLC, increased significantly in the heart, liver, kidney and muscle of diabetic rats 8 weeks after the intraperitoneal injection of streptozotocin compared with that of the control group. These results demonstrate that an enhanced oxidative stress is caused in these tissues by diabetes. Vitamin C concentrations of the brain, heart, lung, liver, kidney and plasma of the diabetic rats decreased significantly after 8 weeks compared with those of the control group. Vitamin E concentrations of the brain, heart, liver, kidney, muscle and plasma of the diabetic rats increased significantly after 4 weeks compared with the control group. After 8 weeks, an elevation in vitamin E concentration was observed in the heart, liver, muscle and plasma of the diabetic rats.
Neighbour node 6: Title: Oxidative damage to DNA in diabetes mellitus.
Abstract: BACKGROUND: Increased production of reactive oxygen species (ROS) and lipid peroxidation may contribute to vascular complications in diabetes. To test whether DNA is also oxidatively damaged in diabetes, we measured 8-hydroxydeoxyguanosine (8-OHdG), an indicator of oxidative damage of DNA, in mononuclear cells. METHODS: For this laboratory-based study, 12 patients with insulin-dependent diabetes mellitus (IDDM) and 15 patients with non-insulin-dependent diabetes mellitus (NIDDM) were matched by age with ten healthy volunteers each. DNA was extracted from mononuclear cells from whole blood. 8-OHdG was assayed by high-pressure liquid chromatography, and ROS were assayed by chemiluminescence. FINDINGS: IDDM and NIDDM patients had significantly higher median concentrations (p < 0.001, U test) of 8-OHdG in their mononuclear cells than their corresponding controls (in fmol/micrograms DNA): 128.2 (interquartile range 96.0-223.2) and 95.2 (64.0-133.5) vs 28.2 (21.7-43.4) and 21.9 (18.0-24.4), respectively. ROS generation by mononuclear cells was also significantly greater (p < 0.01) in diabetic patients than in their controls (in mV): 238.0 (107.0-243.0) and 101.3 (66.0-134.0) vs 69.5 (49.8-91.9) and 56.0 (38.8-62.5), respectively. INTERPRETATION: IDDM and NIDDM patients showed greater oxidative damage to DNA, with increased generation of ROS, than controls. Such changes might contribute to accelerated aging and atherogenesis in diabetes and to the microangiopathic complications of the disease.
Neighbour node 7: Title: Vitamin C improves endothelium-dependent vasodilation in patients with non-insulin-dependent diabetes mellitus.
Abstract: Endothelium-dependent vasodilation is impaired in humans with diabetes mellitus. Inactivation of endothelium-derived nitric oxide by oxygen-derived free radicals contributes to abnormal vascular reactivity in experimental models of diabetes. To determine whether this observation is relevant to humans, we tested the hypothesis that the antioxidant, vitamin C, could improve endothelium-dependent vasodilation in forearm resistance vessels of patients with non-insulin-dependent diabetes mellitus. We studied 10 diabetic subjects and 10 age-matched, nondiabetic control subjects. Forearm blood flow was determined by venous occlusion plethysmography. Endothelium-dependent vasodilation was assessed by intraarterial infusion of methacholine (0.3-10 micrograms/min). Endothelium-independent vasodilation was measured by intraarterial infusion of nitroprusside (0.3-10 micrograms/min) and verapamil (10-300 micrograms/min). Forearm blood flow dose-response curves were determined for each drug before and during concomitant intraarterial administration of vitamin C (24 mg/min). In diabetic subjects, endothelium-dependent vasodilation to methacholine was augmented by simultaneous infusion of vitamin C (P = 0.002); in contrast, endothelium-independent vasodilation to nitroprusside and to verapamil were not affected by concomitant infusion of vitamin C (P = 0.9 and P = 0.4, respectively). In nondiabetic subjects, vitamin C administration did not alter endothelium-dependent vasodilation (P = 0.8). We conclude that endothelial dysfunction in forearm resistance vessels of patients with non-insulin-dependent diabetes mellitus can be improved by administration of the antioxidant, vitamin C. These findings support the hypothesis that nitric oxide inactivation by oxygen-derived free radicals contributes to abnormal vascular reactivity in diabetes.
Neighbour node 8: Title: The roles of oxidative stress and antioxidant treatment in experimental diabetic neuropathy.
Abstract: Oxidative stress is present in the diabetic state. Our work has focused on its presence in peripheral nerves. Antioxidant enzymes are reduced in peripheral nerves and are further reduced in diabetic nerves. That lipid peroxidation will cause neuropathy is supported by evidence of the development of neuropathy de novo when normal nerves are rendered alpha-tocopherol deficient and by the augmentation of the conduction deficit in diabetic nerves subjected to this insult. Oxidative stress appears to be primarily due to the processes of nerve ischemia and hyperglycemia auto-oxidation. The indexes of oxidative stress include an increase in nerve, dorsal root, and sympathetic ganglia lipid hydroperoxides and conjugated dienes. The most reliable and sensitive index, however, is a reduction in reduced glutathione. Experimental diabetic neuropathy results in myelinopathy of dorsal roots and a vacuolar neuropathy of dorsal root ganglion. The vacuoles are mitochondrial; we posit that lipid peroxidation causes mitochondrial DNA mutations that increase reduced oxygen species, causing further damage to mitochondrial respiratory chain and function and resulting in a sensory neuropathy. Alpha-lipoic acid is a potent antioxidant that prevents lipid peroxidation in vitro and in vivo. We evaluated the efficacy of the drug in doses of 20, 50, and 100 mg/kg administered intraperitoneally in preventing the biochemical, electrophysiological, and nerve blood flow deficits in the peripheral nerves of experimental diabetic neuropathy. Alpha-lipoic acid dose- and time-dependently prevented the deficits in nerve conduction and nerve blood flow and biochemical abnormalities (reductions in reduced glutathione and lipid peroxidation). The nerve blood flow deficit was 50% (P < 0.001). Supplementation dose-dependently prevented the deficit; at the highest concentration, nerve blood flow was not different from that of control nerves. Digital nerve conduction underwent a dose-dependent improvement at 1 month (P < 0.05). By 3 months, all treated groups had lost their deficit. The antioxidant drug is potentially efficacious for human diabetic sensory neuropathy.
Neighbour node 9: Title: Beta-cell insensitivity to glucose in the GK rat, a spontaneous nonobese model for type II diabetes.
Abstract: In early 1988, a colony of GK rats was started in Paris with progenitors issued from F35 of the original colony reported by Goto and Kakizaki. When studied longitudinally up to 8 mo, GK rats showed as early as 1 mo (weaning) significantly higher basal plasma glucose (9 mM) and insulin levels (doubled), altered glucose tolerance (intravenous glucose), and a very poor insulin secretory response to glucose in vivo compared with Wistar controls. Males and females were similarly affected. Studies of in vitro pancreatic function were carried out with the isolated perfused pancreas preparation. Compared with nondiabetic Wistar rats, GK rats at 2 mo showed a significantly increased basal insulin release, no insulin response to 16 mM glucose, and hyperresponse to 19 mM arginine. Pancreatic insulin stores were only 50% of that in Wistar rats. Perfusion of GK pancreases for 50 or 90 min with buffer containing no glucose partially improved the insulin response to 16 mM glucose and markedly diminished the response to 19 mM arginine, whereas the responses by Wistar pancreases were unchanged. These findings are similar to those reported in rats with non-insulin-dependent diabetes induced by neonatal streptozocin administration and support the concept that chronic elevation in plasma glucose may be responsible, at least in part, for the beta-cell desensitization to glucose in this model. The GK rat seems to be a valuable model for identifying the etiology of beta-cell desensitization to glucose.
Neighbour node 10: Title: Disturbed handling of ascorbic acid in diabetic patients with and without microangiopathy during high dose ascorbate supplementation.
Abstract: Abnormalities of ascorbic acid metabolism have been reported in experimentally-induced diabetes and in diabetic patients. Ascorbate is a powerful antioxidant, a cofactor in collagen biosynthesis, and affects platelet activation, prostaglandin synthesis and the polyol pathway. This suggests a possible close interrelationship between ascorbic acid metabolism and pathways known to be influenced by diabetes. We determined serum ascorbic acid and its metabolite, dehydroascorbic acid, as indices of antioxidant status, and the ratio, dehydroascorbate/ascorbate, as an index of oxidative stress, in 20 matched diabetic patients with and 20 without microangiopathy and in 22 age-matched control subjects. Each study subject then took ascorbic acid, 1 g daily orally, for six weeks with repeat measurements taken at three and six weeks. At baseline, patients with microangiopathy had lower ascorbic acid concentrations than those without microangiopathy and control subjects (42.1 +/- 19.3 vs 55.6 +/- 20.0, p less than 0.01, vs 82.9 +/- 30.9 mumol/l, p less than 0.001) and elevated dehydroascorbate/ascorbate ratios (0.87 +/- 0.46 vs 0.61 +/- 0.26, p less than 0.01, vs 0.38 +/- 0.14, p less than 0.001). At three weeks, ascorbate concentrations rose in all groups (p less than 0.0001) and was maintained in control subjects (151.5 +/- 56.3 mumol/l), but fell in both diabetic groups by six weeks (p less than 0.01). Dehydroascorbate/ascorbate ratios fell in all groups at three weeks (p less than 0.0001) but rose again in the diabetic groups by six weeks (p less than 0.001) and was unchanged in the control subjects. Dehydroascorbate concentrations rose significantly from baseline in all groups by six weeks of ascorbic acid supplementation (p less than 0.05).(ABSTRACT TRUNCATED AT 250 WORDS)
| Diabetes Mellitus, Experimental | pubmed | train |
Classify the node 'Title: Interaction between HLA antigens and immunoglobulin (Gm) allotypes in susceptibility to type I diabetes.
Abstract: HLA-A,B,C and DR typing was performed on 108 Caucasian type I diabetic patients, 68 being Gm typed. The expected association with B8, B18, Bw62, DR3 and DR4 was observed as well as an excess of DR3/4 heterozygotes. DR2 was decreased in frequency. In the total patient group, no Gm association was observed but when the patients were subgrouped according to HLA type, HLA/Gm interactive effects were seen. An increase in Gm(1,3;5) was observed in DR3 positive, DR4 negative patients. This association occurred predominantly in females (compared with DR4 and DR3/4 patients of the same Gm phenotype who were predominantly male). Further genetic heterogeneity was identified within DR3/4 patients. Within this group, Bw62 was increased (strongly suggestive of Bw62-DR4 haplotypes) within B8, Gm heterozygotes compared with B8, Gm homozygotes. This finding can be interpreted as indicating a three-way interaction between genes on two HLA haplotypes and Gm-linked genes. These results reflect the genetic heterogeneity and complexity of insulin-dependent diabetes mellitus and explain in part the previous failure of simple genetic models to adequately explain inheritance patterns observed.' into one of the following categories:
Diabetes Mellitus, Experimental; Diabetes Mellitus Type 1; Diabetes Mellitus Type 2.
Refer to neighbour nodes:
Neighbour node 0: Title: A T cell receptor beta chain polymorphism is associated with patients developing insulin-dependent diabetes after the age of 20 years.
Abstract: We have studied the BglII polymorphism near the T cell receptor beta chain constant region (TcR-C beta) gene, HLA-DR genotypes and certain autoimmune features in 102 patients with type I (insulin-dependent) diabetes. There was a significant decrease in the frequency of the 1:1 genotype (P = 0.008) and an increase in the 1:2 genotype (P = 0.03) of the BglII TcR polymorphism in the group of patients who developed type-I diabetes after the age of 20 years. This group of patients also showed an increased incidence of autoantibodies (especially islet cell antibody), a family history of diabetes and the presence of other autoimmune diseases. The frequency of this polymorphism in patients who developed type I diabetes before the age of 20 years was similar to a non-diabetic group. These results suggest that there are two genetically distinct groups of patients with type I diabetes. HLA-DR3 and HLA-DR4 genotypes were also increased in the diabetic patients but no significant difference was observed between HLA-DR genotypes, the TcR-C beta genotypes, the age of diagnosis or with other autoimmune features. Patients developing type I (insulin-dependent) diabetes after the age of 20 years have an additional genetic susceptibility for diabetes associated with the TcR-C beta gene.
| Diabetes Mellitus Type 1 | pubmed | train |
Classify the node 'Title: Regeneration of pancreatic beta cells from intra-islet precursor cells in an experimental model of diabetes.
Abstract: We previously reported that new beta cells differentiated in pancreatic islets of mice in which diabetes was produced by injection of a high dose of the beta cell toxin streptozotocin (SZ), which produces hyperglycemia due to rapid and massive beta cell death. After SZ-mediated elimination of existing beta cells, a population of insulin containing cells reappeared in islets. However, the number of new beta cells was small, and the animals remained severely hyperglycemic. In the present study, we tested whether restoration of normoglycemia by exogenous administered insulin would enhance beta cell differentiation and maturation. We found that beta cell regeneration improved in SZ-treated mice that rapidly attained normoglycemia following insulin administration because the number of beta cells per islet reached nearly 40% of control values during the first week after restoration of normoglycemia. Two presumptive precursor cell types appeared in regenerating islets. One expressed the glucose transporter-2 (Glut-2), and the other cell type coexpressed insulin and somatostatin. These cells probably generated the monospecific cells containing insulin that repopulated the islets. We conclude that beta cell neogenesis occurred in adult islets and that the outcome of this process was regulated by the insulin-mediated normalization of circulating blood glucose levels.' into one of the following categories:
Diabetes Mellitus, Experimental; Diabetes Mellitus Type 1; Diabetes Mellitus Type 2.
Refer to neighbour nodes:
Neighbour node 0: Title: Growth inhibitors promote differentiation of insulin-producing tissue from embryonic stem cells.
Abstract: The use of embryonic stem cells for cell-replacement therapy in diseases like diabetes mellitus requires methods to control the development of multipotent cells. We report that treatment of mouse embryonic stem cells with inhibitors of phosphoinositide 3-kinase, an essential intracellular signaling regulator, produced cells that resembled pancreatic beta cells in several ways. These cells aggregated in structures similar, but not identical, to pancreatic islets of Langerhans, produced insulin at levels far greater than previously reported, and displayed glucose-dependent insulin release in vitro. Transplantation of these cell aggregates increased circulating insulin levels, reduced weight loss, improved glycemic control, and completely rescued survival in mice with diabetes mellitus. Graft removal resulted in rapid relapse and death. Graft analysis revealed that transplanted insulin-producing cells remained differentiated, enlarged, and did not form detectable tumors. These results provide evidence that embryonic stem cells can serve as the source of insulin-producing replacement tissue in an experimental model of diabetes mellitus. Strategies for producing cells that can replace islet functions described here can be adapted for similar uses with human cells.
| Diabetes Mellitus, Experimental | pubmed | train |
Classify the node 'Title: Transplantation of cultured pancreatic islets to BB rats.
Abstract: Pancreatic islets held in tissue culture before transplantation into artificially induced diabetics are not rejected. In animals and human identical twin transplants, the autoimmunity of naturally occurring diabetes may destroy islets, even if rejection is avoided. Therefore we studied whether autoimmune damage of islets can be avoided by pretransplant culture. Recipients were BB rats, which spontaneously developed diabetes. Donors were either Wistar Furth (WF) (major histocompatibility [MHC] identical to BB rats) or Lewis (MHC nonidentical to BB rats). Islets were inoculated into the portal vein either immediately after isolation or after 14 days in tissue culture (95% air, 5% CO2, 24 degrees C). Recipients of cultured islets received a single injection of 1 ml of antilymphocyte serum at the time of transplant. Recurrence of diabetes after transplantation of freshly isolated MHC incompatible Lewis islets occurred rapidly on the basis of rejection or autoimmune damage (or both). Precultured Lewis islets had prolonged or permanent survival. Freshly isolated MHC compatible WF islets were destroyed, and no improvement was seen with culture. We conclude that autoimmune destruction of transplanted islets can be avoided by tissue culture, as can rejection. This is important because this strategy is effective only if recipient and donor differ at the MHC locus. Islet donors may need to be selected on the basis of disparity of histocompatibility factors.' into one of the following categories:
Diabetes Mellitus, Experimental; Diabetes Mellitus Type 1; Diabetes Mellitus Type 2.
Refer to neighbour nodes:
Neighbour node 0: Title: Intrathymic islet transplantation in the spontaneously diabetic BB rat.
Abstract: Recently it was demonstrated that pancreatic islet allografts transplanted to the thymus of rats made diabetic chemically are not rejected and induce specific unresponsiveness to subsequent extrathymic transplants. The authors report that the thymus can also serve as an effective islet transplantation site in spontaneously diabetic BB rats, in which autoimmunity and rejection can destroy islets. Intrathymic Lewis islet grafts consistently reversed hyperglycemia for more than 120 days in these rats, and in three of four recipients the grafts promoted subsequent survival of intraportal islets. In contrast intraportal islet allografts in naive BB hosts all failed rapidly. The authors also show that the immunologically privileged status of the thymus cannot prevent rejection of islet allografts in Wistar Furth (WF) rats sensitized with donor strain skin and that suppressor cells are not likely to contribute to the unresponsive state because adoptive transfer of spleen cells from WF rats bearing established intrathymic Lewis islets fails to prolong islet allograft survival in secondary hosts.
Neighbour node 1: Title: The role of CD4+ and CD8+ T cells in the destruction of islet grafts by spontaneously diabetic mice.
Abstract: Spontaneous development of diabetes in the nonobese diabetic (NOD) mouse is mediated by an immunological process. In disease-transfer experiments, the activation of diabetes has been reported to require participation of both CD4+ and CD8+ T-cell subsets. These findings seem to indicate that the CD4+ cells are the helper cells for the activation of cytotoxic CD8+ cells that directly destroy islet beta cells in type I diabetes. In this report we challenge this interpretation because of two observations: (i) Destruction of syngeneic islet grafts by spontaneously diabetic NOD mice (disease recurrence) is CD4+ and not CD8+ T-cell dependent. (ii) Disease recurrence in islet tissue grafted to diabetic NOD mice is not restricted by islet major histocompatibility complex antigens. From these observations we propose that islet destruction depends on CD4+ effector T cells that are restricted by major histocompatibility complex antigens expressed on NOD antigen-presenting cells. Both of these findings argue against the CD8+ T cell as a mediator of direct islet damage. We postulate that islet damage in the NOD mouse results from a CD4+ T-cell-dependent inflammatory response.
| Diabetes Mellitus, Experimental | pubmed | train |
Classify the node 'Task-oriented Knowledge Acquisition and Reasoning for Design Support Systems: We present a framework for task-driven knowledge acquisition in the development of design support systems. Different types of knowledge that enter the knowledge base of a design support system are defined and illustrated both from a formal and from a knowledge acquisition vantage point. Special emphasis is placed on the task-structure, which is used to guide both acquisition and application of knowledge. Starting with knowledge for planning steps in design and augmenting this with problem-solving knowledge that supports design, a formal integrated model of knowledge for design is constructed. Based on the notion of knowledge acquisition as an incremental process we give an account of possibilities for problem solving depending on the knowledge that is at the disposal of the system. Finally, we depict how different kinds of knowledge interact in a design support system. This research was supported by the German Ministry for Research and Technology (BMFT) within the joint project FABEL under contract no. 413-4001-01IW104. Project partners in FABEL are German National Research Center of Computer Science (GMD), Sankt Augustin, BSR Consulting GmbH, München, Technical University of Dresden, HTWK Leipzig, University of Freiburg, and University of Karlsruhe.' into one of the following categories:
Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Refer to neighbour nodes:
Neighbour node 0: Structural similarity as guidance in case-based design: This paper presents a novel approach to determine structural similarity as guidance for adaptation in case-based reasoning (CBR). We advance structural similarity assessment which provides not only a single numeric value but the most specific structure two cases have in common, inclusive of the modification rules needed to obtain this structure from the two cases. Our approach treats retrieval, matching and adaptation as a group of dependent processes. This guarantees the retrieval and matching of not only similar but adaptable cases. Both together enlarge the overall problem solving performance of CBR and the explainability of case selection and adaptation considerably. Although our approach is more theoretical in nature and not restricted to a specific domain, we will give an example taken from the domain of industrial building design. Additionally, we will sketch two prototypical implementations of this approach.
Neighbour node 1: A model of similarity-based retrieval: We present a model of similarity-based retrieval which attempts to capture three psychological phenomena: (1) People are extremely good at judging similarity and analogy when given items to compare. (2) Superficial remindings are much more frequent than structural remindings. (3) People sometimes experience and use purely structural analogical remindings. Our model, called MAC/FAC (for "many are called but few are chosen") consists of two stages. The first stage (MAC) uses a computationally cheap, non-structural matcher to filter candidates from a pool of memory items. That is, we redundantly encode structured representations as content vectors, whose dot product yields an estimate of how well the corresponding structural representations will match. The second stage (FAC) uses SME to compute a true structural match between the probe and output from the first stage. MAC/FAC has been fully implemented, and we show that it is capable of modeling patterns of access found in psychological data.
Neighbour node 2: Conceptual Analogy: Conceptual analogy (CA) is an approach that integrates conceptualization, i.e., memory organization based on prior experiences and analogical reasoning (Borner 1994a). It was implemented prototypically and tested to support the design process in building engineering (Borner and Janetzko 1995, Borner 1995). There are a number of features that distinguish CA from standard approaches to CBR and AR. First of all, CA automatically extracts the knowledge needed to support design tasks (i.e., complex case representations, the relevance of object features and relations, and proper adaptations) from attribute-value representations of prior layouts. Secondly, it effectively determines the similarity of complex case representations in terms of adaptability. Thirdly, implemented and integrated into a highly interactive and adaptive system architecture it allows for incremental knowledge acquisition and user support. This paper surveys the basic assumptions and the psychological results which influenced the development of CA. It sketches the knowledge representation formalisms employed and characterizes the sub-processes needed to integrate memory organization and analogical reasoning.
Neighbour node 3: Towards formalizations in case-based reasoning for synthesis: This paper presents the formalization of a novel approach to structural similarity assessment and adaptation in case-based reasoning (CBR) for synthesis. The approach has been informally presented, exemplified, and implemented for the domain of industrial building design (Borner 1993). By relating the approach to existing theories we provide the foundation of its systematic evaluation and appropriate usage. Cases, the primary repository of knowledge, are represented structurally using an algebraic approach. Similarity relations provide structure preserving case modifications modulo the underlying algebra and an equational theory over the algebra (so available). This representation of a modeled universe of discourse enables theory-based inference of adapted solutions. The approach enables us to formally incorporate generalization, abstraction, geometrical transformation, and their combinations into CBR.
| Case Based | cora | train |
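Neighbour node 1 of the record above describes the MAC/FAC two-stage retrieval model: a cheap dot product over redundant content vectors filters candidates, and an expensive structural matcher (SME) picks among the survivors. Here is a minimal sketch of that control structure, assuming toy content vectors and a stand-in relation-overlap scorer where SME would sit.

```python
import numpy as np

def shared_relations(s1, s2):
    """Stand-in structural scorer: count shared relation labels.
    (In MAC/FAC proper, this stage is SME's structural matcher.)"""
    return len(set(s1) & set(s2))

def mac_fac_retrieve(probe_vec, probe_struct, memory, k=3):
    """Two-stage retrieval in the spirit of MAC/FAC.

    Stage 1 (MAC): rank memory items by content-vector dot product,
    a cheap non-structural estimate of match quality.
    Stage 2 (FAC): re-score the k survivors structurally; return the best.

    memory: list of (content_vector, relation_set) pairs.
    """
    mac_scores = [float(probe_vec @ vec) for vec, _ in memory]
    survivors = sorted(range(len(memory)),
                       key=lambda i: mac_scores[i], reverse=True)[:k]
    return max(survivors,
               key=lambda i: shared_relations(probe_struct, memory[i][1]))

# Toy usage: three stored cases; the probe is structurally closest to case 1.
mem = [(np.array([1.0, 0.0]), {"cause", "above"}),
       (np.array([0.9, 0.1]), {"cause", "supports"}),
       (np.array([0.0, 1.0]), {"left-of"})]
best = mac_fac_retrieve(np.array([1.0, 0.2]), {"cause", "supports"}, mem)
```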
Classify the node 'Sensitivities: an alternative to conditional probabilities for Bayesian belief networks: We show an alternative way of representing a Bayesian belief network by sensitivities and probability distributions. This representation is equivalent to the traditional representation by conditional probabilities, but makes dependencies between nodes apparent and intuitively easy to understand. We also propose a QR matrix representation for the sensitivities and/or conditional probabilities which is more efficient, in both memory requirements and computational speed, than the traditional representation for computer-based implementations of probabilistic inference. We use sensitivities to show that for a certain class of binary networks, the computation time for approximate probabilistic inference with any positive upper bound on the error of the result is independent of the size of the network. Finally, as an alternative to traditional algorithms that use conditional probabilities, we describe an exact algorithm for probabilistic inference that uses the QR-representation for sensitivities and updates probability distributions of nodes in a network according to messages from the neighbors.' into one of the following categories:
Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Refer to neighbour nodes:
Neighbour node 0: Computational complexity reduction for BN2O networks using similarity of states: Although probabilistic inference in a general Bayesian belief network is an NP-hard problem, inference computation time can be reduced in most practical cases by exploiting domain knowledge and by making appropriate approximations in the knowledge representation. In this paper we introduce the property of similarity of states and a new method for approximate knowledge representation which is based on this property. We define two or more states of a node to be similar when the likelihood ratio of their probabilities does not depend on the instantiations of the other nodes in the network. We show that the similarity of states exposes redundancies in the joint probability distribution which can be exploited to reduce the computational complexity of probabilistic inference in networks with multiple similar states. For example, we show that a BN2O network, a two-layer network often used in diagnostic problems, can be reduced to a very close network with multiple similar states. Probabilistic inference in the new network can be done in only polynomial time with respect to the size of the network, and the results for queries of practical importance are very close to the results that can be obtained in exponential time with the original network. The error introduced by our reduction converges to zero faster than exponentially with respect to the degree of the polynomial describing the resulting computational complexity.
Neighbour node 1: Efficient Inference in Bayes Nets as a Combinatorial Optimization Problem, : A number of exact algorithms have been developed to perform probabilistic inference in Bayesian belief networks in recent years. The techniques used in these algorithms are closely related to network structures and some of them are not easy to understand and implement. In this paper, we consider the problem from the combinatorial optimization point of view and state that efficient probabilistic inference in a belief network is a problem of finding an optimal factoring given a set of probability distributions. From this viewpoint, previously developed algorithms can be seen as alternate factoring strategies. In this paper, we define a combinatorial optimization problem, the optimal factoring problem, and discuss application of this problem in belief networks. We show that optimal factoring provides insight into the key elements of efficient probabilistic inference, and demonstrate simple, easily implemented algorithms with excellent performance.
| Probabilistic Methods | cora | train |
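Both Bayesian-network records above contrast their proposals with the traditional representation by conditional probabilities. As background for that baseline (the sensitivity/QR representation itself is not reconstructed here), the following shows exact inference by enumeration on a tiny two-node network; all probabilities are made-up illustrative values.

```python
# Tiny network A -> B in the traditional CPT form the abstracts contrast
# their representations against. All numbers are illustrative assumptions.
p_a = {True: 0.3, False: 0.7}                    # P(A)
p_b_given_a = {True: {True: 0.9, False: 0.1},    # P(B | A=true)
               False: {True: 0.2, False: 0.8}}   # P(B | A=false)

def posterior_a_given_b(b_obs):
    """Exact inference by enumeration: P(A | B = b_obs)."""
    unnorm = {a: p_a[a] * p_b_given_a[a][b_obs] for a in (True, False)}
    z = sum(unnorm.values())                     # P(B = b_obs)
    return {a: v / z for a, v in unnorm.items()}

print(posterior_a_given_b(True))   # P(A=true | B=true) is roughly 0.659
```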
Classify the node 'Exception Handling in Agent Systems: A critical challenge to creating effective agent-based systems is allowing them to operate effectively when the operating environment is complex, dynamic, and error-prone. In this paper we will review the limitations of current "agent-local" approaches to exception handling in agent systems, and propose an alternative approach based on a shared exception handling service that is "plugged", with little or no customization, into existing agent systems. This service can be viewed as a kind of "coordination doctor"; it knows about the different ways multi-agent systems can get "sick", actively looks system-wide for symptoms of such "illnesses", and prescribes specific interventions instantiated for this particular context from a body of general treatment procedures. Agents need only implement their normative behavior plus a minimal set of interfaces. We claim that this approach offers simplified agent development as well as more effective and easier to modify exception handling behavior. T...' into one of the following categories:
Agents; ML (Machine Learning); IR (Information Retrieval); DB (Databases); HCI (Human-Computer Interaction); AI (Artificial Intelligence).
Refer to neighbour nodes:
Neighbour node 0: The Adaptive Agent Architecture: Achieving Fault-Tolerance Using Persistent Broker Teams Brokers are used in many multi-agent systems for locating agents, for routing and sharing information, for managing the system, and for legal purposes, as independent third parties. However, these multi-agent systems can be incapacitated and rendered non-functional when the brokers become inaccessible due to failures such as machine crashes, network breakdowns, and process failures that can occur in any distributed software system. We propose that the theory of teamwork can be used to create robust brokered architectures that can recover from broker failures, and we present the Adaptive Agent Architecture (AAA) to show the feasibility of this approach. The AAA brokers form a team with a joint commitment to serve any agent that registers with the broker team as long as the agent remains registered with the team. This commitment enables the brokers to substitute for each other when needed. A multiagent system based on the AAA can continue to work despite broker failures as long...
Neighbour node 1: Supporting Conflict Management in Cooperative Design Teams The design of complex artifacts has increasingly become a cooperative process, with the detection and resolution of conflicts between design agents playing a central role. Effective tools for supporting the conflict management process, however, are still lacking. This paper describes a system called DCSS (the Design Collaboration Support System) developed to meet this challenge in design teams with both human and machine-based agents. Every design agent is provided with an "assistant" that provides domain-independent conflict detection, classification and resolution expertise. The design agents provide the domainspecific expertise needed to instantiate this general expertise, including the rationale for their actions, as a part of their design activities. DCSS has been used successfully to support the cooperative design of Local Area Networks by human and machine-based designers. This paper includes a description of DCSS's underlying model and implementation, examples of its operation...
Neighbour node 2: Supporting Conflict Resolution in Cooperative Design Systems Complex modern-day artifacts are designed cooperatively by groups of experts, each with their own areas of expertise. The interaction of such experts inevitably involves conflict. This paper presents an implemented computational model, based on studies of human cooperative design, for supporting the resolution of such conflicts. This model is based centrally on the insights that general conflict resolution expertise exists separately from domain-level design expertise, and that this expertise can be instantiated in the context of particular conflicts into specific advice for resolving those conflicts. Conflict resolution expertise consists of a taxonomy of design conflict classes in addition to associated general advice suitable for resolving conflicts in these classes. The abstract nature of conflict resolution expertise makes it applicable to a wide variety of design domains. This paper describes this conflict resolution model and provides examples of its operation from an implemente...
Neighbour node 3: The Adaptive Agent Architecture: Achieving Fault-Tolerance Using Persistent Broker Teams Brokers are used in many multi-agent systems for locating agents, for routing and sharing information, for managing the system, and for legal purposes, as independent third parties. However, these multi-agent systems can be incapacitated and rendered non-functional when the brokers become inaccessible due to failures such as machine crashes, network breakdowns, and process failures that can occur in any distributed software system. We propose that the theory of teamwork can be used to create robust brokered architectures that can recover from broker failures, and we present the Adaptive Agent Architecture (AAA) to show the feasibility of this approach. The AAA brokers form a team with a joint commitment to serve any agent that registers with the broker team as long as the agent remains registered with the team. This commitment enables the brokers to substitute for each other when needed. A multiagent system based on the AAA can continue to work despite broker failures as long...
| Agents | citeseer | train |
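Every row in this preview shares one flat schema: a `problem` prompt (target node text plus neighbour summaries), a gold `solution` label, the source citation graph in `dataset`, and the `split`. As a minimal sketch of working with rows shaped like these, the snippet below loads and inspects one; the repository id is hypothetical and the column names are assumptions read off this preview, not a confirmed API.

```python
# Minimal sketch, assuming the preview columns are published as-is via the
# Hugging Face `datasets` library. The repo id below is hypothetical.
from datasets import load_dataset

ds = load_dataset("Allen-UQ/node-classification", split="train")  # hypothetical id

row = ds[0]
print(row["problem"][:200])  # prompt text: target node plus neighbour nodes
print(row["solution"])       # gold category, e.g. "Agents"
print(row["dataset"])        # source graph: citeseer, cora, or pubmed
```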
Classify the node 'Unsupervised Learning from Dyadic Data Dyadic data refers to a domain with two finite sets of objects in which observations are made for dyads, i.e., pairs with one element from either set. This includes event co-occurrences, histogram data, and single stimulus preference data as special cases. Dyadic data arises naturally in many applications ranging from computational linguistics and information retrieval to preference analysis and computer vision. In this paper, we present a systematic, domain-independent framework for unsupervised learning from dyadic data by statistical mixture models. Our approach covers different models with flat and hierarchical latent class structures and unifies probabilistic modeling and structure discovery. Mixture models provide both a parsimonious yet flexible parameterization of probability distributions with good generalization performance on sparse data, and structural information about data-inherent grouping structure. We propose an annealed version of the standard Expectation Maximization algorithm for model fitting which is empirically evaluated on a variety of data sets from different domains.' into one of the following categories:
Agents; ML (Machine Learning); IR (Information Retrieval); DB (Databases); HCI (Human-Computer Interaction); AI (Artificial Intelligence).
Refer to neighbour nodes:
Neighbour node 0: Learning to Order Things There are many applications in which it is desirable to order rather than classify instances. Here we consider the problem of learning how to order, given feedback in the form of preference judgments, i.e., statements to the effect that one instance should be ranked ahead of another. We outline a two-stage approach in which one first learns by conventional means a preference function PREF(u, v), which indicates whether it is advisable to rank u before v. New instances are then ordered so as to maximize agreements with the learned preference function. We show that the problem of finding the ordering that agrees best with a preference function is NP-complete, even under very restrictive assumptions. Nevertheless, we describe a simple greedy algorithm that is guaranteed to find a good approximation. We then discuss an on-line learning algorithm, based on the “Hedge” algorithm, for finding a good linear combination of ranking “experts.” We use the ordering algorithm combined with the on-line learning algorithm to find a combination of “search experts,” each of which is a domain-specific query expansion strategy for a WWW search engine, and present experimental results that demonstrate the merits of our approach.
Neighbour node 1: Kernel Expansions With Unlabeled Examples Modern classification applications necessitate supplementing the few available labeled examples with unlabeled examples to improve classification performance. We present a new tractable algorithm for exploiting unlabeled examples in discriminative classification. This is achieved essentially by expanding the input vectors into longer feature vectors via both labeled and unlabeled examples. The resulting classification method can be interpreted as a discriminative kernel density estimate and is readily trained via the EM algorithm, which in this case is both discriminative and achieves the optimal solution. We provide, in addition, a purely discriminative formulation of the estimation problem by appealing to the maximum entropy framework. We demonstrate that the proposed approach requires very few labeled examples for high classification accuracy. In many modern classification problems such as text categorization, very few labeled examples are available but a...
Neighbour node 2: An Introduction to Variational Methods for Graphical Models. This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models (Bayesian networks and Markov random fields). We present a number of examples of graphical models, including the QMR-DT database, the sigmoid belief network, the Boltzmann machine, and several variants of hidden Markov models, in which it is infeasible to run exact inference algorithms. We then introduce variational methods, which exploit laws of large numbers to transform the original graphical model into a simplified graphical model in which inference is efficient. Inference in the simplified model provides bounds on probabilities of interest in the original model. We describe a general framework for generating variational transformations based on convex duality. Finally we return to the examples and demonstrate how variational algorithms can be formulated in each case.
Neighbour node 3: Estimating Dependency Structure as a Hidden Variable This paper introduces a probability model, the mixture of trees, that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms that use EM and the Minimum Spanning Tree algorithm to find the ML and MAP mixture of trees for a variety of priors, including the Dirichlet and the MDL priors. A fundamental feature of a good model is the ability to uncover and exploit independencies in the data it is presented with. For many commonly used models, such as neural nets and belief networks, the dependency structure encoded in the model is fixed, in the sense that it is not allowed to vary depending on actual values of the variables or with the current case. However, dependency structures that are conditional on values of variables abound in the world around us. Consider for example bitmaps of handwritten digits. They obviously contain many dependencies between pixels; however, the pattern of these dependencies will vary acr...
Neighbour node 4: Empirical Risk Approximation: An Induction Principle for Unsupervised Learning Unsupervised learning algorithms are designed to extract structure from data without reference to explicit teacher information. The quality of the learned structure is determined by a cost function which guides the learning process. This paper proposes Empirical Risk Approximation as a new induction principle for unsupervised learning. The complexity of the unsupervised learning models is automatically controlled by the two conditions for learning: (i) the empirical risk of learning should uniformly converge towards the expected risk; (ii) the hypothesis class should retain a minimal variety for consistent inference. The maximal entropy principle with deterministic annealing as an efficient search strategy arises from the Empirical Risk Approximation principle as the optimal inference strategy for large learning problems. Parameter selection of learnable data structures is demonstrated for the case of k-means clustering. What is unsupervised learning? Learning algorithms are desi...
Neighbour node 5: Probabilistic Latent Semantic Analysis Probabilistic Latent Semantic Analysis is a novel statistical technique for the analysis of two-mode and co-occurrence data, which has applications in information retrieval and filtering, natural language processing, machine learning from text, and in related areas. Compared to standard Latent Semantic Analysis which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed method is based on a mixture decomposition derived from a latent class model. This results in a more principled approach which has a solid foundation in statistics. In order to avoid overfitting, we propose a widely applicable generalization of maximum likelihood model fitting by tempered EM. Our approach yields substantial and consistent improvements over Latent Semantic Analysis in a number of experiments.
Neighbour node 6: A Theory of Proximity Based Clustering: Structure Detection by Optimization In this paper, a systematic optimization approach for clustering proximity or similarity data is developed. Starting from fundamental invariance and robustness properties, a set of axioms is proposed and discussed to distinguish different cluster compactness and separation criteria. The approach covers the case of sparse proximity matrices, and is extended to nested partitionings for hierarchical data clustering. To solve the associated optimization problems, a rigorous mathematical framework for deterministic annealing and mean-field approximation is presented. Efficient optimization heuristics are derived in a canonical way, which also clarifies the relation to stochastic optimization by Gibbs sampling. Similarity-based clustering techniques have a broad range of possible applications in computer vision, pattern recognition, and data analysis. As a major practical application we present a novel approach to the problem of unsupervised texture segmentation, which relies on statistical...
| ML (Machine Learning) | citeseer | train |
Classify the node ' Metric Entropy and Minimax Risk in Classification, : We apply recent results on the minimax risk in density estimation to the related problem of pattern classification. The notion of loss we seek to minimize is an information theoretic measure of how well we can predict the classification of future examples, given the classification of previously seen examples. We give an asymptotic characterization of the minimax risk in terms of the metric entropy properties of the class of distributions that might be generating the examples. We then use these results to characterize the minimax risk in the special case of noisy two-valued classification problems in terms of the Assouad density and the' into one of the following categories:
Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Refer to neighbour nodes:
Neighbour node 0: Consistency of Posterior Distributions for Neural Networks: In this paper we show that the posterior distribution for feedforward neural networks is asymptotically consistent. This paper extends earlier results on universal approximation properties of neural networks to the Bayesian setting. The proof of consistency embeds the problem in a density estimation problem, then uses bounds on the bracketing entropy to show that the posterior is consistent over Hellinger neighborhoods. It then relates this result back to the regression setting. We show consistency in both the setting of the number of hidden nodes growing with the sample size, and in the case where the number of hidden nodes is treated as a parameter. Thus we provide a theoretical justification for using neural networks for nonparametric regression in a Bayesian framework.
Neighbour node 1: "A General Lower Bound on the Number of Examples Needed for Learning," : We prove a lower bound of Ω((1/ε) · ln(1/δ) + VCdim(C)/ε) on the number of random examples required for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and ε and δ are the accuracy and confidence parameters. This improves the previous best lower bound of Ω((1/ε) · ln(1/δ) + VCdim(C)), and comes close to the known general upper bound of O((1/ε) · ln(1/δ) + (VCdim(C)/ε) · ln(1/ε)) for consistent algorithms. We show that for many interesting concept classes, including kCNF and kDNF, our bound is actually tight to within a constant factor.
| Theory | cora | train |
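Each prompt embeds its own candidate label set after the phrase "into one of the following categories:". Below is a small, hedged sketch of recovering that list; the regex assumes the semicolon-separated, period-terminated format seen in these rows.

```python
import re

def extract_labels(problem: str) -> list[str]:
    """Pull the semicolon-separated category list out of one prompt.

    Assumes the list ends at the first period, as in the rows shown here.
    """
    m = re.search(r"one of the following categories:\s*(.+?)\.", problem, re.S)
    return [lab.strip() for lab in m.group(1).split(";")] if m else []

example = ("Classify the node '...' into one of the following categories: "
           "Rule Learning; Neural Networks; Theory.")
print(extract_labels(example))  # ['Rule Learning', 'Neural Networks', 'Theory']
```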
Classify the node ' Strongly typed genetic programming in evolving cooperation strategies. : ' into one of the following categories:
Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Refer to neighbour nodes:
Neighbour node 0: Strongly Typed Genetic Programming. : BBN Technical Report #7866: Abstract Genetic programming is a powerful method for automatically generating computer programs via the process of natural selection [Koza 92]. However, it has the limitation known as "closure", i.e. that all the variables, constants, arguments for functions, and values returned from functions must be of the same data type. To correct this deficiency, we introduce a variation of genetic programming called "strongly typed" genetic programming (STGP). In STGP, variables, constants, arguments, and returned values can be of any data type with the provision that the data type for each such value be specified beforehand. This allows the initialization process and the genetic operators to only generate parse trees such that the arguments of each function in each tree have the required types. An extension to STGP which makes it easier to use is the concept of generic functions, which are not true strongly typed functions but rather templates for classes of such functions. To illustrate STGP, we present three examples involving vector and matrix manipulation: (1) a basis representation problem (which can be constructed to be deceptive by any reasonable definition of "deception"), (2) the n-dimensional least-squares regression problem, and (3) preliminary work on the Kalman filter.
Neighbour node 1: Voting for Schemata: The schema theorem states that implicit parallel search is behind the power of the genetic algorithm. We contend that chromosomes can vote, proportionate to their fitness, for candidate schemata. We maintain a population of binary strings and ternary schemata. The string population not only works on solving its problem domain, but it supplies fitness for the schema population, which indirectly can solve the original problem.
Neighbour node 2: Augmenting collective adaptation with a simple process agent. : We have integrated the distributed search of genetic programming based systems with collective memory to form a collective adaptation search method. Such a system significantly improves search as problem complexity is increased. However, there is still considerable scope for improvement. In collective adaptation, search agents gather knowledge of their environment and deposit it in a central information repository. Process agents are then able to manipulate that focused knowledge, exploiting the exploration of the search agents. We examine the utility of increasing the capabilities of the centralized process agents.
Neighbour node 3: Entailment for specification refinement. : Specification refinement is part of formal program derivation, a method by which software is directly constructed from a provably correct specification. Because program derivation is an intensive manual exercise used for critical software systems, an automated approach would allow it to be viable for many other types of software systems. The goal of this research is to determine if genetic programming (GP) can be used to automate the specification refinement process. The initial steps toward this goal are to show that a well-known proof logic for program derivation can be encoded such that a GP-based system can infer sentences in the logic for proof of a particular sentence. The results are promising and indicate that GP can be useful in aiding program derivation.
Neighbour node 4: Type inheritance in strongly typed genetic programming. : This paper appears as chapter 18 of Kenneth E. Kinnear, Jr. and Peter J. Angeline, editors Advances in Genetic Programming 2, MIT Press, 1996. Abstract Genetic Programming (GP) is an automatic method for generating computer programs, which are stored as data structures and manipulated to evolve better programs. An extension restricting the search space is Strongly Typed Genetic Programming (STGP), which has, as a basic premise, the removal of closure by typing both the arguments and return values of functions, and by also typing the terminal set. A restriction of STGP is that there are only two levels of typing. We extend STGP by allowing a type hierarchy, which allows more than two levels of typing.
Neighbour node 5: Evolving Teamwork and Coordination with Genetic Programming: Some problems can be solved only by multi-agent teams. In using genetic programming to produce such teams, one faces several design decisions. First, there are questions of team diversity and of breeding strategy. In one commonly used scheme, teams consist of clones of single individuals; these individuals breed in the normal way and are cloned to form teams during fitness evaluation. In contrast, teams could also consist of distinct individuals. In this case one can either allow free interbreeding between members of different teams, or one can restrict interbreeding in various ways. A second design decision concerns the types of coordination-facilitating mechanisms provided to individual team members; these range from sensors of various sorts to complex communication systems. This paper examines three breeding strategies (clones, free, and restricted) and three coordination mechanisms (none, deictic sensing, and name-based sensing) for evolving teams of agents in the Serengeti world, a simple predator/prey environment. Among the conclusions are the fact that a simple form of restricted interbreeding outperforms free interbreeding in all teams with distinct individuals, and the fact that name-based sensing consistently outperforms deictic sensing.
Neighbour node 6: ABSTRACT: In general, the machine learning process can be accelerated through the use of heuristic knowledge about the problem solution. For example, monomorphic typed Genetic Programming (GP) uses type information to reduce the search space and improve performance. Unfortunately, monomorphic typed GP also loses the generality of untyped GP: the generated programs are only suitable for inputs with the specified type. Polymorphic typed GP improves over monomorphic and untyped GP by allowing the type information to be expressed in a more generic manner, and yet still imposes constraints on the search space. This paper describes a polymorphic GP system which can generate polymorphic programs: programs which take inputs of more than one type and produce outputs of more than one type. We also demonstrate its operation through the generation of the map polymorphic program.
Neighbour node 7: A genetic prototype learner. : Supervised classification problems have received considerable attention from the machine learning community. We propose a novel genetic algorithm based prototype learning system, PLEASE, for this class of problems. Given a set of prototypes for each of the possible classes, the class of an input instance is determined by the prototype nearest to this instance. We assume ordinal attributes and prototypes are represented as sets of feature-value pairs. A genetic algorithm is used to evolve the number of prototypes per class and their positions on the input space as determined by corresponding feature-value pairs. Comparisons with C4.5 on a set of artificial problems of controlled complexity demonstrate the effectiveness of the proposed system.
Neighbour node 8: Competitive environments evolve better solutions for complex tasks. :
Neighbour node 9: Clique detection via genetic programming. : Genetic programming is applied to the task of finding all of the cliques in a graph. Nodes in the graph are represented as tree structures, which are then manipulated to form candidate cliques. The intrinsic properties of clique detection complicate the design of a good fitness evaluation. We analyze those properties, and show that the clique detector is better at finding the maximum clique in the graph than at finding the set of all cliques.
Neighbour node 10: Evolving behavioral strategies in predators and prey. : The predator/prey domain is utilized to conduct research in Distributed Artificial Intelligence. Genetic Programming is used to evolve behavioral strategies for the predator agents. To further the utility of the predator strategies, the prey population is allowed to evolve at the same time. The expected competitive learning cycle did not surface. This failing is investigated, and a simple prey algorithm surfaces, which is consistently able to evade capture from the predator algorithms.
Neighbour node 11: Modeling Distributed Search via Social Insects: Complex group behavior arises in social insects colonies as the integration of the actions of simple and redundant individual insects [Adler and Gordon, 1992, Oster and Wilson, 1978]. Furthermore, the colony can act as an information center to expedite foraging [Brown, 1989]. We apply these lessons from natural systems to model collective action and memory in a computational agent society. Collective action can expedite search in combinatorial optimization problems [Dorigo et al., 1996]. Collective memory can improve learning in multi-agent systems [Garland and Alterman, 1996]. Our collective adaptation integrates the simplicity of collective action with the pattern detection of collective memory to significantly improve both the gathering and processing of knowledge. As a test of the role of the society as an information center, we examine the ability of the society to distribute task allocation without any omnipotent centralized control.
| Genetic Algorithms | cora | train |
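Every `problem` wraps the target node's text together with zero or more "Neighbour node N:" summaries. The sketch below separates the two, relying only on that marker; generalizing the marker to every row is an assumption read off this preview.

```python
import re

def split_neighbours(problem: str) -> tuple[str, list[str]]:
    """Split a prompt into the target-node text and its neighbour summaries."""
    parts = re.split(r"Neighbour node \d+:", problem)
    return parts[0].strip(), [p.strip() for p in parts[1:]]

target, neighbours = split_neighbours(
    "Classify the node 'X' ... Refer to neighbour nodes: "
    "Neighbour node 0: first summary Neighbour node 1: second summary"
)
print(len(neighbours))  # 2
```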
Classify the node 'Title: Longitudinal patterns of glycemic control and diabetes care from diagnosis in a population-based cohort with type 1 diabetes. The Wisconsin Diabetes Registry.
Abstract: Glycosylated hemoglobin is an indicator of long-term glycemic control and a strong predictor of diabetic complications. This paper provides a comprehensive description of glycemic control (total glycosylated hemoglobin (GHb)) up to 4.5 years duration of diabetes by age, duration, and sex in a population-based cohort (n = 507) aged less than 20 years followed from diagnosis of Type 1 diabetes in Wisconsin during 1987-1994. Important aspects of demographics and diabetes care are described to allow comparison with other populations. Since large variations between laboratories are known to exist in the measurement of GHb, levels are also interpreted relative to the frequency of short-term complications. GHb increased after diagnosis, but leveled off after 2-3 years. Peak GHb values occurred in the age group 12-15 years. The within-individual standard deviation in GHb between tests, adjusted for age and duration, was 1.6%. The mean GHb at last testing was 11.3%, with a standard deviation across individuals of 2.9%. The majority (74%) of individuals saw a diabetes specialist at least once. The mean number of insulin injections per day increased from 2.2 to 2.5 across the 4.5-year duration, and the insulin dose increased from 0.6 to 0.9 units per day per kg body weight. Despite the quite satisfactory level of care, 38% of subjects had GHb levels associated with significant short-term complications.
Diabetes Mellitus, Experimental; Diabetes Mellitus Type 1; Diabetes Mellitus Type 2.
Refer to neighbour nodes:
Neighbour node 0: Title: Hospital admission patterns subsequent to diagnosis of type 1 diabetes in children : a systematic review.
Abstract: BACKGROUND: Patients with type 1 diabetes are known to have a higher hospital admission rate than the underlying population and may also be admitted for procedures that would normally be carried out on a day surgery basis for non-diabetics. Emergency admission rates have sometimes been used as indicators of quality of diabetes care. In preparation for a study of hospital admissions, a systematic review was carried out on hospital admissions for children diagnosed with type 1 diabetes, whilst under the age of 15. The main thrust of this review was to ascertain where there were gaps in the literature for studies investigating post-diagnosis hospitalisations, rather than to try to draw conclusions from the disparate data sets. METHODS: A systematic search of the electronic databases PubMed, Cochrane Library, MEDLINE and EMBASE was conducted for the period 1986 to 2006, to identify publications relating to hospital admissions subsequent to the diagnosis of type 1 diabetes under the age of 15. RESULTS: Thirty-two publications met all inclusion criteria, 16 in Northern America, 11 in Europe and 5 in Australasia. Most of the studies selected were focussed on diabetic ketoacidosis (DKA) or diabetes-related hospital admissions and only four studies included data on all admissions. Admission rates with DKA as primary diagnosis varied widely from 0.01 to 0.18 per patient-year, as did those for other diabetes-related co-morbidity, ranging from 0.05 to 0.38 per patient-year, making it difficult to interpret data from different study designs. However, people with Type 1 diabetes are three times more likely to be hospitalised than the non-diabetic populations and stay in hospital twice as long. CONCLUSION: Few studies report on all admissions to hospital in patients diagnosed with type 1 diabetes whilst under the age of 15 years. Health care costs for type 1 patients are higher than those for the general population and information on associated patterns of hospitalisation might help to target interventions to reduce the cost of hospital admissions.
Neighbour node 1: Title: The association of increased total glycosylated hemoglobin levels with delayed age at menarche in young women with type 1 diabetes.
Abstract: CONTEXT: Delayed menarche is associated with subsequent reproductive and skeletal complications. Previous research has found delayed growth and pubertal maturation with type 1 diabetes and poor glycemic control. The effect of diabetes management on menarche is important to clarify, because tighter control might prevent these complications. OBJECTIVE: The objective of this study was to investigate age at menarche in young women with type 1 diabetes and examine the effect of diabetes management [e.g. total glycosylated hemoglobin (GHb) level, number of blood glucose checks, insulin therapy intensity, and insulin dose] on age at menarche in those diagnosed before menarche. DESIGN: The Wisconsin Diabetes Registry Project is a follow-up study of a type 1 diabetes population-based incident cohort initially enrolled between 1987 and 1992. SETTING: This study was performed in 28 counties in south-central Wisconsin. PATIENTS OR OTHER PARTICIPANTS: The study participants were recruited through referrals, self-report, and hospital/clinic ascertainment. Individuals with newly diagnosed type 1 diabetes, less than 30 yr old, were invited to participate. Of 288 young women enrolled, 188 reported menarche by 2002; 105 were diagnosed before menarche. INTERVENTIONS: There were no interventions. MAIN OUTCOME MEASURE: The main outcome measure was age at menarche. RESULTS: Mean age at menarche was 12.78 yr, compared with 12.54 yr in the United States (P = 0.01). Ages at menarche and diagnosis were not associated. For those diagnosed before menarche, age at menarche was delayed 1.3 months with each 1% increase in mean total GHb level in the 3 yr before menarche. CONCLUSIONS: Age at menarche was moderately delayed in young women with type 1 diabetes. Delayed menarche could potentially be minimized with improved GHb levels.
| Diabetes Mellitus Type 1 | pubmed | train |
Classify the node 'Title: Onset of diabetes in Zucker diabetic fatty (ZDF) rats leads to improved recovery of function after ischemia in the isolated perfused heart.
Abstract: The aim of this study was to determine whether the transition from insulin resistance to hyperglycemia in a model of type 2 diabetes leads to intrinsic changes in the myocardium that increase the sensitivity to ischemic injury. Hearts from 6-, 12-, and 24-wk-old lean (Control) and obese Zucker diabetic fatty (ZDF) rats were isolated, perfused, and subjected to 30 min of low-flow ischemia (LFI) and 60 min of reperfusion. At 6 wk, ZDF animals were insulin resistant but not hyperglycemic. By 12 wk, the ZDF group was hyperglycemic and became progressively worse by 24 wk. In spontaneously beating hearts rate-pressure product (RPP) was depressed in the ZDF groups compared with age-matched Controls, primarily due to lower heart rate. Pacing significantly increased RPP in all ZDF groups; however, this was accompanied by a significant decrease in left ventricular developed pressure. There was also greater contracture during LFI in the ZDF groups compared with the Control group; surprisingly, however, functional recovery upon reperfusion was significantly higher in the diabetic 12- and 24-wk ZDF groups compared with age-matched Control groups and the 6-wk ZDF group. This improvement in recovery in the ZDF diabetic groups was independent of substrate availability, severity of ischemia, and duration of diabetes. These data demonstrate that, although the development of type 2 diabetes leads to progressive contractile and metabolic abnormalities during normoxia and LFI, it was not associated with increased susceptibility to ischemic injury.' into one of the following categories:
Diabetes Mellitus, Experimental; Diabetes Mellitus Type 1; Diabetes Mellitus Type 2.
Refer to neighbour nodes:
Neighbour node 0: Title: Polyol pathway and modulation of ischemia-reperfusion injury in Type 2 diabetic BBZ rat hearts.
Abstract: We investigated the role of polyol pathway enzymes aldose reductase (AR) and sorbitol dehydrogenase (SDH) in mediating injury due to ischemia-reperfusion (IR) in Type 2 diabetic BBZ rat hearts. Specifically, we investigated, (a) changes in glucose flux via cardiac AR and SDH as a function of diabetes duration, (b) ischemic injury and function after IR, (c) the effect of inhibition of AR or SDH on ischemic injury and function. Hearts isolated from BBZ rats, after 12 weeks or 48 weeks diabetes duration, and their non-diabetic littermates, were subjected to IR protocol. Myocardial function, substrate flux via AR and SDH, and tissue lactate:pyruvate (L/P) ratio (a measure of cytosolic NADH/NAD+), and lactate dehydrogenase (LDH) release (a marker of IR injury) were measured. Zopolrestat, and CP-470,711 were used to inhibit AR and SDH, respectively. Myocardial sorbitol and fructose content, and associated changes in L/P ratios were significantly higher in BBZ rats compared to non-diabetics, and increased with disease duration. Induction of IR resulted in increased ischemic injury, reduced ATP levels, increases in L/P ratio, and poor cardiac function in BBZ rat hearts, while inhibition of AR or SDH attenuated these changes and protected hearts from IR injury. These data indicate that AR and SDH are key modulators of myocardial IR injury in BBZ rat hearts and that inhibition of polyol pathway could in principle be used as a therapeutic adjunct for protection of ischemic myocardium in Type 2 diabetic patients.
| Diabetes Mellitus Type 2 | pubmed | train |
Classify the node ' A counter example to the stronger version of the binary tree hypothesis, : The paper describes a counter example to the hypothesis which states that a greedy decision tree generation algorithm that constructs binary decision trees and branches on a single attribute-value pair rather than on all values of the selected attribute will always lead to a tree with fewer leaves for any given training set. We show also that RELIEFF is less myopic than other impurity functions and that it enables the induction algorithm that generates binary decision trees to reconstruct optimal (the smallest) decision trees in more cases.' into one of the following categories:
Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Refer to neighbour nodes:
Neighbour node 0: Estimating attributes: Analysis and extension of relief. : In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies among them. Kira and Rendell (1992a,b) developed an algorithm called RELIEF, which was shown to be very efficient in estimating attributes. Original RELIEF can deal with discrete and continuous attributes and is limited to only two-class problems. In this paper RELIEF is analysed and extended to deal with noisy, incomplete, and multi-class data sets. The extensions are verified on various artificial and one well known real-world problem.
| Rule Learning | cora | train |
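Because `solution` is a plain string drawn from a closed label set, evaluation reduces to exact string matching. A minimal scoring sketch follows; `predict` stands in for whatever model is under test and is not part of the dataset.

```python
def accuracy(rows, predict) -> float:
    """Fraction of rows where the model's label equals the gold solution."""
    hits = sum(1 for r in rows if predict(r["problem"]).strip() == r["solution"])
    return hits / max(len(rows), 1)

# Usage with a trivial (and deliberately weak) constant baseline:
rows = [{"problem": "Classify ...", "solution": "Rule Learning"}]
print(accuracy(rows, lambda p: "Rule Learning"))  # 1.0 on this stand-in row
```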
Classify the node 'Title: Analysis of the type 2 diabetes-associated single nucleotide polymorphisms in the genes IRS1, KCNJ11, and PPARG2 in type 1 diabetes.
Abstract: It has been proposed that type 1 and 2 diabetes might share common pathophysiological pathways and, to some extent, genetic background. However, to date there has been no convincing data to establish a molecular genetic link between them. We have genotyped three single nucleotide polymorphisms associated with type 2 diabetes in a large type 1 diabetic family collection of European descent: Gly972Arg in the insulin receptor substrate 1 (IRS1) gene, Glu23Lys in the potassium inwardly-rectifying channel gene (KCNJ11), and Pro12Ala in the peroxisome proliferative-activated receptor gamma2 gene (PPARG2). We were unable to confirm a recently published association of the IRS1 Gly972Arg variant with type 1 diabetes. Moreover, KCNJ11 Glu23Lys showed no association with type 1 diabetes (P > 0.05). However, the PPARG2 Pro12Ala variant showed evidence of association (RR 1.15, 95% CI 1.04-1.28, P = 0.008). Additional studies need to be conducted to confirm this result.' into one of the following categories:
Diabetes Mellitus, Experimental; Diabetes Mellitus Type 1; Diabetes Mellitus Type 2.
Refer to neighbour nodes:
Neighbour node 0: Title: No association of the IRS1 and PAX4 genes with type I diabetes.
Abstract: To reassess earlier suggested type I diabetes (T1D) associations of the insulin receptor substrate 1 (IRS1) and the paired domain 4 gene (PAX4) genes, the Type I Diabetes Genetics Consortium (T1DGC) evaluated single-nucleotide polymorphisms (SNPs) covering the two genomic regions. Sixteen SNPs were evaluated for IRS1 and 10 for PAX4. Both genes are biological candidate genes for T1D. Genotyping was performed in 2300 T1D families on both Illumina and Sequenom genotyping platforms. Data quality and concordance between the platforms were assessed for each SNP. Transmission disequilibrium testing did not show T1D association of SNPs in the two genes, nor did haplotype analysis. In conclusion, the earlier suggested associations of IRS1 and PAX4 to T1D were not supported, suggesting that they may have been false positive results. This highlights the importance of thorough quality control, selection of tagging SNPs, more than one genotyping platform in high throughput studies, and sufficient power to draw solid conclusions in genetic studies of human complex diseases.
Neighbour node 1: Title: No association of multiple type 2 diabetes loci with type 1 diabetes.
Abstract: AIMS/HYPOTHESIS: We used recently confirmed type 2 diabetes gene regions to investigate the genetic relationship between type 1 and type 2 diabetes, in an average of 7,606 type 1 diabetic individuals and 8,218 controls, providing >80% power to detect effects as small as an OR of 1.11 at a false-positive rate of 0.003. METHODS: The single nucleotide polymorphisms (SNPs) with the most convincing evidence of association in 12 type 2 diabetes-associated gene regions, PPARG, CDKAL1, HNF1B, WFS1, SLC30A8, CDKN2A-CDKN2B, IGF2BP2, KCNJ11, TCF7L2, FTO, HHEX-IDE and THADA, were analysed in type 1 diabetes cases and controls. PPARG and HHEX-IDE were additionally tested for association in 3,851 type 1 diabetes families. Tests for interaction with HLA class II genotypes, autoantibody status, sex, and age-at-diagnosis of type 1 diabetes were performed with all 12 gene regions. RESULTS: Only PPARG and HHEX-IDE showed any evidence of association with type 1 diabetes cases and controls (p = 0.004 and p = 0.003, respectively; p > 0.05 for other SNPs). The potential association of PPARG was supported by family analyses (p = 0.003; p (combined) = 1.0 x 10(-4)). No SNPs showed evidence of interaction with any covariate (p > 0.05). CONCLUSIONS/INTERPRETATION: We found no convincing genetic link between type 1 and type 2 diabetes. An association of PPARG (rs1801282/Pro12Ala) could be consistent with its known function in inflammation. Hence, our results reinforce evidence suggesting that type 1 diabetes is a disease of the immune system, rather than being due to inherited defects in beta cell function or regeneration or insulin resistance.
| Diabetes Mellitus Type 1 | pubmed | train |
Classify the node 'Analyzing Web Robots and Their Impact on Caching Understanding the nature and the characteristics of Web robots is an essential step to analyze their impact on caching. Using a multi-layer hierarchical workload model, this paper presents a characterization of the workload generated by autonomous agents and robots. This characterization focuses on the statistical properties of the arrival process and on the robot behavior graph model. A set of criteria is proposed for identifying robots in real logs. We then identify and characterize robots from real logs applying a multi-layered approach. Using a stack distance based analytical model for the interaction between robots and Web site caching, we assess the impact of robots' requests on Web caches. Our analyses point out that robots cause a significant increase in the miss ratio of a server-side cache. Robots have a referencing pattern that completely disrupts locality assumptions. These results indicate not only the need for a better understanding of the behavior of robots, but also the need of Web caching policies that treat robots' requests differently than human generated requests.' into one of the following categories:
Agents; ML (Machine Learning); IR (Information Retrieval); DB (Databases); HCI (Human-Computer Interaction); AI (Artificial Intelligence).
Refer to neighbour nodes:
Neighbour node 0: Aliasing on the World Wide Web: Prevalence and Performance Implications Aliasing occurs in Web transactions when requests containing different URLs elicit replies containing identical data payloads. Aliasing can cause cache misses, and there is reason to suspect that off-the-shelf Web authoring tools might increase aliasing on the Web. Existing research literature, however, says little about the prevalence of aliasing in user-initiated transactions or its impact on end-to-end performance in large multi-level cache hierarchies.
Neighbour node 1: WWW Robots and Search Engines The Web robots are programs that automatically traverse through networks. Currently, their most visible and familiar application is to provide indices for search engines, such as Lycos and Alta Vista, and semiautomatically maintained topic references or subject directories. In this article, we survey the state-of-art of the Web robots, and the search engines that utilize the results of robot searches. We also present notions about robot ethics and distributed Web robots.
| IR (Information Retrieval) | citeseer | train |
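The preview interleaves three source graphs (citeseer, cora, pubmed), each with its own label set, so per-graph label counts are a useful sanity check. A sketch over stand-in rows shaped like the ones above:

```python
from collections import Counter

rows = [  # stand-ins mirroring the preview rows, not real counts
    {"dataset": "citeseer", "solution": "Agents"},
    {"dataset": "citeseer", "solution": "IR (Information Retrieval)"},
    {"dataset": "cora", "solution": "Genetic Algorithms"},
    {"dataset": "pubmed", "solution": "Diabetes Mellitus Type 1"},
]
by_graph = Counter((r["dataset"], r["solution"]) for r in rows)
for (graph, label), n in sorted(by_graph.items()):
    print(f"{graph:10} {label:30} {n}")
```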
Classify the node 'Generating Code for Agent UML Sequence Diagrams For several years, a new category of description techniques has existed: Agent UML [10], which is based on UML. Agent UML is an extension of UML to tackle differences between agents and objects. Since this description technique is rather new, it does not supply tools or algorithms for protocol synthesis. Protocol synthesis corresponds to generating code for a formal description of a protocol. The derived program behaves like the formal description. This work presents first elements to help designers generating code for Agent UML sequence diagrams. The protocol synthesis is applied to the example of the English Auction protocol.' into one of the following categories:
Agents; ML (Machine Learning); IR (Information Retrieval); DB (Databases); HCI (Human-Computer Interaction); AI (Artificial Intelligence).
Refer to neighbour nodes:
Neighbour node 0: Model Checking Agent UML Protocol Diagrams Agents in multiagent systems use protocols in order to exchange messages and to coordinate together. Since agents and objects are not exactly the same, designers do not use directly communication protocols used in distributed systems but a new type called interaction protocols encompassing agent features such as richer messages and the ability to cooperate and to coordinate. Obviously, designers consider formal description techniques used for communication protocols. New graphical modeling languages based on UML appeared several years ago. Agent UML is certainly the best known. Until now, no validation is given for Agent UML. The aim of this paper is to present how to model check Agent UML protocol diagrams.
| Agents | citeseer | train |
Classify the node 'Towards Lifetime Maintenance of Case Base Indexes for Continual Case Based Reasoning Abstract. One of the key areas of case based reasoning is how to maintain the domain knowledge in the face of a changing environment. During case retrieval, a key process of CBR, feature-value pairs attached to the cases are used to rank the cases for the user. Different feature-value pairs may have different importance measures in this process, often represented by feature weights attached to the cases. How to maintain the weights so that they are up to date and current is one of the key factors determining the success of CBR. Our focus in this paper is on the lifetime maintenance of the feature-weights in a case base. Our task is to design a CBR maintenance system that not only learns a user's preference in the selection of cases but also tracks the user's evolving preferences in the cases. Our approach is to maintain feature weighting in a dynamic context through an integration with a learning system inspired by a back-propagation neural network. In this paper we explain the new system architecture and reasoning algorithms, contrasting our approach with the previous ones. The effectiveness of the system is demonstrated through experiments in a real world application domain.' into one of the following categories:
Agents; ML (Machine Learning); IR (Information Retrieval); DB (Databases); HCI (Human-Computer Interaction); AI (Artificial Intelligence).
Refer to neighbour nodes:
Neighbour node 0: Case-Based Learning Algorithms Abstract. Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several realworld databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.
| ML (Machine Learning) | citeseer | train |
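When only one source graph is of interest, the `dataset` column supports filtering directly. A sketch using the `datasets` library's `filter` method on the same hypothetical load as the first snippet:

```python
from datasets import load_dataset

ds = load_dataset("Allen-UQ/node-classification", split="train")  # hypothetical id
pubmed_only = ds.filter(lambda r: r["dataset"] == "pubmed")
print(f"{len(pubmed_only)} pubmed rows out of {len(ds)}")
```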
Classify the node ' (1997b) Probabilistic Modeling for Combinatorial Optimization, : Probabilistic models have recently been utilized for the optimization of large combinatorial search problems. However, complex probabilistic models that attempt to capture inter-parameter dependencies can have prohibitive computational costs. The algorithm presented in this paper, termed COMIT, provides a method for using probabilistic models in conjunction with fast search techniques. We show how COMIT can be used with two very different fast search algorithms: hillclimbing and Population-based incremental learning (PBIL). The resulting algorithms maintain many of the benefits of probabilistic modeling, with far less computational expense. Extensive empirical results are provided; COMIT has been successfully applied to jobshop scheduling, traveling salesman, and knapsack problems. This paper also presents a review of probabilistic modeling for combi natorial optimization.' into one of the following categories:
Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Refer to neighbour nodes:
Neighbour node 0: (1997) MIMIC: Finding Optima by Estimating Probability Densities, : In many optimization problems, the structure of solutions reflects complex relationships between the different input parameters. For example, experience may tell us that certain parameters are closely related and should not be explored independently. Similarly, experience may establish that a subset of parameters must take on particular values. Any search of the cost landscape should take advantage of these relationships. We present MIMIC, a framework in which we analyze the global structure of the optimization landscape. A novel and efficient algorithm for the estimation of this structure is derived. We use knowledge of this structure to guide a randomized search through the solution space and, in turn, to refine our estimate of the structure. Our technique obtains significant speed gains over other randomized optimization procedures.
Neighbour node 1: A promising genetic algorithm approach to job-shop scheduling, rescheduling, and open-shop scheduling problems. :
Neighbour node 2: Hill Climbing with Learning (An Abstraction of Genetic Algorithm). : Simple modification of standard hill climbing optimization algorithm by taking into account learning features is discussed. Basic concept of this approach is the so-called probability vector, whose single entries determine probabilities of appearance of '1' entries in n-bit vectors. This vector is used for the random generation of n-bit vectors that form a neighborhood (specified by the given probability vector). Within the neighborhood a few best solutions (with smallest functional values of a minimized function) are recorded. The feature of learning is introduced here so that the probability vector is updated by a formal analogue of Hebbian learning rule, well-known in the theory of artificial neural networks. The process is repeated until the probability vector entries are close either to zero or to one. The resulting probability vector unambiguously determines an n-bit vector which may be interpreted as an optimal solution of the given optimization task. Resemblance with genetic algorithms is discussed. Effectiveness of the proposed method is illustrated by an example of looking for global minima of a highly multimodal function.
Neighbour node 3: Introduction to the Theory of Neural Computation. : Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines, such as statistics or control theory. The Introduction to the Theory of Neural Computation by Hertz, Krogh and Palmer (subsequently referred to as HKP) is written from the perspective of physics, the home discipline of the authors. The book fulfills its mission as an introduction for neural network novices, provided that they have some background in calculus, linear algebra, and statistics. It covers a number of models that are often viewed as disjoint. Critical analyses and fruitful comparisons between these models
| Genetic Algorithms | cora | train |
Classify the node ' Incremental reduced error pruning. : This paper outlines some problems that may occur with Reduced Error Pruning in Inductive Logic Programming, most notably efficiency. Thereafter a new method, Incremental Reduced Error Pruning, is proposed that attempts to address all of these problems. Experiments show that in many noisy domains this method is much more efficient than alternative algorithms, along with a slight gain in accuracy. However, the experiments show as well that the use of this algorithm cannot be recommended for domains with a very specific concept description.' into one of the following categories:
Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Refer to neighbour nodes:
Neighbour node 0: More Efficient Windowing: Windowing has been proposed as a procedure for efficient memory use in the ID3 decision tree learning algorithm. However, previous work has shown that windowing may often lead to a decrease in performance. In this work, we try to argue that separate-and-conquer rule learning algorithms are more appropriate for windowing than divide-and-conquer algorithms, because they learn rules independently and are less susceptible to changes in class distributions. In particular, we will present a new windowing algorithm that achieves additional gains in efficiency by exploiting this property of separate-and-conquer algorithms. While the presented algorithm is only suitable for redundant, noise-free data sets, we will also briefly discuss the problem of noisy data in windowing and present some preliminary ideas how it might be solved with an extension of the algorithm introduced in this paper.
Neighbour node 1: Transferring and retraining learned information filters. : Any system that learns how to filter documents will suffer poor performance during an initial training phase. One way of addressing this problem is to exploit filters learned by other users in a collaborative fashion. We investigate "direct transfer" of learned filters in this setting, a limiting case for any collaborative learning system. We evaluate the stability of several different learning methods under direct transfer, and conclude that symbolic learning methods that use negatively correlated features of the data perform poorly in transfer, even when they perform well in more conventional evaluation settings. This effect is robust: it holds for several learning methods, when a diverse set of users is used in training the classifier, and even when the learned classifiers can be adapted to the new user's distribution. Our experiments give rise to several concrete proposals for improving generalization performance in a collaborative setting, including a beneficial variation on a feature selection method that has been widely used in text categorization.
| Rule Learning | cora | train |
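For an instruction-tuned model, one plausible way to pose these rows is as chat messages with the prompt as the user turn. The system text below is an assumption for illustration, not the prompt used in training.

```python
def to_messages(row: dict) -> list[dict]:
    """Wrap one row as a chat exchange; the system text is a hypothetical choice."""
    return [
        {"role": "system", "content": "Answer with exactly one category name."},
        {"role": "user", "content": row["problem"]},
    ]

msgs = to_messages({"problem": "Classify the node '...' into ...", "solution": "Agents"})
print(msgs[1]["role"])  # 'user'
```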
Classify the node 'Constraints and Agents in MADEsmart As part of the DARPA Rapid Design Exploration and Optimization (RaDEO) program, Boeing, Philadelphia, is involved in an on-going concurrent design engineering research project called MADEsmart which seeks to partially automate the Integrated Product Team (IPT) concept used by Boeing for organizing the design engineering process, with the aid of intelligent agent technology. Although currently only in an early stage of development, the project is expected to crucially employ a constraint-centered System Design Management Agent developed by the University of Toronto's IE Department in conjunction with Boeing. The SDMA will use the constraint-based Toronto Ontologies for a Virtual Enterprise (TOVE) ontologies, and its domain theories for design engineering and dependent underlying theories, phrased as KIF/Ontolingua assertions in an axiomatic system running in the constraint logic system ECLiPSe, as its primary knowledge resource to monitor an ongoing design project, offering resource-all...' into one of the following categories:
Agents; ML (Machine Learning); IR (Information Retrieval); DB (Databases); HCI (Human-Computer Interaction); AI (Artificial Intelligence).
Refer to neighbour nodes:
Neighbour node 0: Environment Centered Analysis and Design of Coordination Mechanisms May 1995 KEITH S. DECKER B.S., Carnegie Mellon University M.S., Rensselaer Polytechnic Institute Ph.D., University of Massachusetts Amherst Directed by: Professor Victor R. Lesser Committee: Professor Paul R. Cohen Professor John A. Stankovic Professor Douglas L. Anderton Coordination, as the act of managing interdependencies between activities, is one of the central research issues in Distributed Artificial Intelligence. Many researchers have shown that there is no single best organization or coordination mechanism for all environments. Problems in coordinating the activities of distributed intelligent agents appear in many domains: the control of distributed sensor networks; multi-agent scheduling of people and/or machines; distributed diagnosis of errors in local-area or telephone networks; concurrent engineering; `software agents' for information gathering. The design of coordination mechanisms for groups of compu...
| Agents | citeseer | train |
Classify the node 'Title: Extracellular matrix protein-coated scaffolds promote the reversal of diabetes after extrahepatic islet transplantation.
Abstract: BACKGROUND: The survival and function of transplanted pancreatic islets is limited, owing in part to disruption of islet-matrix attachments during the isolation procedure. Using polymer scaffolds as a platform for islet transplantation, we investigated the hypothesis that replacement of key extracellular matrix components known to surround islets in vivo would improve graft function at an extrahepatic implantation site. METHODS: Microporous polymer scaffolds fabricated from copolymers of lactide and glycolide were adsorbed with collagen IV, fibronectin, laminin-332 or serum proteins before seeding with 125 mouse islets. Islet-seeded scaffolds were then implanted onto the epididymal fat pad of syngeneic mice with streptozotocin-induced diabetes. Nonfasting glucose levels, weight gain, response to glucose challenges, and histology were used to assess graft function for 10 months after transplantation. RESULTS: Mice transplanted with islets seeded onto scaffolds adsorbed with collagen IV achieved euglycemia fastest and their response to glucose challenge was similar to normal mice. Fibronectin and laminin similarly promoted euglycemia, yet required more time than collagen IV and less time than serum. Histopathological assessment of retrieved grafts demonstrated that coating scaffolds with specific extracellular matrix proteins increased total islet area in the sections and vessel density within the transplanted islets, relative to controls. CONCLUSIONS: Extracellular matrix proteins adsorbed to microporous scaffolds can enhance the function of transplanted islets, with collagen IV maximizing graft function relative to the other proteins tested. These scaffolds enable the creation of well-defined microenvironments that promote graft efficacy at extrahepatic sites.' into one of the following categories:
Diabetes Mellitus, Experimental; Diabetes Mellitus Type 1; Diabetes Mellitus Type 2.
Refer to neighbour nodes:
Neighbour node 0: Title: Autoimmunity and familial risk of type 1 diabetes.
Abstract: There is evidence that the process leading to type I diabetes may start in early infancy or already in utero. Even though diabetes-associated antibodies can be detected in up to half of the pregnancies of mothers with type I diabetes, pregnancy itself has no major effect on these antibodies. If such antibodies are present in the mother, they are transferred to the fetal circulation and are detectable in cord blood. Most of the transplacentally transferred antibodies disappear by 6 months of age, but may persist even longer. Antibodies present in cord blood may represent true induction of beta-cell autoimmunity, but such a phenomenon is extremely rare. The offspring of affected mothers have a 2% to 3% risk of type I diabetes, which is about one third of that in the offspring of affected fathers. A novel conceivable explanation is that exogenous insulin transplacentally transferred in immune complexes might lead to the induction of tolerance to insulin, which may be the primary autoantigen in type I diabetes. The possible protective or predisposing effect of diabetes-associated antibodies detectable at birth on progression to clinical type I diabetes later will be assessed in ongoing prospective birth cohort studies.
Neighbour node 1: Title: Single-donor, marginal-dose islet transplantation in patients with type 1 diabetes.
Abstract: CONTEXT: Islet allografts from 2 to 4 donors can reverse type 1 diabetes. However, for islet transplants to become a widespread clinical reality, diabetes reversal must be achieved with a single donor to reduce risks and costs and increase the availability of transplantation. OBJECTIVE: To assess the safety of a single-donor, marginal-dose islet transplant protocol using potent induction immunotherapy and less diabetogenic maintenance immunosuppression in recipients with type 1 diabetes. A secondary objective was to assess the proportion of islet transplant recipients who achieve insulin independence in the first year after single-donor islet transplantation. DESIGN, SETTING, AND PARTICIPANTS: Prospective, 1-year follow-up trial conducted July 2001 to August 2003 at a single US center and enrolling 8 women with type 1 diabetes accompanied by recurrent hypoglycemia unawareness or advanced secondary complications. INTERVENTIONS: Study participants underwent a primary islet allotransplant with 7271 (SD, 1035) islet equivalents/kg prepared from a single cadaver donor pancreas. Induction immunosuppression was with antithymocyte globulin, daclizumab, and etanercept. Maintenance immunosuppression consisted of mycophenolate mofetil, sirolimus, and no or low-dose tacrolimus. MAIN OUTCOME MEASURES: Safety (assessed by monitoring the severity and duration of adverse events) and efficacy (assessed by studying the recipients' insulin requirements, C-peptide levels, oral and intravenous glucose tolerance results, intravenous arginine stimulation responses, glycosylated hemoglobin levels, and hypoglycemic episodes) associated with the study transplant protocol. RESULTS: There were no serious, unexpected, or procedure- or immunosuppression-related adverse events. All 8 recipients achieved insulin independence and freedom from hypoglycemia. Five remained insulin-independent for longer than 1 year. Graft failure in 3 recipients was preceded by subtherapeutic sirolimus exposure in the absence of measurable tacrolimus trough levels. CONCLUSIONS: The tested transplant protocol restored insulin independence and protected against hypoglycemia after single-donor, marginal-dose islet transplantation in 8 of 8 recipients. These results may be related to improved islet engraftment secondary to peritransplant administration of antithymocyte globulin and etanercept. These findings may have implications for the ongoing transition of islet transplantation from clinical investigation to routine clinical care.
Neighbour node 2: Title: Management of insulin-dependent diabetes mellitus.
Abstract: Insulin therapy has been lifesaving for patients with insulin-dependent diabetes mellitus. Unfortunately, longer lifespan has unmasked microvascular, neurological and macrovascular complications that result in profound morbidity and increased mortality. Driven by the conviction that better physiological control of glycaemic levels will prevent and/or ameliorate long term complications, and by the desire to make diabetes care as user-friendly as possible, clinical research efforts have led to the development of new treatment methods with the aim of achieving near normal metabolic control. Such methods include the use of self monitoring, multiple daily insulin injection regimens, external and implantable insulin pumps, and whole organ pancreas and isolated islet cell transplantation. In addition, dietary manipulation, including the use of alpha-glucosidase inhibitors, has played a role in controlling glycaemia.
Neighbour node 3: Title: Five-year follow-up after clinical islet transplantation.
Abstract: Islet transplantation can restore endogenous beta-cell function to subjects with type 1 diabetes. Sixty-five patients received an islet transplant in Edmonton as of 1 November 2004. Their mean age was 42.9 +/- 1.2 years, their mean duration of diabetes was 27.1 +/- 1.3 years, and 57% were women. The main indication was problematic hypoglycemia. Forty-four patients completed the islet transplant as defined by insulin independence, and three further patients received >16,000 islet equivalents (IE)/kg but remained on insulin and are deemed complete. Those who became insulin independent received a total of 799,912 +/- 30,220 IE (11,910 +/- 469 IE/kg). Five subjects became insulin independent after one transplant. Fifty-two patients had two transplants, and 11 subjects had three transplants. In the completed patients, 5-year follow-up reveals that the majority ( approximately 80%) have C-peptide present post-islet transplant, but only a minority ( approximately 10%) maintain insulin independence. The median duration of insulin independence was 15 months (interquartile range 6.2-25.5). The HbA(1c) (A1C) level was well controlled in those off insulin (6.4% [6.1-6.7]) and in those back on insulin but C-peptide positive (6.7% [5.9-7.5]) and higher in those who lost all graft function (9.0% [6.7-9.3]) (P < 0.05). Those who resumed insulin therapy did not appear more insulin resistant compared with those off insulin and required half their pretransplant daily dose of insulin but had a lower increment of C-peptide to a standard meal challenge (0.44 +/- 0.06 vs. 0.76 +/- 0.06 nmol/l, P < 0.001). The Hypoglycemic score and lability index both improved significantly posttransplant. In the 128 procedures performed, bleeding occurred in 15 and branch portal vein thrombosis in 5 subjects. Complications of immunosuppressive therapy included mouth ulcers, diarrhea, anemia, and ovarian cysts. Of the 47 completed patients, 4 required retinal laser photocoagulation or vitrectomy and 5 patients with microalbuminuria developed macroproteinuria. The need for multiple antihypertensive medications increased from 6% pretransplant to 42% posttransplant, while the use of statin therapy increased from 23 to 83% posttransplant. There was no change in the neurothesiometer scores pre- versus posttransplant. In conclusion, islet transplantation can relieve glucose instability and problems with hypoglycemia. C-peptide secretion was maintained in the majority of subjects for up to 5 years, although most reverted to using some insulin. The results, though promising, still point to the need for further progress in the availability of transplantable islets, improving islet engraftment, preserving islet function, and reducing toxic immunosuppression.
Neighbour node 4: Title: Beneficial effect of pretreatment of islets with fibronectin on glucose tolerance after islet transplantation.
Abstract: The scarcity of available islets is an obstacle for clinically successful islet transplantation. One solution might be to increase the efficacy of the limited islets. Isolated islets are exposed to a variety of cellular stressors, and disruption of the cell-matrix connections damages islets. We examined the effect of fibronectin, a major component of the extracellular matrix, on islet viability, mass and function, and also examined whether fibronectin-treated islets improved the results of islet transplantation. Islets cultured with fibronectin for 48 hours maintained higher cell viability (0.146 +/- 0.010 vs. 0.173 +/- 0.007 by MTT assay), and also had a greater insulin and DNA content (86.8 +/- 3.6 vs. 72.8 +/- 3.2 ng/islet and 35.2 +/- 1.4 vs. 30.0 +/- 1.5 ng/islet, respectively) than islets cultured without fibronectin (control). Absolute values of insulin secretion were higher in fibronectin-treated islets than in controls; however, the ratio of stimulated insulin secretion to basal secretion was not significantly different (206.9 +/- 23.3 vs. 191.7 +/- 20.2% when the insulin response to 16.7 mmol/l glucose was compared to that of 3.3 mmol/l glucose); the higher insulin secretion was thus mainly due to larger islet cell mass. The rats transplanted with fibronectin-treated islets had lower plasma glucose and higher plasma insulin levels within 2 weeks after transplantation, and had more favorable glucose tolerance 9 weeks after transplantation. These results indicate that cultivation with fibronectin might preserve islet cell viability, mass and insulin secretory function, which could improve glucose tolerance following islet transplantation.
Neighbour node 5: Title: Immunology: Insulin auto-antigenicity in type 1 diabetes.
Abstract: Spontaneous type 1 diabetes occurs when the autoimmune destruction of pancreatic beta-islet cells prevents production of the hormone insulin. This causes an inability to regulate glucose metabolism, which results in dangerously raised blood glucose concentrations. It is generally accepted that thymus-derived lymphocytes (T cells) are critically involved in the onset and progression of type 1 diabetes, but the antigens that initiate and drive this destructive process remain poorly characterized--although several candidates have been considered. Nakayama et al. and Kent et al. claim that insulin itself is the primary autoantigen that initiates spontaneous type 1 diabetes in mice and humans, respectively, a result that could have implications for more effective prevention and therapy. However, I believe that this proposed immunological role of insulin may be undermined by the atypical responses of T cells to the human insulin fragment that are described by Kent et al..
Neighbour node 6: Title: Reversal of diabetes by pancreatic islet transplantation into a subcutaneous, neovascularized device.
Abstract: BACKGROUND: Transplantation of pancreatic islets for the treatment of type 1 diabetes allows for physiologic glycemic control and insulin-independence when sufficient islets are implanted via the portal vein into the liver. Intrahepatic islet implantation requires specific infrastructure and expertise, and risks inherent to the procedure include bleeding, thrombosis, and elevation of portal pressure. Additionally, the relatively higher drug metabolite concentrations in the liver may contribute to the delayed loss of graft function of recent clinical trials. Identification of alternative implantation sites using biocompatible devices may be of assistance improving graft outcome. A desirable bioartificial pancreas should be easy to implant, biopsy, and retrieve, while allowing for sustained graft function. The subcutaneous (SC) site may require a minimally invasive procedure performed under local anesthesia, but its use has been hampered so far by lack of early vascularization, induction of local inflammation, and mechanical stress on the graft. METHODS: Chemically diabetic rats received syngeneic islets into the liver or SC into a novel biocompatible device consisting of a cylindrical stainless-steel mesh. The device was implanted 40 days prior to islet transplantation to allow embedding by connective tissue and neovascularization. Reversal of diabetes and glycemic control was monitored after islet transplantation. RESULTS: Syngeneic islets transplanted into a SC, neovascularized device restored euglycemia and sustained function long-term. Removal of graft-bearing devices resulted in hyperglycemia. Explanted grafts showed preserved islets and intense vascular networks. CONCLUSIONS: Ease of implantation, biocompatibility, and ability to maintain long-term graft function support the potential of our implantable device for cellular-based reparative therapies.
Neighbour node 7: Title: Prospective and challenges of islet transplantation for the therapy of autoimmune diabetes.
Abstract: Pancreatic islet cell transplantation is an attractive treatment of type 1 diabetes (T1D). The success enhanced by the Edmonton protocol has fostered phenomenal progress in the field of clinical islet transplantation in the past 5 years, with 1-year rates of insulin independence after transplantation near 80%. Long-term function of the transplanted islets, however, even under the Edmonton protocol, seems difficult to accomplish, with only 10% of patients maintaining insulin independence 5 years after transplantation. These results differ from the higher metabolic performance achieved by whole pancreas allotransplantation, and autologous islet cell transplantation, and form the basis for a limited applicability of islet allografts to selected adult patients. Candidate problems in islet allotransplantation deal with alloimmunity, autoimmunity, and the need for larger islet cell masses. Employment of animal islets and stem cells, as alternative sources of insulin production, will be considered to face the problem of human tissue shortage. Emerging evidence of the ability to reestablish endogenous insulin production in the pancreas even after the diabetic damage occurs envisions the exogenous supplementation of islets to patients also as a temporary therapeutic aid, useful to buy time toward a possible self-healing process of the pancreatic islets. All together, islet cell transplantation is moving forward.
Neighbour node 8: Title: Streptozotocin-induced diabetes in large animals (pigs/primates): role of GLUT2 transporter and beta-cell plasticity.
Abstract: BACKGROUND: To induce irreversible diabetes in large animals, the efficiency of streptozotocin (STZ) was evaluated in pigs, primates and compared to the gold standard model in rats. METHODS: Low (50 mg/kg) and high (150 mg/kg) doses of STZ were tested. Hepatic/renal function, glucose metabolism (intravenous glucose tolerance tests, fasting blood glucose) and histomorphometry were evaluated prior to, 1, and 4 weeks after STZ treatment. RESULTS: In rats and primates, expressing a high level of GLUT2 expression on beta cells, a dose of 50 mg/kg STZ induced irreversible diabetes (due to the 97% destruction of beta cell mass) without provoking liver or renal failure. In pigs, despite the use of high STZ dose, partial correction of hyperglycaemia was observed four weeks after STZ injection (decreased fasting blood glucose and intravenous glucose tolerance tests; increased insulin production). The correction of hyperglycaemia was associated with significant hypertrophy of immature pig beta-cell clusters (+30%, P<0.05), whereas no hypertrophy was observed in rats/primates. CONCLUSION: These results demonstrated that STZ might be used to induce irreversible diabetes in rats and primates. In contrast, the low STZ sensitivity in pigs related to a low expression of GLUT2, higher number of immature beta cells and compensatory beta-cell hypertrophy, renders STZ-induced diabetes inappropriate for studying islet allografts in swine.
Neighbour node 9: Title: The epididymal fat pad as a transplant site for minimal islet mass.
Abstract: The epididymal fat pad was evaluated as a site of islet transplantation in a syngeneic murine model of diabetes by comparing the transplant outcomes to those of islets transplanted intraportal. Mouse islets engrafted on the intra-abdominal epididymal fat pad ameliorated streptozotocin-induced hyperglycemia with similar efficacy as grafts implanted intraportally. Mice that received as few as 50 islets, either intraportal or in the epididymal fat pad, displayed similar glucose tolerance curves. Bioluminescence imaging and glucose measurement showed stable luminescence signals and blood glucose levels for over 5 months in both transplant sites using transgenic luciferase-positive islets. Prompt recurrent hyperglycemia occurred in all mice after removal of the epididymal fat pad bearing the islet graft. Histological examination of the grafts showed well-granulated insulin-containing cells surrounded by healthy adipocytes. This study indicates that the epididymal fat pad may be a useful islet transplant site in the mouse model for effective glycemic control.
Neighbour node 10: Title: Autoimmune destruction of pancreatic beta cells.
Abstract: Type 1 diabetes results from the destruction of insulin-producing pancreatic beta cells by a beta cell-specific autoimmune process. Beta cell autoantigens, macrophages, dendritic cells, B lymphocytes, and T lymphocytes have been shown to be involved in the pathogenesis of autoimmune diabetes. Beta cell autoantigens are thought to be released from beta cells by cellular turnover or damage and are processed and presented to T helper cells by antigen-presenting cells. Macrophages and dendritic cells are the first cell types to infiltrate the pancreatic islets. Naive CD4+ T cells that circulate in the blood and lymphoid organs, including the pancreatic lymph nodes, may recognize major histocompatibility complex and beta cell peptides presented by dendritic cells and macrophages in the islets. These CD4+ T cells can be activated by interleukin (IL)-12 released from macrophages and dendritic cells. While this process takes place, beta cell antigen-specific CD8+ T cells are activated by IL-2 produced by the activated TH1 CD4+ T cells, differentiate into cytotoxic T cells and are recruited into the pancreatic islets. These activated TH1 CD4+ T cells and CD8+ cytotoxic T cells are involved in the destruction of beta cells. In addition, beta cells can also be damaged by granzymes and perforin released from CD8+ cytotoxic T cells and by soluble mediators such as cytokines and reactive oxygen molecules released from activated macrophages in the islets. Thus, activated macrophages, TH1 CD4+ T cells, and beta cell-cytotoxic CD8+ T cells act synergistically to destroy beta cells, resulting in autoimmune type 1 diabetes.
Neighbour node 11: Title: Discordant trends in microvascular complications in adolescents with type 1 diabetes from 1990 to 2002.
Abstract: OBJECTIVE: Since the Diabetes Control and Complications Trial, diabetes management goals have changed. The aims of the present study were to assess complication rates, including nerve abnormalities, in adolescents from 1990 to 2002 and to investigate associated risk factors. RESEARCH DESIGN AND METHODS: Cross-sectional analysis of complications was assessed in three study periods (1990-1994 [T1], 1995-1998 [T2], and 1999-2002 [T3]) in adolescents matched for age and diabetes duration (n = 878, median age 14.6 years, median duration 7.5 years). Retinopathy was assessed by seven-field stereoscopic fundal photography, albumin excretion rate (AER) from three consecutive timed overnight urine collections, peripheral nerve function by thermal and vibration thresholds, and autonomic nerve function by cardiovascular reflexes. RESULTS: Retinopathy declined significantly (T1, 49%; T2, 31%; and T3, 24%; P < 0.0001), early elevation of AER (> or = 7.5 microg/min) declined (38, 30, and 25%, respectively, P = 0.022), and microalbuminuria (AER > or = 20 microg/min) declined (7, 3, and 3%, respectively; P = 0.017, T1 vs. T2 and T3). Autonomic nerve abnormalities were unchanged (18, 21, and 18%, respectively; P = 0.60), but peripheral nerve abnormalities increased (12, 19, and 24%, respectively; P = 0.0017). More patients were treated with three or more injections per day (12, 46, and 67%, respectively; P < 0.0001) and insulin dose increased (1.08, 1.17, and 1.22 units x kg(-1) x day(-1), respectively; P < 0.0001), but median HbA(1c) (A1C) was unchanged (8.5, 8.5, and 8.4%, respectively). BMI and height SD score increased: BMI 0.46, 0.67, and 0.79, respectively (P < 0.0001), and height -0.09, 0.05, and 0.27, respectively (P < 0.0001). CONCLUSIONS: Retinopathy and microalbuminuria declined over time in this cohort, but the increased rate of peripheral nerve abnormalities is of concern. Despite intensified management (higher insulin dose and more injections), A1C has not changed and remains well above the recommended targets for adolescents.
Neighbour node 12: Title: Impaired revascularization of transplanted mouse pancreatic islets is chronic and glucose-independent.
Abstract: BACKGROUND: Pancreatic islets are avascular immediately after transplantation and depend on revascularization. Recently, the authors found decreased vascular density in mouse islets 1 month after implantation into nondiabetic recipients. This study investigated possible differences in revascularization between islets implanted into nondiabetic and diabetic recipients, and also evaluated changes in vascular density up to 6 months posttransplantation. METHODS: Islets were syngenically transplanted beneath the renal capsule of normoglycemic or alloxan-diabetic C57BL/6 mice. One to 6 months later, the animals were killed and the grafts removed. Histologic slides were prepared and stained with Bandeiraea simplicifolia. RESULTS: The vascular density in all transplanted islets was decreased compared with native islets. There were no differences in the islet graft vascular density between nondiabetic and diabetic animals. No improvement over time occurred. CONCLUSIONS: The vascular density is decreased in islets implanted to cure diabetic recipients. No improvement occurs in transplanted islets after 1 month posttransplantation.
Neighbour node 13: Title: The challenge of type 1 diabetes mellitus.
Abstract: Diabetes mellitus is a heterogeneous group of diseases characterized by high blood glucose levels due to defects in insulin secretion, insulin action, or both. With the number of cases expected to increase rapidly in the years to come, diabetes is a growing health challenge worldwide. Of the approximately 16 million diabetics in the United States, about 1.5 million suffer from type 1 diabetes. In this catabolic disorder afflicting predominantly young individuals, blood insulin is almost completely absent, leading to hyperglycemia and alterations in lipid metabolism. Type 1 diabetes is thought to be induced by a toxic or infectious insult that occurs in genetically predisposed individuals. With recent advances in the understanding of the involved immunology and cellular and molecular mechanisms, researchers strive to battle the disease with new preventive and corrective strategies.
Neighbour node 14: Title: Achieving and maintaining insulin independence in human islet transplant recipients.
Abstract: For islet transplants to complete the transition from clinical research to clinical care, restoration of insulin independence must be achieved--as with pancreas transplants--with a single donor. To achieve this critical milestone more consistently it will be imperative to pursue the following complementary strategies simultaneously: 1) enhancing the metabolic potency, inflammatory resilience, and immune stealth of isolated islets; 2) inhibiting the thrombotic and inflammatory responses to transplanted islets; and 3) achieving immune protection with strategies lacking diabetogenic side effects. Maintaining insulin independence will be a different challenge requiring us to clarify whether failure of initially successful islet allografts in type 1 diabetes is related to: 1) failure of immunosuppressive regimens to control alloimmunity and autoimmunity; 2) failure of islet regeneration in the presence of currently applied immunosuppressive regimens; and/or 3) failure of islet neogenesis in the absence of an adequate mass and viability of co-transplanted/engrafted islet precursor cells.
|
Diabetes Mellitus, Experimental
|
pubmed
|
train
|
Classify the node 'The BubbleBadge: A Wearable Public Display We are exploring the design space of wearable computers by designing "public" wearable computer displays. This paper describes our first prototype, the BubbleBadge. By effectively turning the wearer's private display "inside out", the BubbleBadge transforms the wearable computing concept by making digital information public rather than private. User tests showed that the device introduces a new way to interact with information-providing devices, suggesting that it would be valuable to explore the concept further. Keywords Wearable computers, interaction technology, public displays INTRODUCTION A wearable computer is defined as a continuously running, augmenting and mediating computational device [2]. Wearable computers are usually highly private, since both input and output is controlled and seen only by the user, who is effectively "hiding" behind a hand-held keyboard and a head-mounted display. But while wearable computing can be a powerful tool for the single user, there is usuall...' into one of the following categories:
Agents; ML (Machine Learning); IR (Information Retrieval); DB (Databases); HCI (Human-Computer Interaction); AI (Artificial Intelligence).
Refer to neighbour nodes:
Neighbour node 0: Augmented Workspace: The World as Your Desktop. We live in a three dimensional world, and much of what we do and how we interact in the physical world has a strong spatial component. Unfortunately, most of our interaction with the virtual world is two dimensional. We are exploring the extension of the 2D desktop workspace into the 3D physical world, using a stereoscopic see-through head-mounted display. We have built a prototype that enables us to overlay virtual windows on the physical world. This paper describes the Augmented Workspace, which allows a user to position windows in a 3D work area. Keywords. Ubiquitous computing, cooperative buildings, human-computer interaction, physical space, context awareness, visualization. 1. Introduction In our daily lives, much of what we do and how we interact has a strong spatial component. Your calendar is on a wall, or on a certain part of your desk, and sticky notes are placed on walls and whiteboards. Yet, as an increasing portion of our work is done on computers, a large m...
Neighbour node 1: The WearBoy: A Platform for Low-cost Public Wearable Devices We introduce the WearBoy -- a wearable, modified Nintendo GameBoy -- as a platform for exploring public wearable devices. We have minimized a Color GameBoy to enable users to comfortably wear it, making the device not much larger than the actual screen. Technical properties of the WearBoy are discussed, along with two applications using the platform. 1. Introduction Currently, many wearable computing prototypes are rather clumsy and heavy to wear, and often rely on several different electronic devices connected together by cables hidden in the user's clothing. This might be necessary for computationally demanding applications, but in many cases the application does not need much computational power, especially not if wireless access to more powerful resources is available. Several such low-end wearable platforms have been built and tested, e.g. the Thinking Tags [1]. These prototypes are usually custom designed around a small microcontroller with some additional features, but commonl...
|
HCI (Human-Computer Interaction)
|
citeseer
|
train
|
Classify the node 'Recombination Operator, its Correlation to the Fitness Landscape and Search Performance: The author reserves all other publication and other rights in association with the copyright in the thesis, and except as hereinbefore provided, neither the thesis nor any substantial portion thereof may be printed or otherwise reproduced in any material form whatever without the author's prior written permission.' into one of the following categories:
Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Refer to neighbour nodes:
Neighbour node 0: Genetic Algorithms in Search, Optimization and Machine Learning. : Angeline, P., Saunders, G. and Pollack, J. (1993) An evolutionary algorithm that constructs recurrent neural networks, LAIR Technical Report #93-PA-GNARLY, Submitted to IEEE Transactions on Neural Networks Special Issue on Evolutionary Programming.
Neighbour node 1: On the Virtues of Parameterized Uniform Crossover, : Traditionally, genetic algorithms have relied upon 1 and 2-point crossover operators. Many recent empirical studies, however, have shown the benefits of higher numbers of crossover points. Some of the most intriguing recent work has focused on uniform crossover, which involves on the average L/2 crossover points for strings of length L. Theoretical results suggest that, from the view of hyperplane sampling disruption, uniform crossover has few redeeming features. However, a growing body of experimental evidence suggests otherwise. In this paper, we attempt to reconcile these opposing views of uniform crossover and present a framework for understanding its virtues.
Neighbour node 2: "Evolution in Time and Space: The Parallel Genetic Algorithm." In Foundations of Genetic Algorithms, : The parallel genetic algorithm (PGA) uses two major modifications compared to the genetic algorithm. Firstly, selection for mating is distributed. Individuals live in a 2-D world. Selection of a mate is done by each individual independently in its neighborhood. Secondly, each individual may improve its fitness during its lifetime by e.g. local hill-climbing. The PGA is totally asynchronous, running with maximal efficiency on MIMD parallel computers. The search strategy of the PGA is based on a small number of active and intelligent individuals, whereas a GA uses a large population of passive individuals. We will investigate the PGA with deceptive problems and the traveling salesman problem. We outline why and when the PGA is successful. Abstractly, a PGA is a parallel search with information exchange between the individuals. If we represent the optimization problem as a fitness landscape in a certain configuration space, we see that a PGA tries to jump from two local minima to a third, still better local minimum, by using the crossover operator. This jump is (probabilistically) successful if the fitness landscape has a certain correlation. We show the correlation for the traveling salesman problem by a configuration space analysis. The PGA explores implicitly the above correlation.
Neighbour node 3: Genetic programming of minimal neural nets using Occam's razor. : A genetic programming method is investigated for optimizing both the architecture and the connection weights of multilayer feedforward neural networks. The genotype of each network is represented as a tree whose depth and width are dynamically adapted to the particular application by specifically defined genetic operators. The weights are trained by a next-ascent hillclimbing search. A new fitness function is proposed that quantifies the principle of Occam's razor. It makes an optimal trade-off between the error fitting ability and the parsimony of the network. We discuss the results for two problems of differing complexity and study the convergence and scaling properties of the algorithm.
Neighbour node 4: "A Survey of Evolutionary Strategies," :
|
Genetic Algorithms
|
cora
|
train
|
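An aside on the row above: neighbour node 1 describes parameterized uniform crossover, in which each string position is exchanged between two parents with a fixed probability. A minimal sketch of that operator in Python (the function name and the `p_swap` parameter are illustrative choices, not taken from the cited paper):

```python
import random

def uniform_crossover(parent_a, parent_b, p_swap=0.5):
    """Parameterized uniform crossover on two equal-length bit strings.

    Each position is exchanged with probability p_swap; p_swap = 0.5
    recovers classic uniform crossover, with L/2 expected crossover
    points for strings of length L.
    """
    assert len(parent_a) == len(parent_b)
    child_a, child_b = list(parent_a), list(parent_b)
    for i in range(len(parent_a)):
        if random.random() < p_swap:
            child_a[i], child_b[i] = child_b[i], child_a[i]
    return child_a, child_b
```

Lower values of p_swap disrupt fewer positions per application, which is one way to read the disruption-versus-exploration trade-off the abstract discusses.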
Classify the node 'Title: Chronic renal failure in non-insulin-dependent diabetes mellitus. A population-based study in Rochester, Minnesota.
Abstract: STUDY OBJECTIVE: To identify the incidence of clinically defined chronic renal failure by clinical type of diabetes in a community diabetic incidence cohort, and to evaluate the relation between persistent proteinuria and chronic renal failure in non-insulin-dependent diabetes mellitus. DESIGN: Retrospective incidence cohort study. SETTING: Population-based in Rochester, Minnesota. PATIENTS: Residents of Rochester, Minnesota, with diabetes initially diagnosed between 1945 and 1979 who had follow-up to 1984 for clinically defined chronic renal failure. MEASUREMENTS AND MAIN RESULTS: Among 1832 persons with non-insulin-dependent diabetes who were initially free of chronic renal failure, 25 developed chronic renal failure (incidence, 133 per 100,000 person-years: CI, 86 to 196). The subsequent incidence of chronic renal failure among 136 insulin-dependent diabetic Rochester residents, three of whom developed chronic renal failure, was 170 per 100,000 person-years (CI, 35 to 497). After adjusting for potential confounding factors, we found that the risk for chronic renal failure associated with the presence of persistent proteinuria at the time of the diagnosis of non-insulin-dependent diabetes was increased 12-fold (hazard ratio, 12.1; CI, 4.3 to 34.0). When persistent proteinuria developed after the diagnosis of non-insulin-dependent diabetes mellitus, the cumulative risk for chronic renal failure 10 years after the diagnosis of persistent proteinuria was 11%. CONCLUSIONS: These population-based data suggest that most cases of chronic renal failure in diabetes occur in persons with non-insulin-dependent diabetes. These data also identify the increased risk for chronic renal failure among persons with non-insulin-dependent diabetes mellitus who have persistent proteinuria present at or developing after the diagnosis of non-insulin-dependent diabetes mellitus; such data may be useful for directing interventions to prevent or delay the development of chronic renal failure.
Diabetes Mellitus, Experimental; Diabetes Mellitus Type 1; Diabetes Mellitus Type 2.
Refer to neighbour nodes:
Neighbour node 0: Title: The effect of comorbid illness and functional status on the expected benefits of intensive glucose control in older patients with type 2 diabetes: a decision analysis.
Abstract: BACKGROUND: Physicians are uncertain about when to pursue intensive glucose control among older patients with diabetes. OBJECTIVE: To assess the effect of comorbid illnesses and functional status, mediated through background mortality, on the expected benefits of intensive glucose control. DESIGN: Decision analysis. DATA SOURCES: Major clinical studies in diabetes and geriatrics. TARGET POPULATION: Patients 60 to 80 years of age who have type 2 diabetes and varied life expectancies estimated from a mortality index that was validated at the population level. TIME HORIZON: Patient lifetime. PERSPECTIVE: Health care system. INTERVENTION: Intensive glucose control (hemoglobin A1c [HbA1c] level of 7.0) versus moderate glucose control (HbA1c level of 7.9). OUTCOME MEASURES: Lifetime differences in incidence of complications and average quality-adjusted days. RESULTS OF BASE-CASE ANALYSIS: Healthy older patients of different age groups had expected benefits of intensive glucose control ranging from 51 to 116 quality-adjusted days. Within each age group, the expected benefits of intensive control steadily declined as the level of comorbid illness and functional impairment increased (mortality index score, 1 to 26 points). For patients 60 to 64 years of age with new-onset diabetes, the benefits declined from 106 days at baseline good health (life expectancy, 14.6 years) to 44 days with 3 additional index points (life expectancy, 9.7 years) and 8 days with 7 additional index points (life expectancy, 4.8 years). A similar decline in benefits occurred among patients with prolonged duration of diabetes. RESULTS OF SENSITIVITY ANALYSIS: With alternative model assumptions (such as Framingham models), expected benefits of intensive control declined as mortality index scores increased. LIMITATIONS: Diabetes clinical trial data were lacking for frail, older patients. The mortality index was not validated for use in predicting individual-level life expectancies. Adverse effects of intensive control were not taken into account. CONCLUSION: Among older diabetic patients, the presence of multiple comorbid illnesses or functional impairments is a more important predictor of limited life expectancy and diminishing expected benefits of intensive glucose control than is age alone.
|
Diabetes Mellitus Type 2
|
pubmed
|
train
|
Classify the node 'A Multi-Plane State Machine Agent Model This paper presents a framework for implementing collaborative network agents. Agents are assembled dynamically from components into a structure described by a multi-plane state machine model. This organization lends itself to elegant implementations of remote control, collaboration, checkpointing and mobility, the defining features of an agent system. It supports techniques, like agent surgery, that are difficult to reproduce with other approaches. The reference implementation for our model, the Bond agent system, is distributed under an open source license and can be downloaded from http://bond.cs.purdue.edu. 1 Introduction The field of agents is witnessing the convergence of researchers from several fields. Some see agents as a natural extension of the object-oriented programming paradigm [14, 15]. One of the most popular books on artificial intelligence reinterprets the whole field in terms of agents [2]. Contemporary work on the theory of behavior provides the foundations for theoretical mo...' into one of the following categories:
Agents; ML (Machine Learning); IR (Information Retrieval); DB (Databases); HCI (Human-Computer Interaction); AI (Artificial Intelligence).
Refer to neighbour nodes:
Neighbour node 0: The Isomorphism Between a Class of Place Transition Nets and a Multi-Plane State Machine Agent Model Recently we introduced a multi-plane state machine model of an agent, released an implementation of the model, and designed several applications of the agent framework. In this paper we address the translation from the Petri net language to the Blueprint language used for agent description as well as the translation from Blueprint to Petri Nets. The simulation of a class of Place Transition Nets is part of an effort to create an agent-based workflow management system. Contents: 1 Introduction; 2 A Multi-Plane State Machine Agent Model (2.1 Bond Core, 2.2 Bond Services, 2.3 Bond Agents, 2.4 Using Planes to Implement Facets of Behavior); 3 Simulating a Class of Place-Transition Nets on the Bond ...
|
Agents
|
citeseer
|
train
|
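An aside on the row above: the multi-plane state machine idea can be pictured as several independent finite-state machines that advance in parallel, one per facet of behavior. The sketch below is a hypothetical illustration of that structure only; the class and method names are invented here and do not reproduce the Bond system's actual API.

```python
class Plane:
    """One facet of agent behavior, modeled as a small finite-state machine."""

    def __init__(self, transitions, initial):
        self.transitions = transitions  # maps (state, event) -> next state
        self.state = initial

    def fire(self, event):
        # Events with no matching transition leave this plane's state unchanged.
        self.state = self.transitions.get((self.state, event), self.state)


class MultiPlaneAgent:
    """Planes advance independently on each event, so facets such as control,
    collaboration and migration stay separable; a plane could even be replaced
    at run time, in the spirit of the 'agent surgery' mentioned above."""

    def __init__(self, planes):
        self.planes = planes  # name -> Plane

    def dispatch(self, event):
        for plane in self.planes.values():
            plane.fire(event)
```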
Classify the node 'Situation Aware Computing with Wearable Computers 1 Motivation for contextually aware computing: For most computer systems, even virtual reality systems, sensing techniques are a means of getting input directly from the user. However, wearable sensors and computers offer a unique opportunity to re-direct sensing technology towards recovering more general user context. Wearable computers have the potential to "see" as the user sees, "hear" as the user hears, and experience the life of the user in a "first-person" sense. This increase in contextual and user information may lead to more intelligent and fluid interfaces that use the physical world as part of the interface. Wearable computers are excellent platforms for contextually aware applications, but these applications are also necessary to use wearables to their fullest. Wearables are more than just highly portable computers; they perform useful work even while the wearer isn't directly interacting with the system. In such environments the user needs to concentrate on his environment, not on the computer interface, so the wearable needs to use information from the wearer's context to be the least distracting. For example, imagine an interface which is aware of the user's location: while being in the subway, the system might alert him with a' into one of the following categories:
Agents; ML (Machine Learning); IR (Information Retrieval); DB (Databases); HCI (Human-Computer Interaction); AI (Artificial Intelligence).
Refer to neighbour nodes:
Neighbour node 0: Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video Hidden Markov models (HMM's) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language. We describe two experiments that demonstrate a realtime HMM-based system for recognizing sentence level American Sign Language (ASL) without explicitly modeling the fingers. The first experiment tracks hands wearing colored gloves and attains a word accuracy of 99%. The second experiment tracks hands without gloves and attains a word accuracy of 92%. Both experiments have a 40 word lexicon. 1 Introduction While there are many different types of gestures, the most structured sets belong to the sign languages. In sign language, each gesture already has assigned meaning, and strong rules of context and grammar may be applied to make recognition tractable. To date, most work on sign language recognition has employed expensi...
|
HCI (Human-Computer Interaction)
|
citeseer
|
train
|
Classify the node 'Overview of Datalog Extensions with Tuples and Sets Datalog (with negation) is the most powerful query language for relational databases, with a well-defined declarative semantics based on the work in logic programming. However, Datalog only allows inexpressive flat structures and cannot directly support complex values such as nested tuples and sets common in novel database applications. For these reasons, Datalog has been extended in the past several years to incorporate tuple and set constructors. In this paper, we examine four different Datalog extensions: LDL, COL, Hilog and Relationlog. 1 Introduction Databases and logic programming are two independently developed areas in computer science. Database technology has evolved in order to effectively and efficiently organize, manage and maintain large volumes of ever increasingly complex data reliably in various memory devices. The underlying structure of databases has been the primary focus of research which leads to the development of data models. The most well-known and widely used da...' into one of the following categories:
Agents; ML (Machine Learning); IR (Information Retrieval); DB (Databases); HCI (Human-Computer Interaction); AI (Artificial Intelligence).
Refer to neighbour nodes:
Neighbour node 0: Introduction to the Relationlog System Advanced applications require construction, efficient access and management of large databases with rich data structures and inference mechanisms. However, such capabilities are not directly supported by the existing database systems. In this paper, we describe Relationlog, a persistent deductive database system that is able to directly support the storage, efficient access and inference of data with complex structures. 1 Introduction Advanced applications require construction, efficient access and management of large databases with rich data structures and inference mechanisms. However, such capabilities are not directly supported by the existing database systems. Deductive databases have the potential to meet the demands of advanced applications. They grew out of the integration of logic programming and relational database technologies. They are intended to combine the best of the two approaches, such as representational and operational uniformity, inference capabilities, recursion,...
Neighbour node 1: Relationlog: A Typed Extension to Datalog with Sets and Tuples This paper presents a novel logic programming based language for nested relational and complex value models called Relationlog. It stands in the same relationship to the nested relational and complex value models as Datalog stands to the relational model. The main novelty of the language is the introduction of powerful mechanisms, namely, partial and complete set terms, for representing and manipulating both partial and complete information on nested sets, tuples and relations. They generalize the set grouping and set enumeration mechanisms of LDL and allow the user to directly encode the open and closed world assumptions on nested sets, tuples, and relations. They allow direct inference and access to deeply embedded values in a complex value relation as if the relation is normalized, which greatly increases the ease of use of the language. As a result, the extended relational algebra operations can be represented in Relationlog directly, and more importantly, recursively in a way similar to Datalog. Like Datalog, Relationlog has a well-defined Herbrand model-theoretic semantics, which captures the intended semantics of nested sets, tuples and relations, and also a well-defined proof-theoretic semantics which coincides with its model-theoretic semantics.
|
DB (Databases)
|
citeseer
|
train
|
Classify the node ' Mutation rates as adaptations. : In order to better understand life, it is helpful to look beyond the envelope of life as we know it. A simple model of coevolution was implemented with the addition of a gene for the mutation rate of the individual. This allowed the mutation rate itself to evolve in a lineage. The model shows that when the individuals interact in a sort of zero-sum game, the lineages maintain relatively high mutation rates. However, when individuals engage in interactions that have greater consequences for one individual in the interaction than the other, lineages tend to evolve relatively low mutation rates. This model suggests that different genes may have evolved different mutation rates as adaptations to the varying pressures of interactions with other genes.' into one of the following categories:
Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Refer to neighbour nodes:
Neighbour node 0: Between-host evolution of mutation-rate and within-host evolution of virulence.: It has been recently realized that parasite virulence (the harm caused by parasites to their hosts) can be an adaptive trait. Selection for a particular level of virulence can happen either at the level of between-host tradeoffs or as a result of short-sighted within-host competition. This paper describes some simulations which study the effect that modifier genes for changes in mutation rate have on suppressing this short-sighted development of virulence, and investigates the interaction between this and a simplified model of immune clearance.
Neighbour node 1: The coevolution of mutation rates. : In order to better understand life, it is helpful to look beyond the envelope of life as we know it. A simple model of coevolution was implemented with the addition of genes for longevity and mutation rate in the individuals. This made it possible for a lineage to evolve to be immortal. It also allowed the evolution of no mutation or extremely high mutation rates. The model shows that when the individuals interact in a sort of zero-sum game, the lineages maintain relatively high mutation rates. However, when individuals engage in interactions that have greater consequences for one individual in the interaction than the other, lineages tend to evolve relatively low mutation rates. This model suggests that different genes may have evolved different mutation rates as adaptations to the varying pressures of interactions with other genes.
Neighbour node 2: Optimal mutation rates in genetic search. : The optimization of a single bit string by means of iterated mutation and selection of the best (a (1+1)-Genetic Algorithm) is discussed with respect to three simple fitness functions: The counting ones problem, a standard binary encoded integer, and a Gray coded integer optimization problem. A mutation rate schedule that is optimal with respect to the success probability of mutation is presented for each of the objective functions, and it turns out that the standard binary code can hamper the search process even in case of unimodal objective functions. While normally a mutation rate of 1/l (where l denotes the bit string length) is recommendable, our results indicate that a variation of the mutation rate is useful in cases where the fitness function is a multimodal pseudo-boolean function, where multimodality may be caused by the objective function as well as the encoding mechanism.
|
Genetic Algorithms
|
cora
|
train
|
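An aside on the rows above: the setup in which the mutation rate is itself a heritable gene can be sketched as a (1+1)-style loop where each offspring carries a perturbed copy of its parent's rate. The sketch below is a loose illustration under assumed details (log-normal rate perturbation, counting-ones fitness, names invented here); it is not the authors' model.

```python
import random

def evolve_with_rate_gene(fitness, length=50, generations=2000):
    """(1+1)-style search where the per-bit mutation rate is heritable
    and mutable, so a lineage can evolve its own rate."""
    genome = [random.randint(0, 1) for _ in range(length)]
    rate = 1.0 / length  # the classic 1/l default discussed above
    best = fitness(genome)
    for _ in range(generations):
        # Perturb the rate gene, then mutate the bit string at the new rate.
        child_rate = min(0.5, max(1e-4, rate * random.lognormvariate(0.0, 0.2)))
        child = [1 - g if random.random() < child_rate else g for g in genome]
        score = fitness(child)
        if score >= best:  # keep the better of parent and offspring
            genome, rate, best = child, child_rate, score
    return genome, rate, best

genome, rate, best = evolve_with_rate_gene(sum)  # counting-ones fitness
```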
Classify the node 'Co-clustering documents and words using Bipartite Spectral Graph Partitioning Both document clustering and word clustering are important and well-studied problems. By using the vector space model, a document collection may be represented as a word-document matrix. In this paper, we present the novel idea of modeling the document collection as a bipartite graph between documents and words. Using this model, we pose the clustering problem as a graph partitioning problem and give a new spectral algorithm that simultaneously yields a clustering of documents and words. This co-clustering algorithm uses the second left and right singular vectors of an appropriately scaled word-document matrix to yield good bipartitionings. In fact, it can be shown that these singular vectors give a real relaxation to the optimal solution of the graph bipartitioning problem. We present several experimental results to verify that the resulting co-clustering algorithm works well in practice and is robust in the presence of noise.' into one of the following categories:
Agents; ML (Machine Learning); IR (Information Retrieval); DB (Databases); HCI (Human-Computer Interaction); AI (Artificial Intelligence).
Refer to neighbour nodes:
Neighbour node 0: Concept Decompositions for Large Sparse Text Data using Clustering Abstract. Unlabeled document collections are becoming increasingly common and available; mining such data sets represents a major contemporary challenge. Using words as features, text documents are often represented as high-dimensional and sparse vectors–a few thousand dimensions and a sparsity of 95 to 99 % is typical. In this paper, we study a certain spherical k-means algorithm for clustering such document vectors. The algorithm outputs k disjoint clusters each with a concept vector that is the centroid of the cluster normalized to have unit Euclidean norm. As our first contribution, we empirically demonstrate that, owing to the high-dimensionality and sparsity of the text data, the clusters produced by the algorithm have a certain “fractal-like ” and “self-similar ” behavior. As our second contribution, we introduce concept decompositions to approximate the matrix of document vectors; these decompositions are obtained by taking the least-squares approximation onto the linear subspace spanned by all the concept vectors. We empirically establish that the approximation errors of the concept decompositions are close to the best possible, namely, to truncated singular value decompositions. As our third contribution, we show that the concept vectors are localized in the word space, are sparse, and tend towards orthonormality. In contrast, the singular vectors are global in the word space and are dense. Nonetheless, we observe the surprising fact that the linear subspaces spanned by the concept vectors and the leading singular vectors are quite close in the sense of small principal angles between them. In conclusion, the concept vectors produced by the spherical k-means
Neighbour node 1: Web Document Clustering: A Feasibility Demonstration Abstract Users of Web search engines are often forced to sift through the long ordered list of document “snippets” returned by the engines. The IR community has explored document clustering as an alternative method of organizing retrieval results, but clustering has yet to be deployed on the major search engines. The paper articulates the unique requirements of Web document clustering and reports on the first evaluation of clustering methods in this domain. A key requirement is that the methods create their clusters based on the short snippets returned by Web search engines. Surprisingly, we find that clusters based on snippets are almost as good as clusters created using the full text of Web documents. To satisfy the stringent requirements of the Web domain, we introduce an incremental, linear time (in the document collection size) algorithm called Suffix Tree Clustering (STC), which creates clusters based on phrases shared between documents. We show that STC is faster than standard clustering methods in this domain, and argue that Web document clustering via STC is both feasible and potentially beneficial. 1
Neighbour node 2: Document Categorization and Query Generation on the World Wide Web Using WebACE We present WebACE, an agent for exploring and categorizing documents on the World Wide Web based on a user profile. The heart of the agent is an unsupervised categorization of a set of documents, combined with a process for generating new queries that is used to search for new related documents and for filtering the resulting documents to extract the ones most closely related to the starting set. The document categories are not given a priori. We present the overall architecture and describe two novel algorithms which provide significant improvement over Hierarchical Agglomeration Clustering and AutoClass algorithms and form the basis for the query generation and search component of the agent. We report on the results of our experiments comparing these new algorithms with more traditional clustering algorithms and we show that our algorithms are fast and scalable. y Authors are listed alphabetically. 1 Introduction The World Wide Web is a vast resource of information and services t...
Neighbour node 3: Criterion Functions for Document Clustering: Experiments and Analysis In recent years, we have witnessed a tremendous growth in the volume of text documents available on the Internet, digital libraries, news sources, and company-wide intranets. This has led to an increased interest in developing methods that can help users to effectively navigate, summarize, and organize this information with the ultimate goal of helping them to find what they are looking for. Fast and high-quality document clustering algorithms play an important role towards this goal as they have been shown to provide both an intuitive navigation/browsing mechanism by organizing large amounts of information into a small number of meaningful clusters as well as to greatly improve the retrieval performance either via cluster-driven dimensionality reduction, term-weighting, or query expansion. This ever-increasing importance of document clustering and the expanded range of its applications led to the development of a number of new and novel algorithms with different complexity-quality trade-offs. Among them, a class of clustering algorithms that have relatively low computational requirements are those that treat the clustering problem as an optimization process which seeks to maximize or minimize a particular clustering criterion function defined over the entire clustering solution.
Neighbour node 4: Authoritative Sources in a Hyperlinked Environment The link structure of a hypermedia environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. Versions of this principle have been studied in the hypertext research community and (in a context predating hypermedia) through journal citation analysis in the field of bibliometrics. But for the problem of searching in hyperlinked environments such as the World Wide Web, it is clear from the prevalent techniques that the information inherent in the links has yet to be fully exploited. In this work we develop a new method for automatically extracting certain types of information about a hypermedia environment from its link structure, and we report on experiments that demonstrate its effectiveness for a variety of search problems on the www. The central problem we consider is that of determining the relative "authority" of pages in such environments. This issue is central to a number of basic hypertext search t...
Neighbour node 5: Document Categorization and Query Generation on the World Wide Web Using WebACE We present WebACE, an agent for exploring and categorizing documents on the World Wide Web based on a user profile. The heart of the agent is an unsupervised categorization of a set of documents, combined with a process for generating new queries that is used to search for new related documents and for filtering the resulting documents to extract the ones most closely related to the starting set. The document categories are not given a priori. We present the overall architecture and describe two novel algorithms which provide significant improvement over Hierarchical Agglomeration Clustering and AutoClass algorithms and form the basis for the query generation and search component of the agent. We report on the results of our experiments comparing these new algorithms with more traditional clustering algorithms and we show that our algorithms are fast and scalable. y Authors are listed alphabetically. 1 Introduction The World Wide Web is a vast resource of information and services t...
|
IR (Information Retrieval)
|
citeseer
|
train
|
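An aside on the row above: the co-clustering abstract describes a concrete pipeline (scale the word-document matrix, take the second left and right singular vectors, bipartition words and documents in the shared embedding). A compact numpy sketch of that idea follows, under the assumptions noted in the comments; the final thresholding step here is a simple stand-in, not necessarily the paper's exact procedure.

```python
import numpy as np

def spectral_copartition(A):
    """Bipartition words (rows) and documents (columns) together.

    Assumes A is a nonnegative word-by-document matrix with no empty
    rows or columns.
    """
    d1 = np.sqrt(A.sum(axis=1))  # square roots of word degrees
    d2 = np.sqrt(A.sum(axis=0))  # square roots of document degrees
    An = A / np.outer(d1, d2)    # scaled matrix D1^{-1/2} A D2^{-1/2}
    U, _, Vt = np.linalg.svd(An, full_matrices=False)
    # The second left/right singular vectors give a real relaxation of
    # the optimal graph bipartitioning; embed both node sets on one line.
    z = np.concatenate([U[:, 1] / d1, Vt[1, :] / d2])
    labels = (z >= np.median(z)).astype(int)  # simple split; k-means also works
    return labels[: A.shape[0]], labels[A.shape[0]:]

A = np.array([[3, 2, 0, 0], [2, 3, 0, 1], [0, 0, 3, 2], [0, 1, 2, 3]], float)
word_labels, doc_labels = spectral_copartition(A)
```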
Classify the node 'The Bivariate Marginal Distribution Algorithm The paper deals with the Bivariate Marginal Distribution Algorithm (BMDA). BMDA is an extension of the Univariate Marginal Distribution Algorithm (UMDA). It uses the pair gene dependencies in order to improve algorithms that use simple univariate marginal distributions. BMDA is a special case of the Factorization Distribution Algorithm, but without any problem specific knowledge in the initial stage. The dependencies are being discovered during the optimization process itself. In this paper BMDA is described in detail. BMDA is compared to different algorithms including the simple genetic algorithm with different crossover methods and UMDA. For some fitness functions the relation between problem size and the number of fitness evaluations until convergence is shown. 1. Introduction Genetic algorithms work with populations of strings of fixed length. In this paper binary strings will be considered. From the current population better strings are selected at the expense of worse ones. New strings ar...' into one of the following categories:
Agents; ML (Machine Learning); IR (Information Retrieval); DB (Databases); HCI (Human-Computer Interaction); AI (Artificial Intelligence).
Refer to neighbour nodes:
Neighbour node 0: Identifying Linkage Groups by Nonlinearity/Non-monotonicity Detection This paper presents and discusses direct linkage identification procedures based on nonlinearity/non-monotonicity detection. The algorithm we propose checks for arbitrary nonlinearity/non-monotonicity of fitness change under perturbations of a pair of loci to detect their linkage. We first discuss the condition of the linkage identification by nonlinearity check (LINC) procedure (Munetomo & Goldberg, 1998) and its allowable nonlinearity. Then we propose another condition, linkage identification by non-monotonicity detection (LIMD), and prove its equality to the LINC with allowable nonlinearity (LINC-AN). The procedures can identify linkage groups for problems with at most order-k difficulty by checking O(2^k) strings, and the computational cost for each string is O(l^2), where l is the string length. 1 Introduction The definition of linkage in genetics is 'the tendency for alleles of different genes to be passed together from one generation to the next' (Winter, Hickey...
Neighbour node 1: Feature Subset Selection by Bayesian networks based optimization In this paper we perform a comparison among FSS-EBNA, a randomized, population-based evolutionary algorithm, and two genetic and two other sequential search approaches on the well-known Feature Subset Selection (FSS) problem. In FSS-EBNA, the FSS problem, stated as a search problem, uses the EBNA (Estimation of Bayesian Network Algorithm) search engine, an algorithm within the EDA (Estimation of Distribution Algorithm) approach. The EDA paradigm was born from the roots of the GA community in order to explicitly discover the relationships among the features of the problem and not disrupt them by genetic recombination operators. The EDA paradigm avoids the use of recombination operators, and it guarantees the evolution of the population of solutions and the discovery of these relationships by the factorization of the probability distribution of the best individuals in each generation of the search. In EBNA, this factorization is carried out by a Bayesian network induced by a chea...
Neighbour node 2: BOA: The Bayesian Optimization Algorithm In this paper, an algorithm based on the concepts of genetic algorithms that uses an estimation of a probability distribution of promising solutions in order to generate new candidate solutions is proposed. To estimate the distribution, techniques for modeling multivariate data by Bayesian networks are used. The proposed algorithm identifies, reproduces and mixes building blocks up to a specified order. It is independent of the ordering of the variables in the strings representing the solutions. Moreover, prior information about the problem can be incorporated into the algorithm; however, it is not essential. Preliminary experiments show that the BOA outperforms the simple genetic algorithm even on decomposable functions with tight building blocks as the problem size grows. 1 INTRODUCTION Recently, there has been a growing interest in optimization methods that explicitly model the good solutions found so far and use the constructed model to guide the fu...
|
ML (Machine Learning)
|
citeseer
|
train
|
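The BMDA problem in the row above is easier to follow with the univariate baseline it extends in hand: UMDA selects the better strings, estimates a per-bit marginal distribution from them, and samples the next population from those marginals; BMDA additionally estimates pairwise gene dependencies. Below is a minimal sketch of one UMDA generation: truncation selection, the OneMax fitness, and all names are illustrative assumptions, and BMDA's pairwise-dependency machinery is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def umda_step(population, fitness, n_select, n_offspring):
    """One generation of the Univariate Marginal Distribution Algorithm.

    population: binary matrix, one string per row. BMDA, as described
    above, extends this scheme with bivariate gene dependencies; here
    only univariate marginals are kept, for brevity.
    """
    scores = np.array([fitness(ind) for ind in population])
    best = population[np.argsort(scores)[-n_select:]]  # truncation selection
    marginals = best.mean(axis=0)                      # P(bit_i = 1) from the elite
    # Sample each bit of each offspring independently from its marginal.
    return (rng.random((n_offspring, population.shape[1]))
            < marginals).astype(int)

# OneMax: fitness is simply the number of ones in the string.
pop = rng.integers(0, 2, size=(50, 20))
for _ in range(30):
    pop = umda_step(pop, fitness=lambda s: s.sum(), n_select=15, n_offspring=50)
print(pop.mean())  # approaches 1.0 as the marginals converge toward all-ones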
Classify the node ' Issues in goal-driven explanation. : When a reasoner explains surprising events for its internal use, a key motivation for explaining is to perform learning that will facilitate the achievement of its goals. Human explainers use a range of strategies to build explanations, including both internal reasoning and external information search, and goal-based considerations have a profound effect on their choices of when and how to pursue explanations. However, standard AI models of explanation rely on goal-neutral use of a single fixed strategy, generally backwards chaining, to build their explanations. This paper argues that explanation should be modeled as a goal-driven learning process for gathering and transforming information, and discusses the issues involved in developing an active multi-strategy process for goal-driven explanation.' into one of the following categories:
Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Refer to neighbour nodes:
Neighbour node 0: An architecture for goal-driven explanation. : In complex and changing environments, explanation must be a dynamic and goal-driven process. This paper discusses an evolving system implementing a novel model of explanation generation, Goal-Driven Interactive Explanation, which models explanation as a goal-driven, multi-strategy, situated process interweaving reasoning with action. We describe a preliminary implementation of this model in gobie, a system that generates explanations for its internal use to support plan generation and execution.
Neighbour node 1: Inferential Theory of Learning: Developing Foundations for Multistrategy Learning, in Machine Learning: A Multistrategy Approach, Vol. IV, R.S. : The development of multistrategy learning systems should be based on a clear understanding of the roles and the applicability conditions of different learning strategies. To this end, this chapter introduces the Inferential Theory of Learning that provides a conceptual framework for explaining logical capabilities of learning strategies, i.e., their competence. Viewing learning as a process of modifying the learner's knowledge by exploring the learner's experience, the theory postulates that any such process can be described as a search in a knowledge space, triggered by the learner's experience and guided by learning goals. The search operators are instantiations of knowledge transmutations, which are generic patterns of knowledge change. Transmutations may employ any basic type of inference: deduction, induction, or analogy. Several fundamental knowledge transmutations are described in a novel and general way, such as generalization, abstraction, explanation and similization, and their counterparts, specialization, concretion, prediction and dissimilization, respectively. Generalization enlarges the reference set of a description (the set of entities that are being described). Abstraction reduces the amount of detail about the reference set. Explanation generates premises that explain (or imply) the given properties of the reference set. Similization transfers knowledge from one reference set to a similar reference set. Using concepts of the theory, a multistrategy task-adaptive learning (MTL) methodology is outlined, and illustrated by an example. MTL dynamically adapts strategies to the learning task, defined by the input information, the learner's background knowledge, and the learning goal. It aims at synergistically integrating a whole range of inferential learning strategies, such as empirical generalization, constructive induction, deductive generalization, explanation, prediction, abstraction, and similization.
Neighbour node 2: Goal-Driven Learning. : In Artificial Intelligence, Psychology, and Education, a growing body of research supports the view that learning is a goal-directed process. Psychological experiments show that people with different goals process information differently; studies in education show that goals have strong effects on what students learn; and functional arguments from machine learning support the necessity of goal-based focusing of learner effort. At the Fourteenth Annual Conference of the Cognitive Science Society, a symposium brought together researchers in AI, psychology, and education to discuss goal-driven learning. This article presents the fundamental points illuminated by the symposium, placing them in the context of open questions and current research directions in goal-driven learning. Technical Report #85, Cognitive Science Program, Indiana University, Bloomington, Indiana, January 1993.
Neighbour node 3: Abduction, experience, and goals: A model of everyday abductive explanation. :
Neighbour node 4: Introspective Reasoning using Meta-Explanations for Multistrategy Learning. : In order to learn effectively, a reasoner must not only possess knowledge about the world and be able to improve that knowledge, but it also must introspectively reason about how it performs a given task and what particular pieces of knowledge it needs to improve its performance at the current task. Introspection requires declarative representations of meta-knowledge of the reasoning performed by the system during the performance task, of the system's knowledge, and of the organization of this knowledge. This chapter presents a taxonomy of possible reasoning failures that can occur during a performance task, declarative representations of these failures, and associations between failures and particular learning strategies. The theory is based on Meta-XPs, which are explanation structures that help the system identify failure types, formulate learning goals, and choose appropriate learning strategies in order to avoid similar mistakes in the future. The theory is implemented in a computer model of an introspective reasoner that performs multistrategy learning during a story understanding task.
|
Case Based
|
cora
|
train
|
Classify the node 'Cyclic Association Rules We study the problem of discovering association rules that display regular cyclic variation over time. For example, if we compute association rules over monthly sales data, we may observe seasonal variation where certain rules are true at approximately the same month each year. Similarly, association rules can also display regular hourly, daily, weekly, etc., variation that is cyclical in nature. We demonstrate that existing methods cannot be naively extended to solve this problem of cyclic association rules. We then present two new algorithms for discovering such rules. The first one, which we call the sequential algorithm, treats association rules and cycles more or less independently. By studying the interaction between association rules and time, we devise a new technique called cycle pruning, which reduces the amount of time needed to find cyclic association rules. The second algorithm, which we call the interleaved algorithm, uses cycle pruning and other optimization techniques f...' into one of the following categories:
Agents; ML (Machine Learning); IR (Information Retrieval); DB (Databases); HCI (Human-Computer Interaction); AI (Artificial Intelligence).
Refer to neighbour nodes:
Neighbour node 0: Mining Optimized Support Rules for Numeric Attributes Mining association rules on large data sets has received considerable attention in recent years. Association rules are useful for determining correlations between attributes of a relation and have applications in marketing, financial and retail sectors. Furthermore, optimized association rules are an effective way to focus on the most interesting characteristics involving certain attributes. Optimized association rules are permitted to contain uninstantiated attributes and the problem is to determine instantiations such that either the support, confidence or gain of the rule is maximized. In this paper, we generalize the optimized support association rule problem by permitting rules to contain disjunctions over uninstantiated numeric attributes. Our generalized association rules enable us to extract more useful information about seasonal and local patterns involving the uninstantiated attribute. For rules containing a single numeric attribute, we present a dynamic programming algorith...
Neighbour node 1: On-Line Analytical Mining of Association Rules With wide applications of computers and automated data collection tools, massive amounts of data have been continuously collected and stored in databases, which creates an imminent need and great opportunities for mining interesting knowledge from data. Association rule mining is one kind of data mining technique which discovers strong association or correlation relationships among data. The discovered rules may help market basket or cross-sales analysis, decision making, and business management. In this thesis, we propose and develop an interesting association rule mining approach, called on-line analytical mining of association rules, which integrates the recently developed OLAP (on-line analytical processing) technology with some efficient association mining methods. It leads to flexible, multi-dimensional, multi-level association rule mining with high performance. Several algorithms are developed based on this approach for mining various kinds of associations in multi-dimensional ...
Neighbour node 2: Efficient Mining of Partial Periodic Patterns in Time Series Database Partial periodicity search, i.e., search for partial periodic patterns in time-series databases, is an interesting data mining problem. Previous studies on periodicity search mainly consider finding full periodic patterns, where every point in time contributes (precisely or approximately) to the periodicity. However, partial periodicity is very common in practice since it is more likely that only some of the time episodes may exhibit periodic patterns. We present several algorithms for efficient mining of partial periodic patterns, by exploring some interesting properties related to partial periodicity, such as the Apriori property and the max-subpattern hit set property, and by shared mining of multiple periods. The max-subpattern hit set property is a vital new property which allows us to derive the counts of all frequent patterns from a relatively small subset of patterns existing in the time series. We show that mining partial periodicity needs only two scans over the time series database, even for mining multiple periods. The performance study shows our proposed methods are very efficient in mining long periodic patterns.
|
DB (Databases)
|
citeseer
|
train
|
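The cyclic-rule notion in the row above can be made concrete: a rule has a cycle (l, o) if it holds in every time unit t with t mod l = o. Assuming a precomputed boolean sequence recording whether the rule met its support and confidence thresholds in each unit (a hypothetical input), a naive enumeration looks like the sketch below; the paper's cycle-pruning and interleaved techniques aim to cut down exactly this per-rule work.

```python
def find_cycles(holds, max_length):
    """Enumerate (length, offset) cycles in a rule's validity sequence.

    holds[t] is True iff the association rule met its support and
    confidence thresholds in time unit t. A cycle (l, o) requires the
    rule to hold in *every* unit t with t % l == o. This is a naive
    sketch of the underlying idea, not the paper's optimized algorithms.
    """
    cycles = []
    for length in range(1, max_length + 1):
        for offset in range(length):
            if all(holds[t] for t in range(offset, len(holds), length)):
                cycles.append((length, offset))
    return cycles

# A rule that holds every third time unit, starting at unit 1.
holds = [False, True, False, False, True, False, False, True, False]
print(find_cycles(holds, max_length=4))  # -> [(3, 1)]
```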
Classify the node 'Title: Reversal of diabetes in BB rats by transplantation of encapsulated pancreatic islets.
Abstract: Prolonged survival of pancreatic islet allografts implanted in diabetic BB rats was achieved by encapsulation of individual islets in a protective biocompatible alginate-polylysine-alginate membrane without immunosuppression. Intraperitoneal transplantation of the encapsulated islets reversed the diabetic state of the recipients within 3 days and maintained normoglycemia for 190 days. Normal body weight and urine volume were maintained during this period, and no cataracts were detected in the transplant recipients. In contrast, control rats receiving transplants of unencapsulated islets experienced normoglycemia for less than 2 wk. These results demonstrated that microencapsulation can protect allografted islets from both graft rejection and autoimmune destruction without immunosuppression in an animal model that mimics human insulin-dependent diabetes.' into one of the following categories:
Diabetes Mellitus, Experimental; Diabetes Mellitus Type 1; Diabetes Mellitus Type 2.
Refer to neighbour nodes:
Neighbour node 0: Title: Normalization of diabetes in spontaneously diabetic cynomolgus monkeys by xenografts of microencapsulated porcine islets without immunosuppression.
Abstract: Porcine pancreatic islets were microencapsulated in alginate-polylysine-alginate capsules and transplanted intraperitoneally into nine spontaneously diabetic monkeys. After one, two, or three transplants of 3-7 x 10^4 islets per recipient, seven of the monkeys became insulin independent for periods ranging from 120 to 804 d with fasting blood glucose levels in the normoglycemic range. Glucose clearance rates in the transplant recipients were significantly higher than before the graft administration and the insulin secretion during glucose tolerance tests was significantly higher compared with pretransplant tests. Porcine C-peptide was detected in all transplant recipients throughout their period of normoglycemia while none was found before the graft administration. Hemoglobin A1C levels dropped significantly within 2 mo after transplantation. While ketones were detected in the urine of all recipients before the graft administration, all experimental animals became ketone free 2 wk after transplantation. Capsules recovered from two recipients 3 mo after the restoration of normoglycemia were found physically intact with enclosed islets clearly visible. The capsules were free of cellular overgrowth. Examination of internal organs of two of the animals involved in our transplantation studies for the duration of 2 yr revealed no untoward effect of the extended presence of the microcapsules.
|
Diabetes Mellitus, Experimental
|
pubmed
|
train
|
Classify the node ' A Case-based Approach to Reactive Control for Autonomous Robots. : We propose a case-based method of selecting behavior sets as an addition to traditional reactive robotic control systems. The new system (ACBARR, A Case BAsed Reactive Robotic system) provides more flexible performance in novel environments, as well as overcoming a standard "hard" problem for reactive systems, the box canyon. Additionally, ACBARR is designed in a manner which is intended to remain as close to pure reactive control as possible. Higher level reasoning and memory functions are intentionally kept to a minimum. As a result, the new reasoning does not significantly slow the system down from pure reactive speeds.' into one of the following categories:
Rule Learning; Neural Networks; Case Based; Genetic Algorithms; Theory; Reinforcement Learning; Probabilistic Methods.
Refer to neighbour nodes:
Neighbour node 0: "Multistrategy Learning in Reactive Control Systems for Autonomous Robotic Navigation," : This paper presents a self-improving reactive control system for autonomous robotic navigation. The navigation module uses a schema-based reactive control system to perform the navigation task. The learning module combines case-based reasoning and reinforcement learning to continuously tune the navigation system through experience. The case-based reasoning component perceives and characterizes the system's environment, retrieves an appropriate case, and uses the recommendations of the case to tune the parameters of the reactive control system. The reinforcement learning component refines the content of the cases based on the current experience. Together, the learning components perform on-line adaptation, resulting in improved performance as the reactive control system tunes itself to the environment, as well as on-line learning, resulting in an improved library of cases that capture environmental regularities necessary to perform on-line adaptation. The system is extensively evaluated through simulation studies using several performance metrics and system configurations.
Neighbour node 1: Using Case-Based Reasoning for Mobile Robot Navigation: This paper presents an approach to mobile robot path planning using case-based reasoning together with map-based path planning. The map-based path planner is used to seed the case-base with innovative solutions. The case-base stores the paths and information about their traversability. When planning a route, those paths are preferred that, according to former experience, are least risky.
|
Case Based
|
cora
|
train
|
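The ACBARR description in the row above turns on one mechanism: matching the currently perceived environment against stored cases and adopting the retrieved case's behavior set. A minimal nearest-neighbour retrieval sketch follows; the two-number environment signature, the schema gain values, and every name are invented for illustration, and the real system's progress monitoring and case switching are omitted.

```python
import math

# Hypothetical case library: each case pairs an environment signature
# (e.g., obstacle density, goal bearing) with a set of behavior gains
# for the reactive schemas. All values are illustrative only.
CASES = [
    {"features": (0.1, 0.0), "behavior": {"goal_gain": 1.0, "avoid_gain": 0.3}},
    {"features": (0.8, 0.5), "behavior": {"goal_gain": 0.4, "avoid_gain": 1.2}},  # cluttered / box canyon
]

def select_behavior(features):
    """Return the behavior set of the case nearest the current environment.

    A sketch of the case-based layer described above: pure nearest-
    neighbour retrieval over the environment signature.
    """
    def distance(case):
        return math.dist(case["features"], features)
    return min(CASES, key=distance)["behavior"]

print(select_behavior((0.7, 0.6)))  # retrieves the cluttered-environment gains
```

Keeping retrieval this cheap is in the spirit of the abstract's claim that the case-based layer should not slow the system down from pure reactive speeds.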