Dataset columns (with observed value ranges):
id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
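The columns above describe each record in the rows that follow. As a minimal illustration of how the schema can be used, the sketch below (plain Python; the two records are abridged from rows in this dump, and the filtering criteria are arbitrary) selects sources by category and token count.

# Minimal sketch of filtering records that follow the schema above.
# The two records are abridged from rows in this dump; criteria are arbitrary.
records = [
    {"id": 15825071, "source": "DIO2", "categories": ["Chemistry"],
     "token_count": 280, "subcategories": ["Biochemistry stubs", "Protein stubs"]},
    {"id": 15831300, "source": "Stable theory", "categories": ["Mathematics"],
     "token_count": 3075, "subcategories": ["Mathematical logic", "Model theory"]},
]

long_math = [r["source"] for r in records
             if "Mathematics" in r["categories"] and r["token_count"] > 1000]
print(long_math)  # ['Stable theory']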
15,825,071
https://en.wikipedia.org/wiki/DIO2
Type II iodothyronine deiodinase (iodothyronine 5'-deiodinase, iodothyronine 5'-monodeiodinase) is an enzyme that in humans is encoded by the DIO2 gene. Function The protein encoded by this gene belongs to the iodothyronine deiodinase family. It activates thyroid hormone by converting the prohormone thyroxine (T4), via outer ring deiodination (ORD), to the bioactive 3,3',5-triiodothyronine (T3). It is highly expressed in the thyroid, and may contribute significantly to the relative increase in thyroidal T3 production in patients with Graves' disease and thyroid adenomas. This protein contains selenocysteine (Sec) residues encoded by the UGA codon, which normally signals translation termination. The 3' UTRs of Sec-containing genes have a common stem-loop structure, the Sec insertion sequence (SECIS), which is necessary for the recognition of UGA as a Sec codon rather than as a stop signal. Alternative splicing results in multiple transcript variants encoding different isoforms. Interactions DIO2 has been shown to interact with USP33. See also Deiodinase References Further reading Selenoproteins
DIO2
[ "Chemistry" ]
280
[ "Biochemistry stubs", "Protein stubs" ]
15,826,481
https://en.wikipedia.org/wiki/Unsaid
The term "unsaid" refers what is not explicitly stated, what is hidden and/or implied in the speech of an individual or a group of people. The unsaid may be the product of intimidation; of a mulling over of thought; or of bafflement in the face of the inexpressible. Linguistics Sociolinguistics points out that in normal communication what is left unsaid is as important as what is actually said—that we expect our auditors regularly to fill in the social context/norms of our conversations as we proceed. Basil Bernstein saw one difference between the restricted code and the elaborated code of speech is that more would be left implicit in the former than the latter. Ethnology In ethnology, ethnomethodology established a strong link between unsaid and axiomatic. Harold Garfinkel, following Durkheim, stressed that in any given situation, even a legally binding contract, the terms of agreement rest upon the 90% of unspoken assumptions that underlie the visible (spoken) tip of the interactive iceberg. Edward T. Hall argued that much cross-cultural miscommunication stemmed from neglect of the silent, unspoken, but differing cultural patterns that each participant unconsciously took for granted. Psychoanalysis Luce Irigaray has emphasised the importance of listening to the unsaid dimension of discourse in psychoanalytic practice—something which may shed light on the unconscious phantasies of the person being analysed. Other psychotherapies have also emphasised the importance of the non-verbal component of the patient's communication, sometimes privileging this over the verbal content. Behind all such thinking stands Freud's dictum: "no mortal can keep a secret. If his lips are silent, he chatters with his fingertips...at every pore". Cultural examples Sherlock Holmes is said to have owed his success to his attention to the unsaid in his client's communications. In Small World, the heroine cheekily excuses her lack of note-taking to a Sorbonne professor by saying: "it is not what you say that impresses me most, it is what you are silent about: ideas, morality, love, death, things...Vos silences profonds". See also References Further reading External links Human communication Nonverbal communication Sociolinguistics Ethnology Psychotherapy
Unsaid
[ "Biology" ]
493
[ "Human communication", "Behavior", "Human behavior" ]
15,826,631
https://en.wikipedia.org/wiki/Black%20Warrior%20Basin
The Black Warrior Basin is a geologic sedimentary basin of western Alabama and northern Mississippi in the United States. It is named for the Black Warrior River and is developed for coal and coalbed methane production, as well as for conventional oil and natural gas production. Coalbed methane of the Black Warrior Basin has been developed and in production longer than in any other location in the United States. The coalbed methane is produced from the Pennsylvanian Pottsville Coal Interval. The Black Warrior basin was a foreland basin during the Ouachita Orogeny during the Pennsylvanian and Permian Periods. The basin also received sediments from the Appalachian orogeny during the Pennsylvanian. The western margin of the basin lies beneath the sediments of the Mississippi embayment where it is contiguous with the Arkoma Basin of northern Arkansas and northeastern Oklahoma. The region existed as a quiescent continental shelf environment through the early Paleozoic from the Cambrian through the Mississippian with the deposition of shelf sandstones, shale, limestone, dolomite and chert. References Further reading Hatch J.R. and M.J. Pawlewicz. (2007). Geologic assessment of undiscovered oil and gas resources of the Black Warrior Basin Province, Alabama and Mississippi [Digital Data Series 069-I]. Reston, VA: U.S. Department of the Interior, U.S. Geological Survey. External links Geological Survey of Alabama; Alabama State Oil and Gas Board Pashin, J.C. (2005). Pottsville Stratigraphy and the Union Chapel Lagerstatte. (PDF) Pennsylvanian Footprints in the Black Warrior Basin of Alabama, Alabama Paleontological Society Monograph no.1. Buta, R. J., Rindsberg, A. K., and Kopaska-Merkel, D. C., eds. Internet Map Application for the Black Warrior Basin Province, USGS Energy Resources Program, Map Service for the Black Warrior Basin Province, 2002 National Assessment of Oil and Gas Sedimentary basins of North America Coal mining regions in the United States Coal mining in Appalachia Geology of Alabama Geology of Mississippi Geologic provinces of the United States Methane Mining in Alabama Mining in Mississippi
Black Warrior Basin
[ "Chemistry" ]
445
[ "Greenhouse gases", "Methane" ]
15,826,762
https://en.wikipedia.org/wiki/Cahaba%20Basin
The Cahaba Basin is a geologic area of central Alabama developed for coal and coalbed methane (CBM) production. Centered in eastern Bibb and southwestern Shelby Counties, the basin is significantly smaller in area and production than the larger Black Warrior Basin in Tuscaloosa and western Jefferson Counties to the northwest. The coalbed methane is produced from the Gurnee Field of the Pottsville Coal Interval. Coalbed gas production has been continuous since at least 1990 and annual gas production has increased from 344,875 Mcf in 1990 to 3,154,554 Mcf through October 2007. Geology The Cahaba Basin is located across an anticline from the neighboring Black Warrior Basin. Within the Cahaba Basin, the Pennsylvanian age coal beds have an average bed thickness of . The developed formations are known as the Gurnee Field of the Pottsville Formation. Development The coal resources of the Cahaba Basin have been developed for over a century and contributed to the Birmingham area's rise as an iron and steel production center. Numerous small coal mines continue to operate in the basin. Several CBM developers operate within the Cahaba Basin with GeoMet, Inc. and CDX Gas being two of the largest. The field has been developed for CBM since the 1980s. GeoMet, Inc. and CDX both operate pipelines which join the SONAT Bessemer Calera Pipeline and Enbridge Pipeline respectively. GeoMet, Inc. operates a discharge water pipeline to the Black Warrior River. References External links Geological Survey of Alabama; Alabama State Oil and Gas Board Coalbed Methane Association of Alabama; non-profit trade association CDX Gas – a significant Cahaba Basin CBM developer GeoMet, Inc. - a significant Cahaba Basin CBM developer Geography of Bibb County, Alabama Geography of Shelby County, Alabama Methane Coal mining regions in the United States Mining in Alabama
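To put the production figures above in perspective, a quick calculation (values taken directly from the paragraph above) gives the overall growth factor:

# Coalbed methane production growth in the Cahaba Basin, per the figures above.
production_1990_mcf = 344_875
production_2007_mcf = 3_154_554  # through October 2007

print(f"growth factor: {production_2007_mcf / production_1990_mcf:.1f}x")  # about 9.1x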
Cahaba Basin
[ "Chemistry" ]
388
[ "Greenhouse gases", "Methane" ]
15,828,681
https://en.wikipedia.org/wiki/Dimroth%20rearrangement
The Dimroth rearrangement is a rearrangement reaction taking place with certain 1,2,3-triazoles, in which endocyclic and exocyclic nitrogen atoms switch places. This organic reaction was discovered in 1909 by Otto Dimroth. With R a phenyl group, the reaction takes place in boiling pyridine over 24 hours. This type of triazole has an amino group in the 5 position. After ring-opening to a diazo intermediate, C-C bond rotation is possible with 1,3-migration of a proton. Certain 1-alkyl-2-iminopyrimidines also display this type of rearrangement. The first step is an addition reaction of water, followed by ring-opening of the hemiaminal to the aminoaldehyde and then ring closure. A known drug-related example of the Dimroth rearrangement occurs in the synthesis of Bemitradine [88133-11-3]. References Rearrangement reactions Name reactions
Dimroth rearrangement
[ "Chemistry" ]
214
[ "Name reactions", "Rearrangement reactions", "Organic reactions" ]
15,828,771
https://en.wikipedia.org/wiki/Stable%20theory
In the mathematical field of model theory, a theory is called stable if it satisfies certain combinatorial restrictions on its complexity. Stable theories are rooted in the proof of Morley's categoricity theorem and were extensively studied as part of Saharon Shelah's classification theory, which showed a dichotomy that either the models of a theory admit a nice classification or the models are too numerous to have any hope of a reasonable classification. A first step of this program was showing that if a theory is not stable then its models are too numerous to classify. Stable theories were the predominant subject of pure model theory from the 1970s through the 1990s, so their study shaped modern model theory and there is a rich framework and set of tools to analyze them. A major direction in model theory is "neostability theory," which tries to generalize the concepts of stability theory to broader contexts, such as simple and NIP theories. Motivation and history A common goal in model theory is to study a first-order theory by analyzing the complexity of the Boolean algebras of (parameter) definable sets in its models. One can equivalently analyze the complexity of the Stone duals of these Boolean algebras, which are type spaces. Stability restricts the complexity of these type spaces by restricting their cardinalities. Since types represent the possible behaviors of elements in a theory's models, restricting the number of types restricts the complexity of these models. Stability theory has its roots in Michael Morley's 1965 proof of Łoś's conjecture on categorical theories. In this proof, the key notion was that of a totally transcendental theory, defined by restricting the topological complexity of the type spaces. However, Morley showed that (for countable theories) this topological restriction is equivalent to a cardinality restriction, a strong form of stability now called ω-stability, and he made significant use of this equivalence. In the course of generalizing Morley's categoricity theorem to uncountable theories, Frederick Rowbottom generalized ω-stability by introducing κ-stable theories for some cardinal κ, and finally Shelah introduced stable theories. Stability theory was much further developed in the course of Shelah's classification theory program. The main goal of this program was to show a dichotomy that either the models of a first-order theory can be nicely classified up to isomorphism using a tree of cardinal-invariants (generalizing, for example, the classification of vector spaces over a fixed field by their dimension), or are so complicated that no reasonable classification is possible. Among the concrete results from this classification theory were theorems on the possible spectrum functions of a theory, counting the number of models of cardinality λ as a function of λ. Shelah's approach was to identify a series of "dividing lines" for theories. A dividing line is a property of a theory such that both it and its negation have strong structural consequences; one should imply the models of the theory are chaotic, while the other should yield a positive structure theory. Stability was the first such dividing line in the classification theory program, and since its failure was shown to rule out any reasonable classification, all further work could assume the theory to be stable. Thus much of classification theory was concerned with analyzing stable theories and various subsets of stable theories given by further dividing lines, such as superstable theories.
One of the key features of stable theories developed by Shelah is that they admit a general notion of independence called non-forking independence, generalizing linear independence from vector spaces and algebraic independence from field theory. Although non-forking independence makes sense in arbitrary theories, and remains a key tool beyond stable theories, it has particularly good geometric and combinatorial properties in stable theories. As with linear independence, this allows the definition of independent sets and of local dimensions as the cardinalities of maximal instances of these independent sets, which are well-defined under additional hypotheses. These local dimensions then give rise to the cardinal-invariants classifying models up to isomorphism. Definition and alternate characterizations Let T be a complete first-order theory. For a given infinite cardinal κ, T is κ-stable if for every set A of cardinality κ in a model of T, the set S(A) of complete types over A also has cardinality κ. This is the smallest the cardinality of S(A) can be, while it can be as large as 2^κ. For the case κ = ℵ₀, it is common to say T is ω-stable rather than ℵ₀-stable. T is stable if it is κ-stable for some infinite cardinal κ. Restrictions on the cardinals κ for which a theory can simultaneously be κ-stable are described by the stability spectrum, which singles out the even tamer subset of superstable theories. A common alternate definition of stable theories is that they do not have the order property. A theory has the order property if there is a formula φ(x̄, ȳ) and two infinite sequences of tuples (āᵢ), (b̄ⱼ) in some model M such that φ defines an infinite half graph on these sequences, i.e. φ(āᵢ, b̄ⱼ) is true in M if and only if i ≤ j. This is equivalent to there being a formula ψ(x̄, ȳ) and an infinite sequence of tuples A = (āᵢ) in some model M such that ψ defines an infinite linear order on A, i.e. ψ(āᵢ, āⱼ) is true in M if and only if i < j. There are numerous further characterizations of stability. As with Morley's totally transcendental theories, the cardinality restrictions of stability are equivalent to bounding the topological complexity of type spaces in terms of Cantor-Bendixson rank. Another characterization is via the properties that non-forking independence has in stable theories, such as being symmetric. This characterizes stability in the sense that any theory with an abstract independence relation satisfying certain of these properties must be stable and the independence relation must be non-forking independence. Any of these definitions, except via an abstract independence relation, can instead be used to define what it means for a single formula to be stable in a given theory T. Then T can be defined to be stable if every formula is stable in T. Localizing results to stable formulas allows these results to be applied to stable formulas in unstable theories, and this localization to single formulas is often useful even in the case of stable theories. Examples and non-examples For an unstable theory, consider the theory DLO of dense linear orders without endpoints. Then the atomic order relation has the order property. Alternatively, unrealized 1-types over a set A correspond to cuts (generalized Dedekind cuts, without the requirements that the two sets be non-empty and that the lower set have no greatest element) in the ordering of A, and there exist dense orders of any infinite cardinality κ with more than κ-many cuts. Another unstable theory is the theory of the Rado graph, where the atomic edge relation has the order property. For a stable theory, consider the theory of algebraically closed fields of characteristic p, allowing p = 0.
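For reference, the two characterizations in the definition above can be written out formally; the notation (κ, S(A), φ, ψ, the index set ω) is standard and chosen here, not fixed by the text itself.

% kappa-stability and the order property, in standard notation (symbols chosen here).
\begin{align*}
  T \text{ is } \kappa\text{-stable} \quad &\iff\quad |S(A)| \le \kappa
      \ \text{ whenever } A \subseteq M \models T \text{ and } |A| \le \kappa, \\
  T \text{ has the order property} \quad &\iff\quad \exists\, \varphi(\bar x, \bar y),\
      (\bar a_i)_{i<\omega},\ (\bar b_j)_{j<\omega} \text{ in some } M \models T
      \ \text{ such that } \ M \models \varphi(\bar a_i, \bar b_j) \iff i \le j.
\end{align*}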
Then if K is a model of ACF_p, counting types over a set A is equivalent to counting types over the field k generated by A in K. There is a (continuous) bijection from the space of n-types over k to the space of prime ideals in the polynomial ring k[x₁, ..., xₙ]. Since such ideals are finitely generated, there are only |k| + ℵ₀ many, so ACF_p is κ-stable for all infinite κ. Some further examples of stable theories are listed below. The theory of any module over a ring (in particular, any theory of vector spaces or abelian groups). The theory of non-abelian free groups. The theory of differentially closed fields of characteristic p. When p = 0, the theory is ω-stable. The theory of any nowhere dense graph class. These include graph classes with bounded expansion, which in turn include planar graphs and any graph class of bounded degree. Geometric stability theory Geometric stability theory is concerned with the fine analysis of local geometries in models and how their properties influence global structure. This line of results was later key in various applications of stability theory, for example to Diophantine geometry. It is usually taken to start in the late 1970s with Boris Zilber's analysis of totally categorical theories, eventually showing that they are not finitely axiomatizable. Every model of a totally categorical theory is controlled by (i.e. is prime and minimal over) a strongly minimal set, which carries a matroid structure determined by (model-theoretic) algebraic closure that gives notions of independence and dimension. In this setting, geometric stability theory then asks the local question of what the possibilities are for the structure of the strongly minimal set, and the local-to-global question of how the strongly minimal set controls the whole model. The second question is answered by Zilber's Ladder Theorem, showing every model of a totally categorical theory is built up by a finite sequence of something like "definable fiber bundles" over the strongly minimal set. For the first question, Zilber's Trichotomy Conjecture was that the geometry of a strongly minimal set must be either like that of a set with no structure, or the set must essentially carry the structure of a vector space, or the structure of an algebraically closed field, with the first two cases called locally modular. This conjecture illustrates two central themes. First, that (local) modularity serves to divide combinatorial or linear behavior from nonlinear, geometric complexity as in algebraic geometry. Second, that complicated combinatorial geometry necessarily comes from algebraic objects; this is akin to the classical problem of finding a coordinate ring for an abstract projective plane defined by incidences, and further examples are the group configuration theorems showing certain combinatorial dependencies among elements must arise from multiplication in a definable group. By developing analogues of parts of algebraic geometry in strongly minimal sets, such as intersection theory, Zilber proved a weak form of the Trichotomy Conjecture for uncountably categorical theories. Although Ehud Hrushovski developed the Hrushovski construction to disprove the full conjecture, it was later proved with additional hypotheses in the setting of "Zariski geometries". Notions from Shelah's classification program, such as regular types, forking, and orthogonality, allowed these ideas to be carried to greater generality, especially in superstable theories.
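The type-counting in the ACF_p example above can be made explicit; this is the standard quantifier-elimination argument, with notation chosen here rather than taken from the text.

% A complete n-type p over the subfield k is determined by the polynomials it
% forces to vanish, giving the (continuous) bijection with prime ideals:
\[
  S_n(k) \longrightarrow \operatorname{Spec} k[x_1,\dots,x_n], \qquad
  p \longmapsto I_p = \{\, f \in k[x_1,\dots,x_n] : \text{``}f(\bar x) = 0\text{''} \in p \,\}.
\]
% Every prime ideal is finitely generated (Hilbert's basis theorem), so
\[
  |S_n(k)| \;\le\; |k| + \aleph_0,
\]
% which gives \kappa-stability of ACF_p for every infinite cardinal \kappa.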
Here, sets defined by regular types play the role of strongly minimal sets, with their local geometry determined by forking dependence rather than algebraic dependence. In place of the single strongly minimal set controlling models of a totally categorical theory, there may be many such local geometries defined by regular types, and orthogonality describes when these types have no interaction. Applications While stable theories are fundamental in model theory, this section lists applications of stable theories to other areas of mathematics. This list does not aim for completeness, but rather a sense of breadth. Since the theory of differentially closed fields of characteristic 0 is ω-stable, there are many applications of stability theory in differential algebra. For example, the existence and uniqueness of the differential closure of such a field (an analogue of the algebraic closure) were proved by Lenore Blum and Shelah respectively, using general results on prime models in ω-stable theories. In Diophantine geometry, Ehud Hrushovski used geometric stability theory to prove the Mordell-Lang conjecture for function fields in all characteristics, which generalizes Faltings's theorem about counting rational points on curves and the Manin-Mumford conjecture about counting torsion points on curves. The key point in the proof was using Zilber's Trichotomy in differential fields to show certain arithmetically defined groups are locally modular. In online machine learning, the Littlestone dimension of a concept class is a complexity measure characterizing learnability, analogous to the VC-dimension in PAC learning. Bounding the Littlestone dimension of a concept class is equivalent to a combinatorial characterization of stability involving binary trees. This equivalence has been used, for example, to prove that online learnability of a concept class is equivalent to differentially private PAC learnability. In functional analysis, Jean-Louis Krivine and Bernard Maurey defined a notion of stability for Banach spaces, equivalent to stating that no quantifier-free formula has the order property (in continuous logic, rather than first-order logic). They then showed that every stable Banach space admits an almost-isometric embedding of ℓ^p for some 1 ≤ p < ∞. This is part of a broader interplay between functional analysis and stability in continuous logic; for example, early results of Alexander Grothendieck in functional analysis can be interpreted as equivalent to fundamental results of stability theory. A countable (possibly finite) structure is ultrahomogeneous if every finite partial automorphism extends to an automorphism of the full structure. Gregory Cherlin and Alistair Lachlan provided a general classification theory for stable ultrahomogeneous structures, including all finite ones. In particular, their results show that for any fixed finite relational language, the finite homogeneous structures fall into finitely many infinite families with members parametrized by numerical invariants and finitely many sporadic examples. Furthermore, every sporadic example becomes part of an infinite family in some richer language, and new sporadic examples always appear in suitably richer languages. In arithmetic combinatorics, Hrushovski proved results on the structure of approximate subgroups, for example implying a strengthened version of Gromov's theorem on groups of polynomial growth.
Although this did not directly use stable theories, the key insight was that fundamental results from stable group theory could be generalized and applied in this setting. This directly led to the Breuillard-Green-Tao theorem classifying approximate subgroups. Generalizations For about twenty years after its introduction, stability was the main subject of pure model theory. A central direction of modern pure model theory, sometimes called "neostability" or "classification theory," consists of generalizing the concepts and techniques developed for stable theories to broader classes of theories, and this has fed into many of the more recent applications of model theory. Two notable examples of such broader classes are simple and NIP theories. These are orthogonal generalizations of stable theories, since a theory is both simple and NIP if and only if it is stable. Roughly, NIP theories keep the good combinatorial behavior from stable theories, while simple theories keep the good geometric behavior of non-forking independence. In particular, simple theories can be characterized by non-forking independence being symmetric, while NIP can be characterized by bounding the number of types realized over either finite or infinite sets. Another direction of generalization is to recapitulate classification theory beyond the setting of complete first-order theories, such as in abstract elementary classes. See also Stability spectrum Spectrum of a theory Morley's categoricity theorem NIP theories Notes References External links A map of the model-theoretic classification of theories, highlighting stability Two book reviews discussing stability and classification theory for non-model theorists: Fundamentals of Stability Theory and Classification Theory An overview of (geometric) stability theory for non-model theorists Model theory
Stable theory
[ "Mathematics" ]
3,075
[ "Mathematical logic", "Model theory" ]
15,831,300
https://en.wikipedia.org/wiki/Tellegen%27s%20theorem
Tellegen's theorem is one of the most powerful theorems in network theory. Most of the energy distribution theorems and extremum principles in network theory can be derived from it. It was published in 1952 by Bernard Tellegen. Fundamentally, Tellegen's theorem gives a simple relation between magnitudes that satisfy Kirchhoff's laws of electrical circuit theory. The Tellegen theorem is applicable to a multitude of network systems. The basic assumptions for the systems are the conservation of flow of extensive quantities (Kirchhoff's current law, KCL) and the uniqueness of the potentials at the network nodes (Kirchhoff's voltage law, KVL). The Tellegen theorem provides a useful tool to analyze complex network systems including electrical circuits, biological and metabolic networks, pipeline transport networks, and chemical process networks. The theorem Consider an arbitrary lumped network that has b branches and n nodes. In an electrical network, the branches are two-terminal components and the nodes are points of interconnection. Suppose that to each branch we assign arbitrarily a branch potential difference W_k and a branch current F_k for k = 1, 2, ..., b, and suppose that they are measured with respect to arbitrarily picked associated reference directions. If the branch potential differences satisfy all the constraints imposed by KVL and if the branch currents satisfy all the constraints imposed by KCL, then Σ_k W_k F_k = 0. Tellegen's theorem is extremely general; it is valid for any lumped network that contains any elements, linear or nonlinear, passive or active, time-varying or time-invariant. The generality is extended when linear operations are applied to the set of potential differences and to the set of branch currents (respectively), since linear operations don't affect KVL and KCL. For instance, the linear operation may be the average or the Laplace transform. More generally, operators that preserve KVL are called Kirchhoff voltage operators, operators that preserve KCL are called Kirchhoff current operators, and operators that preserve both are simply called Kirchhoff operators. These operators need not necessarily be linear for Tellegen's theorem to hold. The set of currents can also be sampled at a different time from the set of potential differences since KVL and KCL are true at all instants of time. Another extension is when the set of potential differences is from one network and the set of currents is from an entirely different network, so long as the two networks have the same topology (same incidence matrix) Tellegen's theorem remains true. This extension of Tellegen's Theorem leads to many theorems relating to two-port networks. Definitions We need to introduce a few necessary network definitions to provide a compact proof. Incidence matrix: The matrix A_a is called the node-to-branch incidence matrix, with matrix elements a_ij equal to 1 if the flow in branch j leaves node i, to -1 if the flow in branch j enters node i, and to 0 if branch j is not incident with node i. A reference or datum node is introduced to represent the environment and connected to all dynamic nodes and terminals. The matrix A, obtained from A_a by eliminating the row containing the elements of the reference node, is called the reduced incidence matrix. The conservation laws (KCL) in vector-matrix form: A F = 0. The uniqueness condition for the potentials (KVL) in vector-matrix form: W = Aᵀ φ, where φ is the vector of absolute potentials at the nodes, measured with respect to the reference node. Proof Using KVL, W = Aᵀ φ, so Wᵀ F = (Aᵀ φ)ᵀ F = φᵀ A F = 0, because A F = 0 by KCL. So Σ_k W_k F_k = Wᵀ F = 0. Applications Network analogs have been constructed for a wide variety of physical systems, and have proven extremely useful in analyzing their dynamic behavior.
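The matrix form of the proof above is easy to check numerically. The sketch below uses a hypothetical 4-node, 6-branch network (the incidence matrix and all numerical values are invented for illustration): branch potential differences are built from arbitrary node potentials (KVL) and branch currents are drawn from the null space of the incidence matrix (KCL), after which their inner product vanishes as the theorem predicts.

import numpy as np

rng = np.random.default_rng(0)

# Reduced incidence matrix A (nodes x branches) of a hypothetical connected
# 4-node, 6-branch network, with the reference-node row already dropped.
A = np.array([
    [ 1, -1,  0,  1,  0,  0],
    [ 0,  1, -1,  0,  1,  0],
    [ 0,  0,  1, -1,  0,  1],
])

# KVL: branch potential differences W derived from arbitrary node potentials phi.
phi = rng.normal(size=A.shape[0])
W = A.T @ phi

# KCL: branch currents F lie in the null space of A (A @ F = 0).
null_basis = np.linalg.svd(A)[2][np.linalg.matrix_rank(A):]
F = null_basis.T @ rng.normal(size=null_basis.shape[0])

assert np.allclose(A @ F, 0)
print(W @ F)  # ~0 up to floating-point error, as Tellegen's theorem predicts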
The classical application area for network theory and Tellegen's theorem is electrical circuit theory. It is mainly in use to design filters in signal processing applications. A more recent application of Tellegen's theorem is in the area of chemical and biological processes. The assumptions for electrical circuits (Kirchhoff laws) are generalized for dynamic systems obeying the laws of irreversible thermodynamics. Topology and structure of reaction networks (reaction mechanisms, metabolic networks) can be analyzed using the Tellegen theorem. Another application of Tellegen's theorem is to determine stability and optimality of complex process systems such as chemical plants or oil production systems. The Tellegen theorem can be formulated for process systems using process nodes, terminals, flow connections and allowing sinks and sources for production or destruction of extensive quantities. A formulation for Tellegen's theorem of process systems: where are the production terms, are the terminal connections, and are the dynamic storage terms for the extensive variables. References In-line references General references Basic Circuit Theory by C.A. Desoer and E.S. Kuh, McGraw-Hill, New York, 1969 "Tellegen's Theorem and Thermodynamic Inequalities", G.F. Oster and C.A. Desoer, J. Theor. Biol 32 (1971), 219–241 "Network Methods in Models of Production", Donald Watson, Networks, 10 (1980), 1–15 External links Circuit example for Tellegen's theorem G.F. Oster and C.A. Desoer, Tellegen's Theorem and Thermodynamic Inequalities Network thermodynamics Circuit theorems Eponymous theorems of physics
Tellegen's theorem
[ "Physics" ]
1,063
[ "Circuit theorems", "Eponymous theorems of physics", "Equations of physics", "Physics theorems" ]
15,832,717
https://en.wikipedia.org/wiki/Computational%20statistics
Computational statistics, or statistical computing, is the study at the intersection of statistics and computer science, and refers to the statistical methods that are enabled by using computational methods. It is the area of computational science (or scientific computing) specific to the mathematical science of statistics. This area is fast developing. The view that the broader concept of computing must be taught as part of general statistical education is gaining momentum. As in traditional statistics, the goal is to transform raw data into knowledge, but the focus lies on computer-intensive statistical methods, such as cases with very large sample size and non-homogeneous data sets. The terms 'computational statistics' and 'statistical computing' are often used interchangeably, although Carlo Lauro (a former president of the International Association for Statistical Computing) proposed making a distinction, defining 'statistical computing' as "the application of computer science to statistics", and 'computational statistics' as "aiming at the design of algorithm for implementing statistical methods on computers, including the ones unthinkable before the computer age (e.g. bootstrap, simulation), as well as to cope with analytically intractable problems" [sic]. The term 'Computational statistics' may also be used to refer to computationally intensive statistical methods including resampling methods, Markov chain Monte Carlo methods, local regression, kernel density estimation, artificial neural networks and generalized additive models. History Though computational statistics is widely used today, it actually has a relatively short history of acceptance in the statistics community. For the most part, the founders of the field of statistics relied on mathematics and asymptotic approximations in the development of computational statistical methodology. In 1908, William Sealy Gosset performed his now well-known Monte Carlo method simulation which led to the discovery of Student's t-distribution. With the help of computational methods, he also produced plots of the empirical distributions overlaid on the corresponding theoretical distributions. The computer has revolutionized simulation and has made the replication of Gosset's experiment little more than an exercise. Later, scientists put forward computational ways of generating pseudo-random deviates, developed methods to convert uniform deviates into other distributional forms using the inverse cumulative distribution function or acceptance-rejection methods, and developed state-space methodology for Markov chain Monte Carlo. One of the first efforts to generate random digits in a fully automated way was undertaken by the RAND Corporation in 1947. The tables produced were published as a book in 1955, and also as a series of punch cards. By the mid-1950s, several articles and patents for devices had been proposed for random number generators. The development of these devices was motivated by the need to use random digits to perform simulations and other fundamental components in statistical analysis. One of the most well known of such devices is ERNIE, which produces random numbers that determine the winners of the Premium Bond, a lottery bond issued in the United Kingdom. In 1958, John Tukey's jackknife was developed. It is a method to reduce the bias of parameter estimates in samples under nonstandard conditions. This requires computers for practical implementations.
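As a concrete illustration of the remark above about replicating Gosset's experiment, the following sketch (sample size and replication count are arbitrary choices) simulates many small normal samples, forms their t-statistics, and compares empirical quantiles with the theoretical Student's t-distribution.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, reps = 4, 100_000  # small samples, many replications

samples = rng.normal(loc=0.0, scale=1.0, size=(reps, n))
t_stats = samples.mean(axis=1) / (samples.std(axis=1, ddof=1) / np.sqrt(n))

# Empirical quantiles should match the t-distribution with n-1 degrees of freedom.
for q in (0.90, 0.95, 0.99):
    print(q, np.quantile(t_stats, q), stats.t.ppf(q, df=n - 1))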
To this point, computers have made many tedious statistical studies feasible. Methods Maximum likelihood estimation Maximum likelihood estimation is used to estimate the parameters of an assumed probability distribution, given some observed data. It is achieved by maximizing a likelihood function so that the observed data is most probable under the assumed statistical model. Monte Carlo method Monte Carlo is a statistical method that relies on repeated random sampling to obtain numerical results. The concept is to use randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution. Markov chain Monte Carlo The Markov chain Monte Carlo method creates samples from a continuous random variable, with probability density proportional to a known function. These samples can be used to evaluate an integral over that variable, such as its expected value or variance. The more steps are included, the more closely the distribution of the sample matches the actual desired distribution. Bootstrapping The bootstrap is a resampling technique used to generate samples from an empirical probability distribution defined by an original sample of the population. It can be used to find a bootstrapped estimator of a population parameter. It can also be used to estimate the standard error of an estimator as well as to generate bootstrapped confidence intervals. The jackknife is a related technique. Applications Computational biology Computational linguistics Computational physics Computational mathematics Computational materials science Machine Learning Computational statistics journals Communications in Statistics - Simulation and Computation Computational Statistics Computational Statistics & Data Analysis Journal of Computational and Graphical Statistics Journal of Statistical Computation and Simulation Journal of Statistical Software The R Journal The Stata Journal Statistics and Computing Wiley Interdisciplinary Reviews: Computational Statistics Associations International Association for Statistical Computing See also Algorithms for statistical classification Data science Statistical methods in artificial intelligence Free statistical software List of statistical algorithms List of statistical packages Machine learning References Further reading Articles Books External links Associations International Association for Statistical Computing Statistical Computing section of the American Statistical Association Journals Computational Statistics & Data Analysis Journal of Computational & Graphical Statistics Statistics and Computing Numerical analysis Computational fields of study Mathematics of computing
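To make the bootstrap description above concrete, here is a minimal sketch (the data, the statistic, and the resample count are arbitrary choices) that estimates the standard error and a percentile interval of a sample median by resampling with replacement.

import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=50)  # stand-in for an "original sample"

def bootstrap(sample, stat=np.median, n_boot=5_000):
    """Resample with replacement and return the bootstrap standard error
    and a 95% percentile interval for the chosen statistic."""
    idx = rng.integers(0, len(sample), size=(n_boot, len(sample)))
    boot_stats = stat(sample[idx], axis=1)
    return boot_stats.std(ddof=1), np.percentile(boot_stats, [2.5, 97.5])

se, ci = bootstrap(data)
print("bootstrap SE of the median:", se)
print("95% percentile interval:", ci)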
Computational statistics
[ "Mathematics", "Technology" ]
1,073
[ "Computational fields of study", "Computational mathematics", "Mathematical relations", "Computing and society", "Numerical analysis", "Computational statistics", "Approximations" ]
15,833,063
https://en.wikipedia.org/wiki/Scribd
Scribd Inc. operates three primary platforms: Scribd, Everand, and SlideShare. Scribd is a digital document library that hosts over 195 million documents. Everand is a digital content subscription service offering a wide selection of ebooks, audiobooks, magazines, podcasts, and sheet music. SlideShare is an online platform featuring over 15 million presentations from subject matter experts. The company was founded in 2007 by Trip Adler, Jared Friedman, and Tikhon Bernstam, and headquartered in San Francisco, California. Tony Grimminck took over as CEO in 2024. History Founding (2007–2013) Scribd began as a site to host and share documents. While at Harvard, Trip Adler was inspired to start Scribd after learning about the lengthy process required to publish academic papers. His father, a doctor at Stanford, was told it would take 18 months to have his medical research published. Adler wanted to create a simple way to publish and share written content online. He co-founded Scribd with Jared Friedman and attended the inaugural class of Y Combinator in the summer of 2006. There, Scribd received its initial $120,000 in seed funding and then launched in a San Francisco apartment in March 2007. Scribd was called "the YouTube for documents", allowing anyone to self-publish on the site using its document reader. The document reader turns PDFs, Word documents, and PowerPoints into Web documents that can be shared on any website that allows embeds. In its first year, Scribd grew rapidly to 23.5 million visitors as of November 2008. It also ranked as one of the top 20 social media sites according to Comscore. In June 2009, Scribd launched the Scribd Store, enabling writers to easily upload and sell digital copies of their work online. That same month, the site partnered with Simon & Schuster to sell e-books on Scribd. The deal made digital editions of 5,000 titles available for purchase on Scribd, including books from bestselling authors like Stephen King, Dan Brown, and Mary Higgins Clark. In October 2009, Scribd launched its branded reader for media companies including The New York Times, Los Angeles Times, Chicago Tribune, The Huffington Post, TechCrunch, and MediaBistro. ProQuest began publishing dissertations and theses on Scribd in December 2009. In August 2010, many notable documents hosted on Scribd became viral phenomena, including the California Proposition 8 ruling, which received over 100,000 views in about 24 minutes, and HP's lawsuit against Mark Hurd's move to Oracle. Subscription service (2013–2023) In October 2013, Scribd officially launched its unlimited subscription service for e-books. This gave users unlimited access to Scribd's library of digital books for a flat monthly fee. The company also announced a partnership with HarperCollins which made the entire backlist of HarperCollins' catalog available on the subscription service. According to Chantal Restivo-Alessi, chief digital officer at HarperCollins, this marked the first time that the publisher has released such a large portion of its catalog. In March 2014, Scribd announced a deal with Lonely Planet, offering the travel publisher's entire library on its subscription service. In May 2014, Scribd further increased its subscription offering with 10,000 titles from Simon & Schuster. These titles included works from authors such as: Ray Bradbury, Doris Kearns Goodwin, Ernest Hemingway, Walter Isaacson, Stephen King, Chuck Klosterman, and David McCullough.
Scribd has been criticized for advertising a free 14 day trial for which payment is required before readers can trial the products. Readers discover this when they attempt to download material. Scribd added audiobooks to its subscription service in November 2014 and comic books in February 2015. In February 2016, it was announced that only titles from a rotating selection of the library would be available for unlimited reading, and subscribers would have credits to read three books and one audiobook per month from the entire library with unused credits rolling over to the next month. The reporting system was discontinued on February 6, 2018, in favor of a system of "constantly rotating catalogs of ebooks and audiobooks" that provided "an unlimited number of books and audiobooks, alongside unlimited access to news, magazines, documents, and sheet music" for a monthly subscription fee of US$8.99. However, under this unlimited service, Scribd would occasionally "limit the titles that you’re able to access within a specific content library in a 30-day period." In October 2018, Scribd announced a joint subscription to Scribd and The New York Times for $12.99 per month. Audiobooks In November 2014, Scribd added audiobooks to its subscription library. Wired noted that this was the first subscription service to offer unlimited access to audiobooks, and "it represents a much larger shift in the way digital content is consumed over the net." In April 2015, the company expanded its audiobook catalog in a deal with Penguin Random House. This added 9,000 audiobooks to its platform including titles from authors like Lena Dunham, John Grisham, Gillian Flynn, and George R.R. Martin. Comics In February 2015, Scribd introduced comics to its subscription service. The company added 10,000 comics and graphic novels from publishers including Marvel, Archie, Boom! Studios, Dynamite, IDW, and Valiant. These included series such as Guardians of the Galaxy, Daredevil, X-O Manowar, and The Avengers. However, in December 2016, comics were eliminated from the service due to low demand. Unbundling (2023 - present) In November 2023, Scribd unbundled from one single product into three distinct ones: Everand, Scribd, and Slideshare. Everand was launched as a new subscription-based service, focused solely on a customer looking for entertainment in the form of books, magazines, podcasts and more. Timeline In February 2010, Scribd unveiled its first mobile plans for e-readers and smartphones. In April 2010 Scribd launched a new feature called "Readcast", which allows automatic sharing of documents on Facebook and Twitter. Also in April 2010, Scribd announced its integration of Facebook social plug-ins at the Facebook f8 Developer Conference. Scribd rolled out a redesign on September 13, 2010, to become, according to TechCrunch, "the social network for reading". In October 2013, Scribd launched its e-book subscription service, allowing readers to pay a flat monthly fee in exchange for unlimited access to all of Scribd's book titles. In August 2020, Scribd announced its acquisition of the LinkedIn-owned SlideShare for an undisclosed amount. In November 2023, Scribd unbundled into three distinct products: Everand, Scribd, and Slideshare. Everand was launched as a new product, focusing solely on books, magazines, podcasts and more. Financials The company was initially funded with US$120,000 from Y Combinator in 2006, and received over US$3.7 million in June 2007 from Redpoint Ventures and The Kinsey Hills Group. 
In December 2008, the company raised US$9 million in a second round of funding led by Charles River Ventures with re-investment from Redpoint Ventures and Kinsey Hills Group. David O. Sacks, former PayPal COO and founder of Yammer and Geni, joined Scribd's board of directors in January 2010. In January 2011, Scribd raised $13 million in a Series C round led by MLC Investments of Australia and SVB Capital. In January 2015, the company raised US$22 million from Khosla Ventures with partner Keith Rabois joining the Scribd board of directors. In 2019, Scribd raised $58 million in a financing round led by Spectrum Equity. Technology In July 2008, Scribd began using iPaper, a rich document format similar to PDF and built for the web, which allows users to embed documents into a web page. iPaper was built with Adobe Flash, allowing it to be viewed the same across different operating systems (Windows, Mac OS, and Linux) without conversion, as long as the reader has Flash installed (although Scribd has announced non-Flash support for the iPhone). All major document types can be formatted into iPaper including Word docs, PowerPoint presentations, PDFs, OpenDocument documents, OpenOffice.org XML documents, and PostScript files. All iPaper documents are hosted on Scribd. Scribd allows published documents to either be private or open to the larger Scribd community. The iPaper document viewer is also embeddable in any website or blog, making it simple to embed documents in their original layout regardless of file format. Scribd iPaper required Flash cookies to be enabled, which is the default setting in Flash. On May 5, 2010, Scribd announced that they would be converting the entire site to HTML5 at the Web 2.0 Conference in San Francisco. TechCrunch reported that Scribd is migrating away from Flash to HTML5. "Scribd co-founder and chief technology officer Jared Friedman tells me: 'We are scrapping three years of Flash development and betting the company on HTML5 because we believe HTML5 is a dramatically better reading experience than Flash. Now any document can become a Web page.'" Scribd has its own API to integrate external/third-party applications, but is no longer offering new API accounts. Since 2010, Scribd has been available on mobile phones and e-readers, in addition to personal computers. As of December 2013, Scribd became available on app stores and various mobile devices. Reception Accusations of defrauding and stealing from users Scribd has been accused of "[having] built its business on stealing from former customers" after numerous complaints of continuing to charge former subscribers on a monthly basis who had cancelled their subscriptions long prior to the charges. Accusations of copyright infringement Scribd has been accused of copyright infringement. In 2007, one year after its inception, Scribd was served with 25 Digital Millennium Copyright Act (DMCA) takedown notices. In March 2009, The Guardian writes, "Harry Potter author [J.K. Rowling] is among writers shocked to discover their books available as free downloads. Neil Blair, Rowling’s lawyer, said the Harry Potter downloads were 'unauthorised and unlawful'...Rowling's novels aren't the only ones to be available from Scribd. A quick search throws up novels from Salman Rushdie, Ian McEwan, Jeffrey Archer, Ken Follett, Philippa Gregory, and J.R.R. Tolkien." In September 2009, American author Elaine Scott alleged that Scribd "shamelessly profits from the stolen copyrighted works of innumerable authors". 
Her attorneys sought class action status in their efforts to win damages from Scribd for allegedly "egregious copyright infringement" and accused it of calculated copyright infringement for profit. The suit was dropped in July 2010. Controversies In March 2009, the passwords of several Comcast customers were leaked on Scribd. The passwords were later removed when the news was published by The New York Times. In July 2010, the script of the movie The Social Network (2010) was uploaded and leaked on Scribd; it was promptly taken down per Sony's DMCA request. Following a decision of the Istanbul 12th Criminal Court of Peace, dated March 8, 2013, access to Scribd is blocked for Internet users in Turkey. In July 2014, Scribd was sued by Disability Rights Advocates (represented by Haben Girma), on behalf of the National Federation of the Blind and a blind Vermont resident, for allegedly failing to provide access to blind readers, in violation of the Americans with Disabilities Act. Scribd moved to dismiss, arguing that the ADA only applied to physical locations. In March 2015, the U.S. District Court of Vermont ruled that the ADA covered online businesses as well. A settlement agreement was reached, with Scribd agreeing to provide content accessible to blind readers by the end of 2017. BookID To counteract the uploading of unauthorized content, Scribd created BookID, an automated copyright protection system that helps authors and publishers identify unauthorized use of their works on Scribd. This technology works by analyzing documents for semantic data, metadata, images, and other elements and creates an encoded "fingerprint" of the copyrighted work. Supported file formats Supported formats include: Microsoft Excel (.xls, .xlsx) Microsoft PowerPoint (.ppt, .pps, .pptx, .ppsx) Microsoft Word (.doc, .docx) OpenDocument (.odt, .odp, .ods, .odf, .odg) OpenOffice.org XML (.sxw, .sxi, .sxc, .sxd) Plain text (.txt) Portable Document Format (.pdf) PostScript (.ps) Rich text format (.rtf) Tagged image file format (.tif, .tiff) See also Slideshare Everand Amazon Lending Library and Kindle Unlimited Document collaboration Oyster (company) Wayback Machine WebCite References External links 2007 establishments in California American companies established in 2007 Android (operating system) software Companies based in San Francisco Ebook suppliers File sharing communities Internet properties established in 2007 Online retailers of the United States Privately held companies based in California Retail companies established in 2007 Subscription services Y Combinator companies
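The BookID description above is high-level; purely as a generic illustration of content fingerprinting of the kind it describes (this is not Scribd's actual BookID algorithm, and the shingle size and signature length are arbitrary), the sketch below hashes overlapping word shingles and keeps the smallest hashes as a MinHash-style signature that can be compared across documents.

import hashlib

def fingerprint(text: str, shingle_size: int = 8, keep: int = 32) -> list[int]:
    """Hash overlapping word shingles and keep the smallest hashes as a signature.
    Illustrative only; not Scribd's actual BookID implementation."""
    words = text.lower().split()
    shingles = {" ".join(words[i:i + shingle_size])
                for i in range(max(1, len(words) - shingle_size + 1))}
    hashes = sorted(int(hashlib.sha1(s.encode()).hexdigest(), 16) for s in shingles)
    return hashes[:keep]

def similarity(fp_a: list[int], fp_b: list[int]) -> float:
    """Jaccard overlap of two signatures, a rough duplicate-content score."""
    a, b = set(fp_a), set(fp_b)
    return len(a & b) / max(1, len(a | b))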
Scribd
[ "Technology" ]
2,933
[ "File sharing communities", "Computing websites" ]
13,160,155
https://en.wikipedia.org/wiki/Energy%20Performance%20of%20Buildings%20Directive%202024
The Energy Performance of Buildings Directive (2024/1275, the "EPBD") is the European Union's main legislative instrument aiming to promote the improvement of the energy performance of buildings within the European Union. It was inspired by the Kyoto Protocol, which commits the EU and all its parties to binding emission reduction targets. History Directive 2002/91/EC The first version of the EPBD, directive 2002/91/EC, was approved on 16 December 2002 and entered into force on 4 January 2003. EU Member States (MS) had to comply with the Directive within three years of the inception date (4 January 2006), by bringing into force necessary laws, regulations and administrative provisions. In the case of a lack of qualified and/or accredited experts, the directive allowed for a further three-year extension of the implementation period, until 4 January 2009. The Directive required that the MS strengthen their building regulations and introduce energy performance certification of buildings. More specifically, it required member states to comply with Article 7 (Energy Performance Certificates), Article 8 (Inspection of boilers) and Article 9 (Inspection of air conditioning systems). Directive 2010/31/EU Directive 2002/91/EC was later on replaced by the so-called "EPBD recast", which was approved on 19 May 2010 and entered into force on 18 June 2010. This version of the EPBD (Directive 2010/31/EU) broadened the focus to Nearly Zero-Energy Buildings, cost-optimal levels of minimum energy performance requirements as well as improved policies. According to the recast: for buildings offered for sale or rent, the energy performance certificates shall be stated in the advertisements Member States shall lay down the necessary measures to establish inspection schemes for heating and air-conditioning systems or take measures with equivalent impact all new buildings shall be nearly zero energy buildings by 31 December 2020; the same applies to all new public buildings after 31 December 2018. Member States shall set minimum energy performance requirements for new buildings, for buildings subject to major renovation, as well as for the replacement or retrofit of building elements Member States shall draw up lists of national financial measures and instruments to improve the energy efficiency of buildings. Directive 2018/844/EU On 30 November 2016, the European Commission published the "Clean Energy For All Europeans", a package of measures boosting the clean energy transition in line with its commitment to cut emissions by at least 40% by 2030, modernise the economy and create conditions for sustainable jobs and growth. The proposal for a revised directive on the EPBD (COM/2016/0765) puts energy efficiency first and supports cost-effective building renovation.
The proposal updated the EPBD through: The incorporation of long-term building renovation strategies (Article 4 of the Energy Efficiency Directive), the support to mobilise finance and a clear vision for the decarbonisation of buildings by 2050 The encouragement of the use of information communication and smart technologies to ensure the efficient operation of buildings Streamlined provisions in the case of delivery failure of the expected results The introduction of building automation and control (BAC) systems as an alternative to physical inspections The encouragement of the roll-out of the required infrastructure for e-mobility and the introduction of a "smartness indicator" Strengthened links between public funding for building renovation and energy performance certificates, and incentives for tackling energy poverty through building renovation. On 11 October 2017, the European Parliament's Committee on Industry, Research and Energy (ITRE) voted positively on a draft report led by Danish MEP Bendt Bendtsen. The Committee "approved rules to channel the focus towards energy-efficiency and cost-effectiveness of building renovations in the EU, updating the EPBD as part of the "Clean Energy for All Europeans" package". Bendt Bendtsen, member of ITRE and rapporteur of the EPBD review dossier said: "It is vital that Member States show a clear commitment and take concrete actions in their long-term planning. This includes facilitating access to financial tools, showing investors that energy efficiency renovations are prioritised, and enabling public authorities to invest in well-performing buildings". The proposal was finally approved by the Council and the European Parliament in May 2018. 2024 revisions In 2021, the European Commission, under the leadership of Estonian Commissioner Kadri Simson, proposed a new revision of the Directive, in the context of the "Fit for 55" legislative package. The proposal includes the following priorities: Obligation for all member states to establish National building renovation plans Establishment of minimum energy performance standards (MEPS), requiring the worst-performing (non-residential) buildings to reach at least class F by 2030 and class E by 2033. Promotion of technical assistance, including one-stop-shops and renovation passports Introduction of new financial mechanisms to incentivize banks and mortgage holders to promote energy efficient renovation (mortgage portfolio standard) Following the start of the Russian invasion of Ukraine, the Commission issued additional proposals, such as the obligation to ensure new buildings are solar ready and to install solar energy installations on buildings. The Commission's proposal is currently being discussed and negotiated in the Council and at the European Parliament. The chief negotiator for the file in the European Parliament is Green MEP Ciaran Cuffe. In 2021, the European Commission proposed to review the directive, with a view of introducing more exigent energy efficiency minimum standards for new and existing buildings, improved availability of energy performance certificates by means of public online databases, and to introduce financial mechanisms to incentivize banks to provide loans for energy efficient renovations. The informal agreement was endorsed by both Parliament and Council. Contents EPBD support initiatives The European Commission has launched practical support initiatives with the objective to help and support EU countries with the implementation of the EPBD.
EPBD Concerted Action The Concerted Action EPBD (CA EPBD) was launched in 2005 under the European Union's Horizon 2020 research and innovation programme to address the Energy Performance of Buildings Directive (EPBD), with the objective to promote dialogue and exchange of knowledge and best practices between all 28 Member States and Norway for reducing energy use in buildings. The first CA EPBD was launched in 2005 and closed in June 2007, followed by a second phase and a third phase from 2011 to 2015. The current CA EPBD (CA EPBD IV), a joint initiative between the EU Member States and the European Commission, runs from October 2015 to March 2018 with the aim to transpose and implement the EPBD recast. EPBD Buildings Platform The EPBD Buildings Platform was launched by the European Commission in the framework of the Intelligent Energy – Europe, 2003–2006 Programme, as the central resource of information on the EPBD. The Platform comprises databases with publications, events, standards and software tools. Interested organisations or individuals could submit events and publications to the databases. A high number of information papers (fact sheets) were also produced, with the aim to inform a wide range of people of the status of work in a specific area. The platform also offered a helpdesk with lists of frequently asked questions and the possibility to ask individual questions. This initiative was completed at the end of 2008, and a new one, 'BUILD UP', was launched in 2009. BUILD UP As a continuation of its support to the Member States in implementing the EPBD, the European Commission launched the BUILD UP initiative in 2009. The initiative has been receiving funding under the framework of the Intelligent Energy Europe (IEE) Programme. The first BUILD UP (BUILD UP I) was launched in 2009 and closed in 2011, when BUILD UP II followed in 2012 and ran until 2014. BUILD UP III ran from January 2015 until December 2017. BUILD UP IV started in early 2018. The BUILD UP web portal aims to increase awareness and foster the market transformation towards Nearly Zero-Energy Buildings, catalysing and releasing Europe's collective intelligence for an effective implementation of energy saving measures in buildings, by connecting building professionals, including competent authorities. The portal includes databases of publications, news, events, software tools & blog posts. Since the start of BUILD UP II in 2009 the portal introduced added-value content items, namely overview articles (allowing users to read/download them on demand) and free-participation webinars, providing an effective learning resource. The platform also incorporates the "BUILD UP Skills" webpage, an initiative launched in 2011 under the IEE programme to assist with the training and further education of craftsmen, on-site workers and systems installers of the building sector. BUILD UP hosts all BUILD UP Skills related information (EU Exchange Meetings, Technical Working Groups (TWGs), National pages and country factsheets, news, events and previous newsletters) under its separate section "Skills". Intelligent Energy Europe (IEE) Programme The EU's Intelligent Energy Europe (IEE) Programme was launched in 2003; the first IEE Programme (IEE I) closed in 2006, and was followed by the second IEE Programme (IEE II) from 2007 to 2013.
Most parts of the IEE programme were run by the Executive Agency for SMEs (EASME), formerly known as the Executive Agency for Competitiveness and Innovation (EACI), on behalf of the European Commission. The Programme "supported projects which sought to overcome non-technical barriers to the uptake, implementation and replication of innovative sustainable energy solutions". From 2007 to 2013, the IEE II Programme allocated €72m (16% of the entire IEE II funding) to 63 building-related projects (including CA EPBD II & III), revealing the strong support for enabling EPBD implementation. The range of topics was broad, covering the fields of deep renovation, Nearly Zero-Energy Buildings, Energy Performance Certificates, renewable energy and the exemplary role of public buildings. Since the Programme's completion, the EU's Horizon 2020 Framework Programme has been funding these types of activities. See also Energy performance certificate, which arose from the implementation of the Directive in the United Kingdom EU law UK enterprise law References External links Concerted Action EPBD BUILD UP portal Building thermal regulations Energy development Energy economics Energy policies and initiatives of the European Union Energy performance of buildings Low-energy building 2002 in law 2002 in the European Union
Energy Performance of Buildings Directive 2024
[ "Environmental_science" ]
2,091
[ "Energy economics", "Environmental social science" ]
13,160,226
https://en.wikipedia.org/wiki/Breather%20surface
In differential geometry, a breather surface is a one-parameter family of mathematical surfaces which correspond to breather solutions of the sine-Gordon equation, a differential equation appearing in theoretical physics. The surfaces have the remarkable property that they have constant curvature −1, where the curvature is well-defined. This makes them examples of generalized pseudospheres. Mathematical background There is a correspondence between embedded surfaces of constant curvature −1, known as pseudospheres, and solutions to the sine-Gordon equation. This correspondence can be built starting with the simplest example of a pseudosphere, the tractroid. In a special set of coordinates, known as asymptotic coordinates, the Gauss–Codazzi equations, which are consistency equations dictating when a surface of prescribed first and second fundamental form can be embedded into three-dimensional space with the flat metric, reduce to the sine-Gordon equation. In the correspondence, the tractroid corresponds to the static 1-soliton solution of the sine-Gordon equation. Due to the Lorentz invariance of sine-Gordon, a one-parameter family of Lorentz boosts can be applied to the static solution to obtain new solutions: on the pseudosphere side, these are known as Lie transformations, which deform the tractroid to the one-parameter family of surfaces known as Dini's surfaces. The method of Bäcklund transformation allows the construction of a large number of distinct solutions to the sine-Gordon equation, the multi-soliton solutions. For example, the 2-soliton corresponds to the Kuen surface. However, while this generates an infinite family of solutions, the breather solutions are not among them. Breather solutions are instead derived from the inverse scattering method for the sine-Gordon equation. They are localized in space but oscillate in time. Each solution to the sine-Gordon equation gives a first and second fundamental form which satisfy the Gauss–Codazzi equations. The fundamental theorem of surface theory then guarantees that there is a parameterized surface which recovers the prescribed first and second fundamental forms. Locally the parameterization is well-behaved, but extended arbitrarily the resulting surface may have self-intersections and cusps. Indeed, a theorem of Hilbert says that no pseudosphere can be embedded regularly (roughly, meaning without cusps) into three-dimensional Euclidean space. Parameterization The breather surface admits an explicit closed-form parameterization in terms of trigonometric and hyperbolic functions of two surface coordinates, depending on a single real parameter. References External links Xah Lee Web - Surface Gallery Breather surface in Virtual Math Museum Surfaces Mathematics articles needing expert attention Differential equations
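The explicit surface parameterization is lengthy, but the sine-Gordon breather that generates it is compact. As a hedged illustration (assuming the common normalisation u_tt − u_xx + sin u = 0, which the article does not state), a standard form of the stationary breather with frequency parameter ω, 0 < ω < 1 (this ω plays the role of the family's single parameter), is:

```latex
% Stationary breather solution of the sine-Gordon equation
% u_{tt} - u_{xx} + \sin u = 0, with frequency parameter 0 < \omega < 1.
% Localized in x (the \cosh in the denominator) and periodic in t.
u(x,t) = 4\arctan\!\left(\frac{\sqrt{1-\omega^{2}}}{\omega}\,
         \frac{\cos(\omega t)}{\cosh\!\bigl(\sqrt{1-\omega^{2}}\,x\bigr)}\right)
```

Substituting such a solution into the first and second fundamental forms and invoking the fundamental theorem of surface theory, as described above, yields the corresponding breather surface.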
Breather surface
[ "Mathematics" ]
523
[ "Mathematical objects", "Differential equations", "Equations" ]
13,160,311
https://en.wikipedia.org/wiki/Airborne%20Real-time%20Cueing%20Hyperspectral%20Enhanced%20Reconnaissance
Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance, also known by the acronym ARCHER, is an aerial imaging system that produces ground images far more detailed than plain sight or ordinary aerial photography can. It is the most sophisticated unclassified hyperspectral imaging system available, according to U.S. Government officials. ARCHER can automatically scan detailed imagery for a given spectral signature of the object being sought (such as a missing aircraft), for abnormalities in the surrounding area, or for changes from previously recorded spectral signatures. It has direct applications for search and rescue, counterdrug, disaster relief and impact assessment, and homeland security, and has been deployed by the Civil Air Patrol (CAP) in the US on the Australian-built Gippsland GA8 Airvan fixed-wing aircraft. CAP, the civilian auxiliary of the United States Air Force, is a volunteer education and public-service non-profit organization that conducts aircraft search and rescue in the US. Overview ARCHER is a daytime non-invasive technology, which works by analyzing an object's reflected light. It cannot detect objects at night, underwater, under dense cover, underground, under snow or inside buildings. The system uses a special camera facing down through a quartz glass portal in the belly of the aircraft, which is typically flown at a standard mission altitude of about 2,500 feet above ground level and 100 knots (50 meters/second) ground speed. The system software was developed by Space Computer Corporation of Los Angeles and the system hardware is supplied by NovaSol Corp. of Honolulu, Hawaii, specifically for CAP. The ARCHER system is based on hyperspectral technology research and testing previously undertaken by the United States Naval Research Laboratory (NRL) and Air Force Research Laboratory (AFRL). CAP developed ARCHER in cooperation with the NRL, AFRL and the United States Coast Guard Research & Development Center in the largest interagency project CAP has undertaken in its 74-year history. Since 2003, almost US$5 million authorized under the 2002 Defense Appropriations Act has been spent on development and deployment. CAP reported completing the initial deployment of 16 aircraft throughout the U.S. and training over 100 operators, but had only used the system on a few search and rescue missions, and had not credited it with being the first to find any wreckage. In searches in Georgia and Maryland during 2007, ARCHER located the aircraft wreckage, but both accidents had no survivors, according to Col. Drew Alexa, director of advanced technology and the ARCHER program manager at CAP. An ARCHER-equipped aircraft from the Utah Wing of the Civil Air Patrol was used in the search for adventurer Steve Fossett in September 2007. ARCHER did not locate Mr. Fossett, but was instrumental in uncovering eight previously uncharted crash sites in the high desert area of Nevada, some decades old. Col. Alexa described the system to the press in 2007: "The human eye sees basically three bands of light. The ARCHER sensor sees 50. It can see things that are anomalous in the vegetation such as metal or something from an airplane wreckage." Major Cynthia Ryan of the Nevada Civil Air Patrol, while also describing the system to the press in 2007, stated, "ARCHER is essentially something used by the geosciences. 
It's pretty sophisticated stuff … beyond what the human eye can generally see," she elaborated further, "It might see boulders, it might see trees, it might see mountains, sagebrush, whatever, but it goes 'not that' or 'yes, that'. The amazing part of this is that it can see as little as 10 per cent of the target, and extrapolate from there." In addition to the primary search and rescue mission, CAP has tested additional uses for ARCHER. For example, an ARCHER-equipped CAP GA8 was used in a pilot project in Missouri in August 2005 to assess the suitability of the system for tracking hazardous material releases into the environment, and one was deployed to track oil spills in the aftermath of Hurricane Rita in Texas during September 2005. Since then, in the case of a flight originating in Missouri, the ARCHER system proved its usefulness in October 2006, when it found the wreckage in Antlers, Okla. The National Transportation Safety Board was extremely pleased with the data ARCHER provided, which was later used to locate aircraft debris spread over miles of rough, wooded terrain. In July 2007, the ARCHER system identified a flood-borne oil spill originating in a Kansas oil refinery that extended downstream and had invaded previously unsuspected reservoir areas. The client agencies (EPA, Coast Guard, and other federal and state agencies) found the data essential to quick remediation. In September 2008, a Civil Air Patrol GA-8 from Texas Wing searched for a missing aircraft from Arkansas. It was found in Oklahoma, identified simultaneously by ground searchers and the overflying ARCHER system. Rather than a direct find, this was a validation of the system's accuracy and efficacy. In the subsequent recovery, it was found that ARCHER had plotted the debris area with great accuracy. Technical description The major ARCHER subsystem components include: an advanced hyperspectral imaging (HSI) system with a resolution of one square meter per pixel; a panchromatic high-resolution imaging (HRI) camera with a ground resolution of roughly 3 by 3 inches per pixel at the standard mission altitude; and a global positioning system (GPS) integrated with an inertial navigation system (INS). Hyperspectral imager The passive hyperspectral imaging spectroscopy remote sensor observes a target in multi-spectral bands. The HSI camera separates the image spectra into 52 "bins" from 500 nanometers (nm) wavelength at the blue end of the visible spectrum to 1100 nm in the infrared, giving the camera a spectral resolution of 11.5 nm. Although ARCHER records data in all 52 bands, the computational algorithms only use the first 40 bands, from 500 nm to 960 nm, because the bands above 960 nm are too noisy to be useful. For comparison, the normal human eye will respond to wavelengths from approximately 400 to 700 nm, and is trichromatic, meaning the eye's cone cells only sense light in three spectral bands. As the ARCHER aircraft flies over a search area, reflected sunlight is collected by the HSI camera lens. The collected light passes through a set of lenses that focus the light to form an image of the ground. The imaging system uses a pushbroom approach to image acquisition. With the pushbroom approach, the focusing slit reduces the image height to the equivalent of one vertical pixel, creating a horizontal line image. The horizontal line image is then projected onto a diffraction grating, which is a very finely etched reflecting surface that disperses light into its spectra. 
The diffraction grating is specially constructed and positioned to create a two-dimensional (2D) spectrum image from the horizontal line image. The spectra are projected vertically, i.e., perpendicular to the line image, by the design and arrangement of the diffraction grating. The 2D spectrum image projects onto a charge-coupled device (CCD) two-dimensional image sensor, which is aligned so that the horizontal pixels are parallel to the image's horizontal. As a result, the vertical pixels are coincident to the spectra produced from the diffraction grating. Each column of pixels receives the spectrum of one horizontal pixel from the original image. The arrangement of vertical pixel sensors in the CCD divides the spectrum into distinct and non-overlapping intervals. The CCD output consists of electrical signals for 52 spectral bands for each of 504 horizontal image pixels. The on-board computer records the CCD output signal at a frame rate of sixty times each second. At an aircraft altitude of 2,500 ft AGL and a speed of 100 knots, a 60 Hz frame rate equates to a ground image resolution of approximately one square meter per pixel. Thus, every frame captured from the CCD contains the spectral data for a ground swath that is approximately one meter long and 500 meters wide. High-resolution imager A high-resolution imaging (HRI) black-and-white, or panchromatic, camera is mounted adjacent to the HSI camera to enable both cameras to capture the same reflected light. The HRI camera uses a pushbroom approach just like the HSI camera, with a similar lens and slit arrangement to limit the incoming light to a thin, wide beam. However, the HRI camera does not have a diffraction grating to disperse the incoming reflected light. Instead, the light is directed to a wider CCD to capture more image data. Because it captures a single line of the ground image per frame, it is called a line scan camera. The HRI CCD is 6,144 pixels wide and one pixel high. It operates at a frame rate of 720 Hz. At ARCHER search speed and altitude (100 knots over the ground at 2,500 ft AGL) each pixel in the black-and-white image represents a 3 inch by 3 inch area of the ground. This high resolution adds the capability to identify some objects. Processing A monitor in the cockpit displays detailed images in real time, and the system also logs the image and Global Positioning System data at a rate of 30 gigabytes (GB) per hour for later analysis. The on-board data processing system performs numerous real-time processing functions including data acquisition and recording, raw data correction, target detection, cueing and chipping, precision image geo-registration, and display and dissemination of image products and target cue information. ARCHER has three methods for locating targets: signature matching, in which reflected light is matched to spectral signatures; anomaly detection, which uses a statistical model of the pixels in the image to determine the probability that a pixel does not match the profile; and change detection, which executes a pixel-by-pixel comparison of the current image against ground conditions that were obtained in a previous mission over the same area. In change detection, scene changes are identified, and new, moved or departed targets are highlighted for evaluation. In spectral signature matching, the system can be programmed with the parameters of a missing aircraft, such as paint colors, to alert the operators of possible wreckage. 
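As a rough cross-check of the figures quoted above (52 bands over 500–1100 nm, a 60 Hz HSI frame rate, a 720 Hz HRI line rate, a roughly 500 m swath across 504 pixels, and 100 knots ground speed), the following sketch recomputes the stated resolutions; all inputs come from the text and the variable names are illustrative only:

```python
# Back-of-the-envelope check of ARCHER's stated resolutions.
# All input numbers come from the article text; names are illustrative.

KNOT_MPS = 0.5144              # metres per second in one knot
ground_speed = 100 * KNOT_MPS  # ~51.4 m/s at 100 knots

# Spectral binning: 52 bins spanning 500-1100 nm
spectral_bin_nm = (1100 - 500) / 52        # ~11.5 nm, as quoted

# HSI along-track ground sample: one frame every 1/60 s
hsi_along_track_m = ground_speed / 60      # ~0.86 m, i.e. roughly 1 m

# HSI cross-track sample: ~500 m swath over 504 pixels
hsi_cross_track_m = 500 / 504              # ~0.99 m

# HRI along-track sample: one line every 1/720 s
hri_along_track_m = ground_speed / 720     # ~0.07 m, roughly 3 inches

print(f"spectral bin:      {spectral_bin_nm:.1f} nm")
print(f"HSI ground pixel:  {hsi_along_track_m:.2f} m x {hsi_cross_track_m:.2f} m")
print(f"HRI line spacing:  {hri_along_track_m * 39.37:.1f} in")
```

The outputs agree with the roughly one-square-metre HSI pixels and three-inch HRI pixels described in the text.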
It can also be used to look for specific materials, such as petroleum products or other chemicals released into the environment, or even ordinary items like commonly available blue polyethylene tarpaulins. In an impact assessment role, information on the location of blue tarps used to temporarily repair buildings damaged in a storm can help direct disaster relief efforts; in a counterdrug role, a blue tarp located in a remote area could be associated with illegal activity. References External links NovaSol Corp Space Computer Corporation Civil Air Patrol Spectroscopy Earth observation remote sensors
Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance
[ "Physics", "Chemistry" ]
2,169
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
13,161,364
https://en.wikipedia.org/wiki/Thiocyanogen
Thiocyanogen, (SCN)2, is a pseudohalogen derived from the pseudohalide thiocyanate, [SCN]−, with behavior intermediate between dibromine and diiodine. This hexatomic compound exhibits C2 point group symmetry and has the connectivity NCS-SCN. In the lungs, lactoperoxidase may oxidize thiocyanate to thiocyanogen or hypothiocyanite. History Berzelius first proposed that thiocyanogen ought to exist as part of his radical theory, but the compound's isolation proved problematic. Liebig pursued a wide variety of synthetic routes for the better part of a century, but, even with Wöhler's assistance, only succeeded in producing a complex mixture with the proportions of thiocyanic acid. In 1861, Linnemann generated appreciable quantities of thiocyanogen from a silver thiocyanate suspension in diethyl ether and excess iodine, but misidentified the minor product as sulfur iodide cyanide (ISCN). Indeed, that reaction suffers from competing equilibria attributed to the weak oxidizing power of iodine; the major product is sulfur dicyanide. The following year, Schneider produced thiocyanogen from silver thiocyanate and disulfur dichloride, but the product disproportionated to sulfur and trisulfur dicyanides. The subject then lay fallow until the 1910s, when Niels Bjerrum began investigating gold thiocyanate complexes. Some eliminated reductively and reversibly, whereas others appeared to irreversibly generate cyanide and sulfate salt solutions. Understanding the process required reanalyzing the decomposition of thiocyanogen using the then-new techniques of physical chemistry. Bjerrum's work revealed that water catalyzed thiocyanogen's decomposition via hypothiocyanous acid. Moreover, the oxidation potential of thiocyanogen appeared to be 0.769 V, slightly greater than that of iodine but less than that of bromine. In 1919, Söderbäck successfully isolated stable thiocyanogen by oxidation of plumbous thiocyanate with bromine. Preparation Modern syntheses typically differ little from Söderbäck's process. Thiocyanogen synthesis begins when aqueous solutions of lead(II) nitrate and sodium thiocyanate, combined, precipitate plumbous thiocyanate. Treating an anhydrous Pb(SCN)2 suspension in glacial acetic acid with bromine then affords a 0.1 M solution of thiocyanogen that is stable for days. Alternatively, a solution of bromine in methylene chloride is added to a suspension of Pb(SCN)2 in methylene chloride at 0 °C. Pb(SCN)2 + Br2 → (SCN)2 + PbBr2 In either case, the oxidation is exothermic. An alternative technique is the thermal decomposition of cupric thiocyanate at 35–80 °C: 2Cu(SCN)2 → 2 CuSCN + (SCN)2 Reactions In general, thiocyanogen is stored in solution, as the pure compound explodes above 20 °C to a red-orange polymer. However, the sulfur atoms disproportionate in water: 3(SCN)2 + 4H2O → H2SO4 + HCN + 5HSCN Thiocyanogen is a weak electrophile, attacking only highly activated (phenolic or anilinic) or polycyclic arenes. It attacks carbonyls at the α position. Heteroatoms are attacked more easily, and the compound thiocyanates sulfur, nitrogen, and various poor metals. Thiocyanogen solutions in nonpolar solvents react almost completely with chlorine to give chlorine thiocyanate, but the corresponding bromine thiocyanate is unstable above −50 °C, forming polymeric thiocyanogen and bromine. The compound adds trans to alkenes to give 1,2-bis(thiocyanato) compounds; the intermediate thiiranium ion can be trapped with many nucleophiles. 
Radical polymerization is the most likely side-reaction, and yields improve when cold and dark. However, the addition reaction is slow, and light may be necessary to accelerate the process. Titanacyclopentadienes give (Z,Z)-1,4-bis(thiocyanato)-1,3-butadienes, which in turn can be converted to 1,2-dithiins. Thiocyanogen only adds once to alkynes; the resulting dithioacyloin dicyanate is not particularly olefinic. Selenocyanogen, (SeCN)2, prepared from reaction of silver selenocyanate with iodine in tetrahydrofuran at 0 °C, reacts in a similar manner to thiocyanogen. Applications Thiocyanogen has been used to estimate the degree of unsaturation in fatty acids, similar to the iodine value. References Inorganic carbon compounds Inorganic sulfur compounds Inorganic nitrogen compounds Thiocyanates Pseudohalogens
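To put rough numbers on the Pb(SCN)2 + Br2 route described above, the following sketch computes stoichiometric reagent quantities for one litre of the 0.1 M thiocyanogen solution mentioned in the text; the molar masses are standard values, the target volume is chosen for illustration, and the practical use of an excess lead thiocyanate suspension is ignored:

```python
# Stoichiometry sketch for (SCN)2 via Pb(SCN)2 + Br2 -> (SCN)2 + PbBr2.
# Molar masses (g/mol) are approximate standard atomic-weight sums.
M = {"Pb": 207.2, "S": 32.06, "C": 12.01, "N": 14.01, "Br": 79.90}

M_PbSCN2 = M["Pb"] + 2 * (M["S"] + M["C"] + M["N"])   # ~323.4 g/mol
M_Br2 = 2 * M["Br"]                                    # ~159.8 g/mol

# Target: 1.0 L of a 0.1 M thiocyanogen solution; the equation is 1:1:1,
# so 0.1 mol of each reagent is the stoichiometric requirement.
mol_needed = 0.1 * 1.0
print(f"Pb(SCN)2 required: {mol_needed * M_PbSCN2:.1f} g")   # ~32.3 g
print(f"Br2 required:      {mol_needed * M_Br2:.1f} g")      # ~16.0 g
```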
Thiocyanogen
[ "Chemistry" ]
1,143
[ "Pseudohalogens", "Inorganic compounds", "Functional groups", "Inorganic sulfur compounds", "Inorganic nitrogen compounds", "Inorganic carbon compounds", "Thiocyanates" ]
13,162,950
https://en.wikipedia.org/wiki/Beta%20Disk%20Interface
Beta Disk Interface is a disk interface for ZX Spectrum computers, developed by Technology Research Ltd. (United Kingdom) in 1984 and released in 1985, with a price of £109.25 (or £249.75 with one disk drive). Beta 128 Disk Interface is a 1987 version, supporting ZX Spectrum 128 machines (due to different access point addresses). Beta Disk Interfaces were distributed with the TR-DOS operating system in ROM, also attributed to Technology Research Ltd. The interface was based on the WD1793 chip. The latest firmware version is 5.03 (1986). The Beta Disk Interface handles single- and double-sided, 40- or 80-track double-density floppy disks, and up to four drives. Clones This interface was popular for its simplicity, and the Beta 128 Disk Interface was cloned all around the USSR. The first known USSR clones were ones produced by НПВО "Вариант" (NPVO "Variant", Leningrad) in 1989. Beta 128 schematics are included in various Soviet/Russian ZX Spectrum clones, but some variants only support two drives. Phase correction of the drive data signal is also implemented differently. Between 2018 and 2021, Beta Disk clones were produced in the Czech Republic, under names such as Beta Disk 128C, 128X and 128 mini. Operating systems support TR-DOS iS-DOS CP/M (various hack versions) DNA OS See also DISCiPLE References External links Virtual TR-DOS ZX Spectrum Computer storage devices
Beta Disk Interface
[ "Technology" ]
313
[ "Computer storage devices", "Recording devices" ]
13,163,358
https://en.wikipedia.org/wiki/Whole%20number%20rule
In chemistry, the whole number rule states that the masses of the isotopes are whole number multiples of the mass of the hydrogen atom. The rule is a modified version of Prout's hypothesis proposed in 1815, to the effect that atomic weights are multiples of the weight of the hydrogen atom. It is also known as the Aston whole number rule after Francis W. Aston who was awarded the Nobel Prize in Chemistry in 1922 "for his discovery, by means of his mass spectrograph, of isotopes, in a large number of non-radioactive elements, and for his enunciation of the whole-number rule." Law of definite proportions The law of definite proportions was formulated by Joseph Proust around 1800 and states that all samples of a chemical compound will have the same elemental composition by mass. The atomic theory of John Dalton expanded this concept and explained matter as consisting of discrete atoms with one kind of atom for each element combined in fixed proportions to form compounds. Prout's hypothesis In 1815, William Prout reported on his observation that the atomic weights of the elements were whole multiples of the atomic weight of hydrogen. He then hypothesized that the hydrogen atom was the fundamental object and that the other elements were a combination of different numbers of hydrogen atoms. Aston's discovery of isotopes In 1920, Francis W. Aston demonstrated through the use of a mass spectrometer that apparent deviations from Prout's hypothesis are predominantly due to the existence of isotopes. For example, Aston discovered that neon has two isotopes with masses very close to 20 and 22 as per the whole number rule, and proposed that the non-integer value 20.2 for the atomic weight of neon is due to the fact that natural neon is a mixture of about 90% neon-20 and 10% neon-22. A secondary cause of deviations is the binding energy or mass defect of the individual isotopes. Discovery of the neutron During the 1920s, it was thought that the atomic nucleus was made of protons and electrons, which would account for the disparity between the atomic number of an atom and its atomic mass. In 1932, James Chadwick discovered an uncharged particle of approximately the same mass as the proton, which he called the neutron. The fact that the atomic nucleus is composed of protons and neutrons was rapidly accepted and Chadwick was awarded the Nobel Prize in Physics in 1935 for his discovery. The modern form of the whole number rule is that the atomic mass of a given elemental isotope is approximately the mass number (number of protons plus neutrons) times an atomic mass unit (the approximate mass of a proton, neutron, or hydrogen-1 atom). This rule predicts the atomic mass of nuclides and isotopes with an error of at most 1%, with most of the error explained by the mass deficit caused by nuclear binding energy. References Further reading External links 1922 Nobel Prize Presentation Speech Mass spectrometry Periodic table
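A small numerical illustration of Aston's neon argument follows; the isotope masses used below are the whole-number approximations and rounded abundances from the text, so the output is only approximate:

```python
# Aston's resolution of neon's non-integer atomic weight.
# Whole-number isotope masses and approximate abundances taken from the text.
isotopes = {20: 0.90, 22: 0.10}   # mass number -> approximate natural abundance

atomic_weight = sum(mass * abundance for mass, abundance in isotopes.items())
print(f"weighted neon atomic weight: {atomic_weight:.1f}")   # ~20.2

# Modern form of the rule: the mass of an isotope is roughly mass number x 1
# atomic mass unit, accurate to within about 1%; the residual is the mass
# defect associated with nuclear binding energy.
```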
Whole number rule
[ "Physics", "Chemistry" ]
602
[ "Periodic table", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
13,163,733
https://en.wikipedia.org/wiki/Stenotherm
A stenotherm (from Greek στενός stenos "narrow" and θέρμη therme "heat") is a species or living organism capable of surviving only within a narrow temperature range. This specialization is often found in organisms that inhabit relatively thermally stable environments, such as the deep sea or polar regions. The opposite of a stenotherm is a eurytherm, an organism that can function across a wide range of body temperatures. Eurythermic organisms are typically found in environments with significant temperature variations, such as temperate or tropical regions. The size, shape, and composition of an organism's body can influence its temperature regulation, with larger organisms generally maintaining a more stable internal temperature than smaller ones. Examples Chionoecetes opilio is a stenothermic organism, and temperature significantly affects its biology throughout its life history, from embryo to adult. Small changes in temperature (< 2 °C) can increase the duration of egg incubation for C. opilio by a full year. See also Ecotope References Ecology
Stenotherm
[ "Biology" ]
224
[ "Ecology" ]
13,164,797
https://en.wikipedia.org/wiki/Live%20bottom%20trailer
A live bottom trailer is a semi-trailer used for hauling loose material such as asphalt, grain, potatoes, sand and gravel. A live bottom trailer is the alternative to a dump truck or an end dump trailer. The typical live bottom trailer has a conveyor belt on the bottom of the trailer tub that pushes the material out of the back of the trailer at a controlled pace. Unlike the conventional dump truck, the tub does not have to be raised to deposit the materials. Operation The live bottom trailer is powered by a hydraulic system. When the operator engages the truck hydraulic system, it activates the conveyor belt, moving the load horizontally out of the back trailer. Uses Live bottom trailers can haul a variety of products including gravel, potatoes, top soil, grain, carrots, sand, lime, peat moss, asphalt, compost, rip-rap, heavy rocks, biowaste, etc. Those who work in industries such as the agriculture and construction benefit from the speed of unloading, versatility of the trailer and chassis mount. Safety The live bottom trailer eliminates trailer roll over because the tub does not have to be raised in the air to unload the materials. The trailer has a lower centre of gravity which makes it easy for the trailer to unload in an uneven area, compared to dump trailers that have to be on level ground to unload. Overhead electrical wires are a danger for the conventional dump trailer during unloading, but with a live bottom, wires are not a problem. The trailer can work anywhere that it can drive into because the tub does not have to be raised for unloading. In addition, the truck cannot be accidentally driven with the trailer raised, which has been a cause of a number of accidents, often involving collision with bridges, overpasses, or overhead/suspended traffic signs/lights. Advantages The tub empties clean, making it easier for different materials to be transported without having to get inside the tub to clean it out. The conveyor belt allows the material to be dumped at a controlled pace so that the material can be partially unloaded where it is needed. The rounded tub results in a lower centre of gravity which means a smoother ride and better handling than other trailers. Working under bridges and in confined areas is easier with a live bottom as opposed to a dump trailer because it can fit anywhere it can drive. Wet or dry materials can be hauled in a live bottom trailer. In a dump truck, wet materials stick in the top of the tub during unloading and causes trailer roll over. Insurance costs are lower for a live bottom trailer because it does not have to be raised in the air and there are few cases of trailer roll over. Disadvantages Some live bottom trailers are not well suited for heavy rock and demolition. However rip-rap, heavy rock, and asphalt can be hauled if built with the appropriate strength steels. See also Moving floor, a hydraulically driven conveyance system also used in semi-trailers External links Engineering vehicles
Live bottom trailer
[ "Engineering" ]
606
[ "Engineering vehicles" ]
13,165,796
https://en.wikipedia.org/wiki/Ocean%20heat%20content
Ocean heat content (OHC) or ocean heat uptake (OHU) is the energy absorbed and stored by oceans. To calculate the ocean heat content, it is necessary to measure ocean temperature at many different locations and depths. Integrating the areal density of a change in enthalpic energy over an ocean basin or entire ocean gives the total ocean heat uptake. Between 1971 and 2018, the rise in ocean heat content accounted for over 90% of Earth's excess energy from global heating. The main driver of this increase was human-caused greenhouse gas emissions. By 2020, about one third of the added energy had propagated to depths below 700 meters. In 2023, the world's oceans were again the hottest in the historical record and exceeded the previous 2022 record maximum. The five highest ocean heat observations to a depth of 2000 meters occurred in the period 2019–2023. The North Pacific, North Atlantic, the Mediterranean, and the Southern Ocean all recorded their highest heat observations in more than sixty years of global measurements. Ocean heat content and sea level rise are important indicators of climate change. Ocean water can absorb a lot of solar energy because water has far greater heat capacity than atmospheric gases. As a result, the top few meters of the ocean contain more energy than the entire Earth's atmosphere. Since before 1960, research vessels and stations have sampled sea surface temperatures and temperatures at greater depth all over the world. Since 2000, an expanding network of nearly 4000 Argo robotic floats has measured temperature anomalies, or the change in ocean heat content. With improving observations in recent decades, the heat content of the upper ocean has been found to have increased at an accelerating rate. The net rate of change in the top 2000 meters from 2003 to 2018 corresponded to an annual mean energy gain of 9.3 zettajoules. It is difficult to measure temperatures accurately over long periods while at the same time covering enough areas and depths. This explains the uncertainty in the figures. Changes in ocean temperature greatly affect ecosystems in oceans and on land. For example, there are multiple impacts on coastal ecosystems and communities relying on their ecosystem services. Direct effects include variations in sea level and sea ice, changes to the intensity of the water cycle, and the migration of marine life. Calculations Definition Ocean heat content is a term used in physical oceanography to describe a type of thermodynamic potential energy that is stored in the ocean. It is defined in coordination with the equation of state of seawater. TEOS-10 is an international standard approved in 2010 by the Intergovernmental Oceanographic Commission. Calculation of ocean heat content follows that of enthalpy referenced to the ocean surface, also called potential enthalpy. OHC changes are thus made more readily comparable to seawater heat exchanges with ice, freshwater, and humid air. OHC is always reported as a change or as an "anomaly" relative to a baseline. Positive values then also quantify ocean heat uptake (OHU) and are useful to diagnose where most of the planetary energy gains from global heating are going. To calculate the ocean heat content, measurements of ocean temperature from sample parcels of seawater gathered at many different locations and depths are required. Integrating the areal density of ocean heat over an ocean basin, or entire ocean, gives the total ocean heat content. 
Thus, total ocean heat content is a volume integral of the product of temperature, density, and heat capacity over the three-dimensional region of the ocean for which data is available. The bulk of measurements have been performed at depths shallower than about 2000 m (1.25 miles). The areal density of ocean heat content between two depths is computed as a definite integral, H = c_p ∫ ρ(z) Θ(z) dz, taken from the lower depth h2 up to the upper depth h1, where c_p is the specific heat capacity of sea water, ρ(z) is the in-situ seawater density profile, and Θ(z) is the conservative temperature profile. The reference depth h0 is usually chosen as the ocean surface. In SI units, H has units of Joules per square metre (J·m−2). In practice, the integral can be approximated by summation using a smooth and otherwise well-behaved sequence of in-situ data, including temperature (t), pressure (p), salinity (s) and their corresponding density (ρ). Conservative temperature values are translated relative to the reference pressure (p0) at h0. A substitute known as potential temperature has been used in earlier calculations. Measurements of temperature versus ocean depth generally show an upper mixed layer (0–200 m), a thermocline (200–1500 m), and a deep ocean layer (>1500 m). These boundary depths are only rough approximations. Sunlight penetrates to a maximum depth of about 200 m; the top 80 m of this is the habitable zone for photosynthetic marine life in an ocean covering over 70% of Earth's surface. Wave action and other surface turbulence help to equalize temperatures throughout the upper layer. Unlike surface temperatures, which decrease with latitude, deep-ocean temperatures are relatively cold and uniform in most regions of the world. About 50% of all ocean volume is at depths below 3000 m (1.85 miles), with the Pacific Ocean being the largest and deepest of five oceanic divisions. The thermocline is the transition between upper and deep layers in terms of temperature, nutrient flows, abundance of life, and other properties. It is semi-permanent in the tropics, variable in temperate regions (often deepest during the summer), and shallow to nonexistent in polar regions. Measurements Ocean heat content measurements come with difficulties, especially before the deployment of the Argo profiling floats. Due to poor spatial coverage and poor quality of data, it has not always been easy to distinguish between long term global warming trends and climate variability. Examples of these complicating factors are the variations caused by El Niño–Southern Oscillation or changes in ocean heat content caused by major volcanic eruptions. Argo is an international program of robotic profiling floats deployed globally since the start of the 21st century. The program's initial 3000 units had expanded to nearly 4000 units by year 2020. At the start of each 10-day measurement cycle, a float descends to a depth of 1000 meters and drifts with the current there for nine days. It then descends to 2000 meters and measures temperature, salinity (conductivity), and depth (pressure) over a final day of ascent to the surface. At the surface the float transmits the depth profile and horizontal position data through satellite relays before repeating the cycle. Starting in 1992, the TOPEX/Poseidon and subsequent Jason satellite series altimeters have observed vertically integrated OHC, which is a major component of sea level rise. Since 2002, GRACE and GRACE-FO have remotely monitored ocean changes using gravimetry. 
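As a sketch of the summation mentioned above, the following approximates the areal density of ocean heat content from a discrete temperature-depth profile. The profile values are illustrative placeholders rather than real observations, the density is held constant, and the heat capacity is a nominal value of roughly 3992 J/(kg·K) (of the order used in TEOS-10), none of which comes from the article itself:

```python
# Discrete approximation of H = c_p * integral of rho(z) * Theta(z) dz
# between an upper depth h1 and a lower depth h2 (illustrative values only).
c_p = 3992.0      # J/(kg K), nominal specific heat capacity of seawater
rho = 1025.0      # kg/m^3, a typical seawater density, held constant here

# Conservative-temperature profile as (depth_m, theta_degC) pairs, surface to 2000 m.
# In practice OHC is reported as an anomaly, so Theta would be a temperature
# anomaly relative to a climatological baseline rather than an absolute value.
profile = [(0, 18.0), (100, 15.0), (200, 12.0), (500, 8.0),
           (1000, 4.5), (1500, 3.0), (2000, 2.5)]

H = 0.0
for (z_top, t_top), (z_bot, t_bot) in zip(profile, profile[1:]):
    theta_mean = 0.5 * (t_top + t_bot)          # trapezoidal mean over the layer
    H += c_p * rho * theta_mean * (z_bot - z_top)

print(f"areal heat content (0-2000 m): {H:.3e} J/m^2")
```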
The partnership between Argo and satellite measurements has thereby yielded ongoing improvements to estimates of OHC and other global ocean properties. Causes for heat uptake Ocean heat uptake accounts for over 90% of total planetary heat uptake, mainly as a consequence of human-caused changes to the composition of Earth's atmosphere. This high percentage is because waters at and below the ocean surface - especially the turbulent upper mixed layer - exhibit a thermal inertia much larger than the planet's exposed continental crust, ice-covered polar regions, or atmospheric components themselves. A body with large thermal inertia stores a big amount of energy because of its heat capacity, and effectively transmits energy according to its heat transfer coefficient. Most extra energy that enters the planet via the atmosphere is thereby taken up and retained by the ocean. Planetary heat uptake or heat content accounts for the entire energy added to or removed from the climate system. It can be computed as an accumulation over time of the observed differences (or imbalances) between total incoming and outgoing radiation. Changes to the imbalance have been estimated from Earth orbit by CERES and other remote instruments, and compared against in-situ surveys of heat inventory changes in oceans, land, ice and the atmosphere. Achieving complete and accurate results from either accounting method is challenging, but in different ways that are viewed by researchers as being mostly independent of each other. Increases in planetary heat content for the well-observed 2005–2019 period are thought to exceed measurement uncertainties. From the ocean perspective, the more abundant equatorial solar irradiance is directly absorbed by Earth's tropical surface waters and drives the overall poleward propagation of heat. The surface also exchanges energy that has been absorbed by the lower troposphere through wind and wave action. Over time, a sustained imbalance in Earth's energy budget enables a net flow of heat either into or out of greater ocean depth via thermal conduction, downwelling, and upwelling. Releases of OHC to the atmosphere occur primarily via evaporation and enable the planetary water cycle. Concentrated releases in association with high sea surface temperatures help drive tropical cyclones, atmospheric rivers, atmospheric heat waves and other extreme weather events that can penetrate far inland. Altogether these processes enable the ocean to be Earth's largest thermal reservoir which functions to regulate the planet's climate; acting as both a sink and a source of energy. From the perspective of land and ice covered regions, their portion of heat uptake is reduced and delayed by the dominant thermal inertia of the ocean. Although the average rise in land surface temperature has exceeded the ocean surface due to the lower inertia (smaller heat-transfer coefficient) of solid land and ice, temperatures would rise more rapidly and by a greater amount without the full ocean. Measurements of how rapidly the heat mixes into the deep ocean have also been underway to better close the ocean and planetary energy budgets. Recent observations and changes Numerous independent studies in recent years have found a multi-decadal rise in OHC of upper ocean regions that has begun to penetrate to deeper regions. The upper ocean (0–700 m) has warmed since 1971, while it is very likely that warming has occurred at intermediate depths (700–2000 m) and likely that deep ocean (below 2000 m) temperatures have increased. 
The heat uptake results from a persistent warming imbalance in Earth's energy budget that is most fundamentally caused by the anthropogenic increase in atmospheric greenhouse gases. There is very high confidence that increased ocean heat content in response to anthropogenic carbon dioxide emissions is essentially irreversible on human time scales. Studies based on Argo measurements indicate that ocean surface winds, especially the subtropical trade winds in the Pacific Ocean, change ocean heat vertical distribution. This results in changes among ocean currents, and an increase of the subtropical overturning, which is also related to the El Niño and La Niña phenomenon. Depending on stochastic natural variability fluctuations, during La Niña years around 30% more heat from the upper ocean layer is transported into the deeper ocean. Furthermore, studies have shown that approximately one-third of the observed warming in the ocean is taking place in the 700–2000 meter ocean layer. Model studies indicate that ocean currents transport more heat into deeper layers during La Niña years, following changes in wind circulation. Years with increased ocean heat uptake have been associated with negative phases of the interdecadal Pacific oscillation (IPO). This is of particular interest to climate scientists who use the data to estimate the ocean heat uptake. The upper ocean heat content in most North Atlantic regions is dominated by heat transport convergence (a location where ocean currents meet), without large changes to temperature and salinity relation. Additionally, a study from 2022 on anthropogenic warming in the ocean indicates that 62% of the warming from the years between 1850 and 2018 in the North Atlantic along 25°N is kept in the water below 700 m, where a major percentage of the ocean's surplus heat is stored. A study in 2015 concluded that ocean heat content increases by the Pacific Ocean were compensated by an abrupt distribution of OHC into the Indian Ocean. Although the upper 2000 m of the oceans have experienced warming on average since the 1970s, the rate of ocean warming varies regionally with the subpolar North Atlantic warming more slowly and the Southern Ocean taking up a disproportionate large amount of heat due to anthropogenic greenhouse gas emissions. Deep-ocean warming below 2000 m has been largest in the Southern Ocean compared to other ocean basins. Impacts Warming oceans are one reason for coral bleaching and contribute to the migration of marine species. Marine heat waves are regions of life-threatening and persistently elevated water temperatures. Redistribution of the planet's internal energy by atmospheric circulation and ocean currents produces internal climate variability, often in the form of irregular oscillations, and helps to sustain the global thermohaline circulation. The increase in OHC accounts for 30–40% of global sea-level rise from 1900 to 2020 because of thermal expansion. It is also an accelerator of sea ice, iceberg, and tidewater glacier melting. The ice loss reduces polar albedo, amplifying both the regional and global energy imbalances. The resulting ice retreat has been rapid and widespread for Arctic sea ice, and within northern fjords such as those of Greenland and Canada. Impacts to Antarctic sea ice and the vast Antarctic ice shelves which terminate into the Southern Ocean have varied by region and are also increasing due to warming waters. 
Breakup of the Thwaites Ice Shelf and its West Antarctica neighbors contributed about 10% of sea-level rise in 2020. The ocean also functions as a sink and source of carbon, with a role comparable to that of land regions in Earth's carbon cycle. In accordance with the temperature dependence of Henry's law, warming surface waters are less able to absorb atmospheric gases including oxygen and the growing emissions of carbon dioxide and other greenhouse gases from human activity. Nevertheless the rate in which the ocean absorbs anthropogenic carbon dioxide has approximately tripled from the early 1960s to the late 2010s; a scaling proportional to the increase in atmospheric carbon dioxide. Warming of the deep ocean has the further potential to melt and release some of the vast store of frozen methane hydrate deposits that have naturally accumulated there. See also References External links NOAA Global Ocean Heat and Salt Content Meteorological concepts Climate change Climatology Earth Earth sciences Environmental science Oceanography Articles containing video clips
Ocean heat content
[ "Physics", "Environmental_science" ]
2,925
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics", "nan" ]
13,165,926
https://en.wikipedia.org/wiki/ControlNet
ControlNet is an open industrial network protocol for industrial automation applications, also known as a fieldbus. ControlNet was earlier supported by ControlNet International, but in 2008 support and management of ControlNet was transferred to ODVA, which now manages all protocols in the Common Industrial Protocol family. Features which set ControlNet apart from other fieldbuses include the built-in support for fully redundant cables and the fact that communication on ControlNet can be strictly scheduled and highly deterministic. Due to the unique physical layer, common network sniffers such as Wireshark cannot be used to sniff ControlNet packets. Rockwell Automation provides ControlNet Traffic Analyzer software to sniff and analyze ControlNet packets. Version 1, 1.25 and 1.5 Versions 1 and 1.25 were released in quick succession when ControlNet first launched in 1997. Version 1.5 was released in 1998 and hardware produced for each version variant was typically not compatible. Most installations of ControlNet are version 1.5. Architecture Physical layer ControlNet cables consist of RG-6 coaxial cable with BNC connectors, though optical fiber is sometimes used for long distances. The network topology is a bus structure with short taps. ControlNet also supports a star topology if used with the appropriate hardware. ControlNet can operate with a single RG-6 coaxial cable bus, or a dual RG-6 coaxial cable bus for cable redundancy. In all cases, the RG-6 should be of quad-shield variety. Maximum cable length without repeaters is 1000m and maximum number of nodes on the bus is 99. However, there is a tradeoff between number of devices on the bus and total cable length. Repeaters can be used to further extend the cable length. The network can support up to 5 repeaters (10 when used for redundant networks). The repeaters do not utilize network node numbers and are available in copper or fiber optic choices. The physical layer signaling uses Manchester code at 5 Mbit/s. Link layer ControlNet is a scheduled communication network designed for cyclic data exchange. The protocol operates in cycles, known as NUIs, where NUI stands for Network Update Interval. Each NUI has three phases, the first phase is dedicated to scheduled traffic, where all nodes with scheduled data are guaranteed a transmission opportunity. The second phase is dedicated to unscheduled traffic. There is no guarantee that every node will get an opportunity to transmit in every unscheduled phase. The third phase is network maintenance or "guardband". It includes synchronization and a means of determining starting node on the next unscheduled data transfer. Both the scheduled and unscheduled phase use an implicit token ring media access method. The amount of time each NUI consists of is known as the NUT, where NUT stands for Network Update Time. It is configurable from 2 to 100 ms. The default NUT on an unscheduled network is 5 ms. The maximum size of a scheduled or unscheduled ControlNet data frame is 510 Bytes. Application layer The ControlNet application layer protocol is based on the Common Industrial Protocol (CIP) layer which is also used in DeviceNet and EtherNet/IP. References External links ODVA website ControlNet Networks and Communications from Allen-Bradley Serial buses Network protocols Industrial automation
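A rough feel for the timing numbers quoted above (5 Mbit/s signaling, a 2-100 ms Network Update Time, and a 510-byte maximum frame) can be had from a small calculation; per-frame protocol overhead and the guardband phase are ignored here, which is a simplification:

```python
# How many maximum-size ControlNet frames fit, ignoring framing overhead and
# the guardband/maintenance phase, into one Network Update Time at 5 Mbit/s?
BIT_RATE = 5_000_000          # bits per second (Manchester-coded physical layer)
MAX_FRAME_BYTES = 510         # maximum scheduled/unscheduled frame size

frame_time_s = MAX_FRAME_BYTES * 8 / BIT_RATE      # ~0.816 ms per frame

for nut_ms in (2, 5, 100):                          # NUT range given in the text
    frames = int((nut_ms / 1000) / frame_time_s)
    print(f"NUT = {nut_ms:3d} ms -> about {frames} max-size frames per cycle")
```

This illustrates why the NUT setting directly bounds how much scheduled traffic each cycle can guarantee.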
ControlNet
[ "Technology", "Engineering" ]
683
[ "Computer network stubs", "Automation", "Industrial engineering", "Computing stubs", "Industrial automation" ]
13,167,602
https://en.wikipedia.org/wiki/Submersion%20%28coastal%20management%29
Submersion is the sustainable cyclic portion of coastal erosion in which coastal sediments move from the visible portion of a beach to the submerged nearshore region, and later return to the original visible portion of the beach. The recovery portion of this sustainable cycle of sediment behaviour is named accretion. Submersion vs erosion The sediment that is submerged during rough weather forms landforms including storm bars. In calmer weather, waves return sediment to the visible part of the beach. Due to longshore drift, some sediment can end up further along the beach from where it started. Coastal areas often develop sustainable positions in which the sediment moving off beaches is part of this sustainable submersion cycle rather than permanent loss. On many inhabited coastlines, anthropogenic interference in coastal processes has meant that erosion is often more permanent than submersion. Community perception The term erosion is often associated with undesirable impacts on the environment, whereas submersion is a sustainable part of healthy foreshores. Communities making decisions about coastal management need to develop an understanding of the components of beach recession and be able to separate the temporary, sustainable submersion component from the more serious, irreversible erosion caused by human activity or climate change. References Coastal geography Geological processes Physical oceanography
Submersion (coastal management)
[ "Physics" ]
248
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
13,167,630
https://en.wikipedia.org/wiki/Accretion%20%28coastal%20management%29
Accretion is the process of coastal sediment returning to the visible portion of a beach or foreshore after a submersion event. A sustainable beach or foreshore often goes through a cycle of submersion during rough weather and later accretion during calmer periods. If a coastline is not in a healthy sustainable state, erosion can be more serious, and accretion does not fully restore the original volume of the visible beach or foreshore, which leads to permanent beach loss. References Coastal geography Deposition (geology) Physical oceanography
Accretion (coastal management)
[ "Physics" ]
110
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
13,167,800
https://en.wikipedia.org/wiki/Central%20Plains%20Water
Central Plains Water, or, more fully, the Central Plains Water Enhancement Scheme, is a large-scale proposal for water diversion, damming, reticulation and irrigation for the Central Plains of Canterbury, New Zealand. Construction started on the scheme in 2014. The original proposal involved diversion of water, the construction of a storage dam, tunnels and a series of canals and water races to supply water for irrigation to an area of 60,000 hectares on the Canterbury Plains. Water will be taken from the Rakaia and Waimakariri Rivers. In June 2010, resource consents for the scheme were approved in a revised form without the storage dam. From 2010 to 2012, the resource consents were under appeal to the Environment Court. In July 2012, the resource consents for the scheme were finalised by the Environment Court. The Central Plains Water Enhancement Scheme originated as a feasibility study jointly initiated and funded by Christchurch City Council and Selwyn District Council. The Central Plains Water Enhancement Scheme is contentious. It is opposed by community, recreation and environment groups, some city and regional councillors, and some corporate dairying interests. The scheme is supported by Christchurch City Council and Selwyn District Council staff and some councillors, irrigation interests, consultants, farming interests, and more recently, some corporate dairying interests. Scope Canterbury Regional Council has summarised the scope of the Central Plains Water enhancement scheme as follows; 'The applicants propose to irrigate 60,000 hectares of land between the Rakaia and Waimakariri Rivers from the Malvern foothills to State Highway One. Water will be abstracted at a rate of up to 40 m3/s from two points on the Waimakariri River and one point on the Rakaia River. The water will be irrigated directly from the river and via a storage system. The proposal includes a 55-metre high storage dam within the Waianiwaniwa Valley and associated land use applications for works within watercourses.' The proposed dam would be about 2 kilometres long, with a maximum height of 55 metres, with a base width of about 250 metres, and 10 m wide crest, with a capacity of 280 million cubic metres. The dam would be 1.5 kilometres north east of the town of Coalgate. The two rivers and the reservoir would be connected by a headrace canal, 53 kilometres long, 5 metres deep and 30 metres wide (40–50 metres including embankments). Water would be delivered to farmers via 460 kilometres of water races, ranging in width from 14 to 27 metres, including the embankments. A brief history In 1991, Christchurch City Council and the Selwyn District Council, in their annual planning process, agree on a feasibility study on irrigation of the Central Plains. The two councils provide a budget and set up a joint steering committee. In 2000, the steering committee contracts consulting firm URS New Zealand Limited to prepare a scoping report. In late 2001, the steering committee applies for resource consent to take 40 m3/s of water from the Rakaia River and the Waimakariri River. In January 2002, the steering committee releases the feasibility study and seeks to continue the project. In 2003, the Central Plains Water Trust was set up to apply for resource consents, and the Trust establishes a company, Central Plains Water Limited, to raise funds from farmers via a share subscription. In 2004 Central Plains Water Limited issued a share prospectus and the share subscription is over-subscribed. 
In November 2005, further consent applications for land and water use were lodged with Canterbury Regional Council and Central Plains Water Limited becomes a 'requiring authority'. In June 2006, further consent applications for land use and a notice of requirement, the precursor to the use of the Public Works Act to compulsorily acquire land, are lodged with Selwyn District Council. In July 2007, the trustees of Central Plains Water Trust informed Christchurch City Council that they had run out of money to fund the lawyers and consultants needed for the consent and notice of requirement hearings. Christchurch City Council gave approval for Central Plains Water Limited to borrow up to $4.8 million from corporate dairy farmer Dairy Holdings Limited. The hearing to decide the resource consent applications and submissions and the notice of requirement commenced on 25 February 2008. In September 2012, Selwyn District Council approved a loan of $5 million to Central Plains Water Limited for the design stage. Supporters The Central Plains Water enhancement scheme has had a small but influential group of supporters, some of whom have been involved as steering committee members, trustees and company directors. The supporters have included development-minded council politicians, council staff with water engineering backgrounds, directors of council-owned companies, farmer representatives and consultants. The advancement of the scheme appears to have coincided with career moves and business interests of some of these supporters. The initial membership of Central Plains Water Enhancement Steering Committee consisted of Councillor Pat Harrow (Christchurch City Council) and Councillors Christiansen and Wild (Selwyn District Council) and Doug Marsh, Jack Searle, John Donkers, Willie Palmer and Doug Catherwood. Christchurch City councillor Denis O'Rourke was soon added and Doug Marsh became chairperson. Doug Marsh is now the Chairperson of the Central Plains Water Trust and a director of Central Plains Water Limited. He describes himself as a "Christchurch-based professional (company) director" Doug Marsh appears to specialise in council-owned companies. Doug Marsh is also the Chairman of the board of the Directors of the Selwyn Plantation Board Ltd, the Chairman of Plains Laminates Ltd, Chairman of the Canterbury A & P Board, Chairman of Southern Cross Engineering Holdings Ltd, a Director of City Care Ltd, a Director of Electricity Ashburton Ltd and a Director of Hindin Communications Ltd Denis O'Rourke and Doug Catherwood, who were two of the original members of the steering committee, are now Trustees of the Central Plains Water Trust. Allan Watson, who was the Christchurch City Council Water Services Manager in 1999, had a very important role. Watson wrote most of the reports submitted to the Christchurch City Council strategy and resources committee between late 1999 and 2003. Watson wrote the initial report to the Christchurch City Council strategy and resources committee that set up the Central Plains joint steering committee. Watson wrote the crucial report in February 2002 that recommended that the scheme be considered feasible and that the role of the steering committee be continued. Watson had previously been the Malvern County Engineer for 10 years. Allan Watson now works for the consulting firm GHD and he has publicly represented GHD as the project managers for the Central Plains Water Enhancement scheme. 
In 2000, Walter Lewthwaite was one of the original Christchurch City Council employees supporting the joint Steering Committee. Lewthwaite had 30 years' experience in water engineering and 14 years' experience in managing irrigation projects. In November 2005, Lewthwaite was a Senior Environmental Engineer employed by URS New Zealand Limited, and the project manager and co-author of the application for resource consents lodged with Canterbury Regional Council. By June 2006, Lewthwaite was an Associate of URS New Zealand Limited. In September 2006, Lewthwaite also prepared information to support the applications to Selwyn District Council. Opponents The Central Plains Water Enhancement Scheme is opposed by farmers and community, recreation and environment groups. Opponents include: individual farmers such as Sheffield Valley farmer Marty Lucas, who would lose more than 30% of his property; the Malvern Hills Protection Society (formerly the 'Dam Action Group'); the Water Rights Trust; the New Zealand Recreational Canoeing Association; the Christchurch-based White Water Canoe Club; the Royal Forest and Bird Protection Society of New Zealand; the Fish and Game Council of New Zealand; and the Green Party of Aotearoa New Zealand. Between 1,192 and 1,316 public submitters opposed the 64 notified consent applications lodged with Canterbury Regional Council, and between 153 and 172 submissions were in support. The range of numbers is presumably due to some submissions addressing specific consent applications rather than all of the applications included in the proposal. Costs The estimated construction costs of the scheme have doubled since the 2002 'feasibility' study and have increased by 500% since the first scoping study. In December 2000, the initial scoping study estimated the total cost of the scheme to be $NZ120 million or $1,190.48 per hectare irrigated. By September 2001, the estimated scheme cost was $NZ201.7 million or $2,400 per hectare irrigated. In February 2002, when Christchurch City Council and Selwyn District Council were presented with the feasibility study, the estimated scheme cost was $NZ235 million for 84,000 hectares or $2,798 per hectare irrigated. At 1 April 2004, the estimated scheme cost was $NZ372 million for 60,000 hectares or $6,200 per hectare irrigated. In January 2006, Central Plains Water Limited director John Donkers stated that the total cost was $NZ367 million for 60,000 hectares or $NZ6,117 per hectare. In December 2007, the estimate of the total cost of the scheme appeared to be $6,826 per hectare irrigated. On 19 February 2008, the evidence of Walter Lewthwaite, one of the principal engineering witnesses for Central Plains Water Trust, became available from the Canterbury Regional Council website. Lewthwaite stated that in early 2007 he compiled and supplied an estimate of the total scheme cost to Mr Donnelly (the economist) and Mr MacFarlane (the farm management consultant) for their use in providing the economic analysis. The estimate was $NZ409.6 million for a scheme area of 60,000 hectares, or $6,826 per hectare irrigated. 
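The per-hectare figures quoted above are simple divisions of estimated total cost by irrigated area. A minimal sketch of that arithmetic, using only the totals and areas cited in this section (Python is used purely for illustration and is not part of the source material):

```python
# Illustrative check of the per-hectare figures quoted in the Costs section.
# Totals (NZ$) and irrigated areas (hectares) are the estimates cited above;
# the division is the only thing this sketch adds.
estimates = {
    "Feb 2002 feasibility study": (235_000_000, 84_000),
    "1 April 2004 estimate":      (372_000_000, 60_000),
    "Jan 2006 (Donkers)":         (367_000_000, 60_000),
    "Early 2007 (Lewthwaite)":    (409_600_000, 60_000),
}

for label, (total_nzd, hectares) in estimates.items():
    print(f"{label}: NZ${total_nzd / hectares:,.0f} per hectare irrigated")

# Prints roughly 2,798 / 6,200 / 6,117 / 6,827 NZ$ per hectare, matching the
# quoted figures to within rounding.
```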
The feasibility study stage The constitution and terms of reference for the Central Plains Water Enhancement Steering Committee was approved on 14 February 2000. The terms of reference had these two objectives: to execute feasibility studies into the viability and practicality of water enhancement schemes in the Central Plains area ... and to undertake feasibility studies for the Central Plains area sufficiently detailed to allow decisions on the advisability of proceeding to resource consent applications and eventual scheme implementation. The feasibility studies also had a required level of detail: 'The level of detail of these studies shall be sufficient to allow decisions to be made by the Councils on the advisability of proceeding to resource consent applications and scheme implementation.' By February 2001, the steering committee had identified 27 tasks that would be necessary to complete the feasibility study. The list of tasks was comprehensive; it included the assessment of economic effects, benefits, environmental effects, social effects, cultural effects, risks, planning, land accessibility, environmental and technical feasibility, and consentability. Item 23 was specifically entitled 'Land Accessibility'. On 11 February 2002 the Central Plains Water Enhancement Steering Committee presented the URS feasibility report and their own report to a joint meeting of the two 'parent' Councils. On 18 February 2002 the reports were presented to the Strategy and Finance committee of the Christchurch City Council. The conclusion of the URS feasibility study was stated fairly firmly: "that a water enhancement scheme for the Central Plains can be built, is affordable, will have effects that can be mitigated, and is therefore feasible". The Steering Committee's conclusion was much less firm: "the affordability, bankability and consentability of the proposed scheme has been proved to a degree sufficient to give the Selwyn District Council and Christchurch City Councils confidence to proceed with the project to the next stage." The Steering Committee had not provided a full conclusion on a number of issues from the list of 27 feasibility study tasks. They had instead simply moved the resolution of a number of the important issues from the feasibility study stage to a new stage to be called 'concept refinement'. The issues to be dealt with later were: more technical investigations; the scheme's ownership structure; how to acquire land for dams and races; and the mitigation of social, environmental and cultural effects. Court actions with other competing abstractors Central Plains Water Trust has been in lengthy litigation with Ngāi Tahu Properties Limited and Synlait. The three entities have resource consents, or applications for resource consents, to take the same water - the remaining water from the Rakaia and Waimakariri Rivers allocated for abstraction by the Rakaia Water Conservation Order or the Waimakariri Rivers Regional Plan. The issue before the courts is: who has first access to limited water? The first to have consent granted? The first to file an application to take water? The first to file all necessary applications? The first to have replied to requests for information so that the application is complete and therefore 'notifiable'? The cases have been appealed up to the Supreme Court. Ngāi Tahu Properties Limited On 28 January 2005, Ngāi Tahu Properties Limited applied for competing resource consents to take 3.96 m³/s of water from the Waimakariri River and use it for irrigation of 5,700 hectares of land to the north of the Waimakariri River. On 17 September 2005 the Ngāi Tahu applications were publicly notified. A hearing before independent commissioners was held in February 2006. 
On 26 and 27 June 2006, Ngāi Tahu Properties Limited sought a declaration from the Environment Court that their application to take water from the Waimakariri River had 'priority' over the 2001 CPWT application and therefore could be granted before the CPWT application. On 22 August 2006, the Environment Court released a decision that Ngāi Tahu Properties Limited had priority to the remaining 'A' allocation block of water from the Waimakariri River over the Central Plains Water Trust application. The Central Plains Water Trust then appealed the decision to the High Court on the grounds that, as it had applied first, its priority to the water should be upheld, even though a decision on its application was still some time in the future. The High Court agreed with the Environment Court that priority to a limited resource went to the applications that were ready to be 'notifiable' first, not the applicant who applied first. That decision confirmed that Ngāi Tahu Properties Limited would be able to take water under their consents from the Waimakariri River at a more favourable minimum flow than any later consent granted to Central Plains Water Trust. However, Central Plains Water Trust appealed this decision to the Court of Appeal and the case was heard on 28 February 2008. On 19 March 2008, the Court of Appeal released a majority decision that reversed the Environment Court and High Court decisions and awarded priority to Central Plains Water Trust. Justice Robertson gave a dissenting opinion that, without the full information, the original CPW application had not been ready for notification in 2001. On 24 June 2008 the Supreme Court granted Ngāi Tahu Properties Limited leave to appeal the Court of Appeal decision. Synlait In early 2007, the Central Plains Water Trust and the Ashburton Community Water Trust went to the Environment Court for a declaration that their 2001 consent application for water from the Rakaia River had priority over the consent application made by dairying company Synlait (Robindale) Dairies. In May 2007, the Environment Court ruled the Central Plains Water Trust application had priority over the Synlait application. Synlait director Ben Dingle said that the decision was being appealed to the High Court. The High Court heard this appeal on 23 and 24 October 2007. On 13 March 2008, the High Court released its decision to uphold the appeal and to award priority to Synlait. Central Plains Water Limited announced it would lodge an appeal with the Court of Appeal. The corporate dairying connection In May 2007, confidential minutes from the March board meeting of Central Plains Water Limited were leaked to the media. The minutes stated that the councils (Christchurch and Selwyn District) had to agree to a 'bail out' loan or the scheme would be 'killed'. Central Plains Water later confirmed that the corporate dairy farming company, Dairy Holdings Limited, was prepared to offer a large loan to the scheme. Dairy Holdings Limited operates 57 dairy farms and is owned by Timaru millionaire Allan Hubbard and Fonterra board member Colin Armer. On 5 June 2007, Christchurch City Council was informed that Central Plains Water Limited had 'a shortfall of $NZ1 million' and had run out of money needed to pay for the expenses of the impending hearings on the applications for the various resource consents. 
On 7 June 2007, the Christchurch City Council authorised two Council general managers to approve loan agreements for CPWL to borrow up to a maximum of $4.8 million, subject to the Central Plains Water Trust continuing to 'own' the resource consents, as required by the April 2003 Memorandum of Understanding. The Malvern Hills Protection Society questioned whether the Central Plains resource consent applications had been offered as security for the $NZ4.8 million loan and whether such a loan would breach the 2004 CPW Memorandum of Agreement, which forbids the transfer or assignment of CPW's interest in the resource consents. Similarly, Ben Dingle, a director of the competing dairying company, Synlait, also questioned the community benefit of the Central Plains project, as the main benefits of irrigation schemes (increased land values and higher-value land-uses) flow to the landowners who have access to the water. A report to the Christchurch City Council meeting of 13 December 2007 gave the details of the final loan arrangements. On 19 October 2007, two Council general managers signed the loan agreement with Dairy Holdings Limited. The amount initially borrowed from Dairy Holdings Limited was $NZ1.7 million out of a maximum of $4.8 million. The law firm Anthony Harper had certified that the loan was not contrary to the Memorandum of Agreement as the resource consent applications were not used as security. However, the loan agreement granted Dairy Holdings Limited a sub-licence from CPWL to use the CPW water consents by taking water for irrigation from the Rakaia River. The sub-licence runs from the date the consents are granted until the date the whole scheme is operational. The Christchurch City councillors voted (eight votes against, five votes for) not to accept the report. A resource consent is specifically declared by the Resource Management Act 1991 not to be real or personal property. Resource consents are not 'owned'; they are 'held' by 'consent holders'. The Central Plains Water Trust applications for resource consents may not have been technically used as security for the loan from Dairy Holdings Limited. However, the Christchurch City Council report clarifies that Dairy Holdings Limited will now get the benefit of the first use of water from the Rakaia River under the loan arrangement. That benefit will flow from the date the consents are granted, which will be some years before any of the 'ordinary' farmer shareholders in CPWL receive water, once the full scheme is constructed. The concept of guaranteed public 'ownership' of the resource consents by Central Plains Water Trust is something of a fiction, given that a private company, Central Plains Water Limited, has an exclusive licence to operate the consents to take and use water for irrigation, and particularly given that Central Plains Water Limited has already granted a sublicence for the Rakaia River water to Dairy Holdings Limited. Local government elections October 2007 The Central Plains Water enhancement scheme was the second most important issue in the 2007 Christchurch local government elections, according to a poll of 320 people commissioned by the Christchurch newspaper The Press. Bob Parker, who became the new Mayor of Christchurch, favoured allowing the Central Plains Water scheme to proceed through the hearings into the resource consent applications. Megan Woods, the unsuccessful Christchurch mayoral candidate, did not support the Central Plains Water scheme. 
Sally Buck, a Christchurch City Councillor in the Fendalton Waimairi Ward, strongly opposed the Central Plains Water scheme. Four new regional councillors elected to Canterbury Regional Council opposed the Central Plains Water scheme. The four were: David Sutherland and Rik Tindall, who stood as "Save Our Water" candidates, and independent candidates Jane Demeter and Eugenie Sage. Richard Budd, a long-serving regional councillor who had been a paid consultation facilitator for Central Plains Water, lost the Christchurch East ward to Sutherland and Tindall. Defeated regional councillor Elizabeth Cunningham commented that she thought it unlikely that the Central Plains Water scheme could be stopped by the new councillors, as it was still proceeding to resource consent hearings where the new councillors would have little influence. Environmental effects The proposed scheme has a number of environmental effects. The dam would result in a loss of habitat for the endangered Canterbury mudfish. The dam would also affect amenity and landscape values, especially for the settlement of Coalgate. Water abstraction from the rivers will have an effect on ecology and other natural characteristics. The intensification of farming as a result of water being made available by the scheme has led to fears of increased nitrate contamination of the aquifers. Canterbury mudfish habitat The Canterbury mudfish is a native freshwater fish of the galaxiid family that is found only in Canterbury. It is an acutely threatened species that is classified as 'Nationally Endangered'. In October 2002, staff of the National Institute of Water and Atmospheric Research (NIWA) were engaged by Central Plains to survey fish populations in the Waianiwaniwa River catchment as part of the investigation into the potential dam site. The survey identified a large and abundant population of Canterbury mudfish that had previously been unknown. NIWA concluded that the dam would be problematic for the mudfish as their habitat would be replaced by an unsuitable reservoir and the remaining waterways would be opened to predatory eels. Although NIWA did no further work for Central Plains Water, much of NIWA's fish survey was included in the assessment of effects on the environment prepared by URS New Zealand Limited. However, a new approach to the effects on the mudfish was included. Mitigation of the loss of habitat would be further evaluated following consultation with the Department of Conservation. In July 2006, and in January and February 2007, University of Canterbury researchers surveyed the Waianiwaniwa Valley for mudfish. The fish identified ranged from young recruits to mature adult fish, indicating a healthy population. Canterbury mudfish occur in at least 24 kilometres of the Waianiwaniwa River. Also, sites in the Waianiwaniwa Valley accounted for 47% of all fish database records known for Canterbury mudfish (based on mean catch per unit effort). Therefore, it was concluded that the Waianiwaniwa catchment is the most important known habitat for this species. Forest and Bird's expert witness, ecologist Colin Meurk, concluded that the Waianiwaniwa catchment "represents the largest known Canterbury mudfish habitat and is substantially larger than any other documented mudfish habitats. A rare combination of conditions makes the Waianiwaniwa River a unique ecosystem and creates an important whole catchment refuge for the conservation of this nationally threatened species". 
Angus McIntosh, Associate Professor of Freshwater Ecology in the School of Biological Sciences at the University of Canterbury, presented evidence on behalf of the Department of Conservation. He disagreed with the CPW evidence on mudfish and made three conclusions: that the Waianiwaniwa Valley population of Canterbury mudfish (Neochanna burrowsius) is the largest and most important population of this nationally endangered fish in existence; that the construction of the dam in the Waianiwaniwa Valley will eliminate the natural population, and mudfish will not be able to live in the reservoir or any connected streams; and that CPW's proposed measures to mitigate the loss of the Waianiwaniwa population of Canterbury mudfish are inadequate to address the significance and characteristics of the mudfish population that would be lost and are largely undocumented. The hearing of the applications and submissions The hearing, to decide the applications for resource consents sought from Canterbury Regional Council and Selwyn District Council and the notice of requirement for designation, commenced on 25 February 2008 and ended on 25 September 2008. The hearing was the largest ever held by Canterbury Regional Council. The hearing panel heard evidence from several hundred submitters on 71 days over the course of the hearing, at an expected cost of $2.1 million. Council officers' reports The summary Canterbury Regional Council report, by Principal Consents Advisor Leo Fietje, did not make a formal recommendation to either grant or decline the applications. However, it concluded that, on the basis of the applicant's evidence and the officers' reviews to date, some adverse effects could not be avoided, remedied or mitigated. Uncertainty remained over fish screens, the natural character of the Waimakariri River, terrestrial ecology, and effects on lowland streams. Increased nitrate-nitrogen concentrations were considered significant. The loss of endangered Canterbury mudfish habitat due to the dam was considered to be a significant adverse effect. The report noted that any recommendations were not binding on the hearing panel, which might reach different conclusions on hearing further evidence. The summary Selwyn District Council report, by Nick Boyes of Resource Management Group Ltd, recommended declining both the Notice of Requirement and the applications for land use consents. The report also noted that any recommendation was not binding on the hearing panel, which might reach different conclusions on hearing further evidence. Several reasons for the recommendation were given. CPW had relied on ten management plans to mitigate adverse effects, but had not provided draft copies of any such plans. Insufficient information was provided, despite formal requests, for the Selwyn District Council witnesses to assess the significance of the social effects, the effects on archaeological and heritage values, effects on wetlands and terrestrial ecology, effects on water safety, and the effects on Ngāi Tahu statutory acknowledgment areas. The cost-benefit analysis, which was critical to farmer uptake of and investment in the scheme, and therefore to its viability, was considered to lack robustness and to overstate benefits and understate costs. CPW evidence In resource consent hearings the burden of proof generally falls on the consent applicant to satisfy a hearing panel that the purpose of the Resource Management Act is met by granting rather than refusing consent. 
Also, a burden of proof lies on any party who wishes a hearing panel (or the Environment Court) to make a determination of adverse or positive effects. A 'scintilla' of probative evidence may be enough to make an issue of a particular adverse effect 'live', and therefore to require rebuttal if it is not to be taken as established. The officers' reports, in noting several adverse effects, moved the burden of proof for rebuttal onto the witnesses for Central Plains Water Trust. The opening legal submission for Central Plains Water Trust summarised their technical evidence and concluded that any adverse effects of the scheme would either be adequately mitigated or would be insignificant in light of the positive economic benefits of the scheme. The expert witnesses for Central Plains provided many reports of technical evidence. Interim decision to decline dam On 3 April 2009, the Commissioners released a minute stating that consents to dam the Waianiwaniwa River were unlikely to be granted and that the hearing would be resumed on 11 May 2009 to decide whether to proceed with a proposal not including water storage. The minute requested legal submissions on that point. Central Plains Water Limited chairman Pat Morrison stated that the most important short-term goal was to get the water takes from the Waimakariri and Rakaia rivers granted. Implications for the scheme CPW responded that the hearing should continue to consider the water take and associated canal consents and the notice of requirement. The Department of Conservation, the Fish and Game Council, the Royal Forest and Bird Protection Society and Te Runanga o Ngai Tahu (TRONT) all submitted that the hearing panel should close the hearing and decline all the consents applied for by CPW, as these had been presented as an integrated proposal in which water storage was fundamental. The Malvern Hills Protection Society recommended declining all applications, noting that CPW had obtained requiring authority status on the basis that the dam and reservoir were essential (para 14). The Society also noted that any water-take consents granted were likely to be ultimately transferred to Dairy Holdings Limited under existing loan agreements (para 29). Revised divert and irrigate proposal On 20 May 2009, the Hearing Panel decided that it would continue to hear evidence from CPW on a modified scheme from 5 October 2009. On 30 October 2009, the Commissioners announced that, subject to conditions, they considered they could issue resource consents and grant the Notice of Requirement for the revised scheme. They intended to convene again in early 2010 to finalise consent conditions and to complete a final decision. Decision June 2010 In June 2010, Environment Canterbury issued a press release stating that the hearing panel had granted 31 consents and the notice of requirement for the revised scheme without the storage dam. The full report of the hearing panel is available on the Environment Canterbury website. By the end of June 2010, six appeals of the decision had been lodged with the Environment Court. Central Plains Water Trust lodged one of the appeals as applicant, in order to change some consent conditions that limited the taking of water to 12 hours a day. Christchurch City Council appealed because it considered too much water would be taken from the Waimakariri River, which might affect Christchurch's water supply. Fish and Game's appeal was motivated by concern over the Waimakariri River take and 'inadequate' fish screening conditions. 
Ngāi Tahu's appeal concerned the Waimakariri River take and the legality of the change in scope of the consents granted from what had been applied for. Other appellants were a member of the Deans family and some extractors of river gravel. In July 2012, the resource consents for the scheme were confirmed by the Environment Court. References External links Central Plains Water Trust Christchurch Library - CPW page Canterbury Water Management Strategy - an initiative by the Ministry of Agriculture and Forestry, Ministry for the Environment and Environment Canterbury Environmental issues in New Zealand Canterbury Region Water and politics Irrigation projects Irrigation in New Zealand
Central Plains Water
[ "Engineering" ]
6,082
[ "Irrigation projects" ]
13,168,288
https://en.wikipedia.org/wiki/Jackup%20rig
A jackup rig or a self-elevating unit is a type of mobile platform that consists of a buoyant hull fitted with a number of movable legs, capable of raising its hull over the surface of the sea. The buoyant hull enables transportation of the unit and all attached machinery to a desired location. Once on location the hull is raised to the required elevation above the sea surface, supported by the sea bed. The legs of such units may be designed to penetrate the sea bed, may be fitted with enlarged sections or footings, or may be attached to a bottom mat. Generally jackup rigs are not self-propelled and rely on tugs or heavy lift ships for transportation. Jackup platforms are almost exclusively used as exploratory oil and gas drilling platforms and as offshore wind farm service platforms. Jackup rigs can either be triangular in shape with three legs or square in shape with four legs. Jackup platforms have been the most popular and numerous of the various mobile types in existence. The number of jackup drilling rigs in operation was about 540 at the end of 2013. The tallest jackup rig built to date is the Noble Lloyd Noble, completed in 2016 with legs 214 metres (702 feet) tall. Name Jackup rigs are so named because they are self-elevating, with three, four, six and even eight movable legs that can be extended ("jacked") above or below the hull. Jackups are towed or moved under self-propulsion to the site with the hull lowered to the water level and the legs extended above the hull. The hull is actually a water-tight barge that floats on the water's surface. When the rig reaches the work site, the crew jacks the legs downward through the water and into the sea floor (or onto the sea floor with mat-supported jackups). This anchors the rig and holds the hull well above the waves. History An early design was the DeLong platform, designed by Leon B. DeLong. In 1949 he started his own company, DeLong Engineering & Construction Company. In 1950 he constructed the DeLong Rig No. 1 for Magnolia Petroleum, consisting of a barge with six legs. In 1953 DeLong entered into a joint venture with McDermott, which built the DeLong-McDermott No.1 in 1954 for Humble Oil. This was the first mobile offshore drilling platform. This barge had ten legs fitted with spud cans to prevent them from digging into the seabed too deeply. When DeLong-McDermott was taken over by the Southern Natural Gas Company, which formed The Offshore Company, the platform was called Offshore No. 51. In 1954, Zapata Offshore, owned by George H. W. Bush, ordered the Scorpion. It was designed by R. G. LeTourneau and featured three electro-mechanically operated lattice-type legs. Built on the shores of the Mississippi River by the LeTourneau Company, it was launched in December 1955. The Scorpion was put into operation in May 1956 off Port Aransas, Texas. The second, also designed by LeTourneau, was called Vinegaroon. Operation A jackup rig is a barge fitted with long support legs that can be raised or lowered. The jackup is maneuvered (self-propelled or by towing) into location with its legs up and the hull floating on the water. Upon arrival at the work location, the legs are jacked down onto the seafloor. Then "preloading" takes place, where the weight of the barge and additional ballast water are used to drive the legs securely into the sea bottom so they will not penetrate further while operations are carried out. After preloading, the jacking system is used to raise the entire barge above the water to a predetermined height or "air gap", so that wave, tidal and current loading acts only on the relatively slender legs and not on the barge hull. Modern jacking systems use a rack and pinion gear arrangement where the pinion gears are driven by hydraulic or electric motors and the rack is affixed to the legs. Jackup rigs can only be placed in relatively shallow waters. However, a specialized class of jackup rigs known as premium or ultra-premium jackups have operational capability in water depths ranging from 150 to 190 metres (500 to 625 feet). 
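The operating sequence described above implies a simple leg-length budget: the legs must span the water depth, their penetration into the seabed, the depth of the hull, and the air gap. The sketch below illustrates that arithmetic; apart from the 214-metre legs of the Noble Lloyd Noble and the 150-metre lower end of the premium operating range quoted in this article, every figure is an assumed, illustrative value rather than data from the source.

```python
# Minimal leg-length budget for a jackup on location, assuming a simple
# additive model: leg length >= water depth + leg penetration into the
# seabed + hull depth + air gap.

def required_leg_length(water_depth_m, penetration_m, hull_depth_m, air_gap_m):
    """Minimum leg length needed for the stated operating condition."""
    return water_depth_m + penetration_m + hull_depth_m + air_gap_m

needed = required_leg_length(
    water_depth_m=150.0,   # lower end of the premium range quoted above
    penetration_m=25.0,    # assumed spud-can penetration into the seabed
    hull_depth_m=10.0,     # assumed depth of the barge hull
    air_gap_m=20.0,        # assumed clearance above the sea surface
)
print(f"Required leg length: {needed:.0f} m")           # 205 m
print(f"Margin on 214 m legs: {214.0 - needed:.0f} m")  # 9 m
```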
Types Mobile Offshore Drilling Units (MODU) This type of rig is commonly used in connection with oil and/or natural gas drilling. There are more jackup rigs in the worldwide offshore rig fleet than any other type of mobile offshore drilling rig. Other types of offshore rigs include semi-submersibles (which float on pontoon-like structures) and drillships, which are ship-shaped vessels with rigs mounted in their center. These rigs drill through holes in the drillship hulls, known as moon pools. Turbine Installation Vessel (TIV) This type of rig is commonly used in connection with offshore wind turbine installation. Barges The term jackup rig can also refer to specialized barges that are similar to an oil and gas platform but are used as a base for servicing other structures such as offshore wind turbines, long bridges, and drilling platforms. See also Crane vessel Offshore geotechnical engineering Oil platform Rack phase difference TIV Resolution References Oil platforms Ship types
Jackup rig
[ "Chemistry", "Engineering" ]
1,091
[ "Oil platforms", "Petroleum technology", "Natural gas technology", "Structural engineering" ]
14,325,087
https://en.wikipedia.org/wiki/Pseudodementia
Pseudodementia (otherwise known as depression-related cognitive dysfunction or depressive cognitive disorder) is a condition in which cognitive and functional impairment imitating dementia occurs secondary to psychiatric disorders, especially depression. Pseudodementia can develop in a wide range of neuropsychiatric diseases, such as depression, schizophrenia and other psychoses, mania, dissociative disorders, and conversion disorders. The presentations of pseudodementia may mimic organic dementia, but are essentially reversible with treatment and do not involve actual brain degeneration. However, it has been found that some of the cognitive symptoms associated with pseudodementia can persist as residual symptoms and even transform into true neurodegenerative dementia in some cases. Psychiatric conditions, mainly depression, rather than age, are the strongest risk factors for pseudodementia. Even though most of the existing studies have focused on older age groups, younger adults can develop pseudodementia if they have depression. While aging does affect cognition and brain function, making it hard to distinguish depressive cognitive disorder from actual dementia, differential diagnostic screenings are available. It is crucial to confirm the correct diagnosis since depressive cognitive disorder is reversible with proper treatment. Pseudodementia typically involves three cognitive components: memory issues, deficits in executive functioning, and deficits in speech and language. Specific cognitive symptoms might include trouble recalling words or remembering things in general, decreased attentional control and concentration, difficulty completing tasks or making decisions, decreased speed and fluency of speech, and impaired processing speed. Since the symptoms of pseudodementia are highly similar to those of dementia, it is critical to complete a differential diagnosis to exclude dementia. People with pseudodementia are typically very distressed about the cognitive impairment they experience. Currently, the treatment of pseudodementia is mainly focused on treating depression, cognitive impairment, and dementia. Improvements in cognitive dysfunction have been seen with antidepressants such as SSRIs (selective serotonin reuptake inhibitors), SNRIs (serotonin-norepinephrine reuptake inhibitors), TCAs (tricyclic antidepressants), zolmitriptan, vortioxetine, and cholinesterase inhibitors. History Carl Wernicke is often believed to have been the source of the term pseudodementia (in his native German, pseudodemenz). Despite this belief being held by many of his students, Wernicke never actually used the word in any of his written works. It is possible that this misconception comes from Wernicke's discussions on Ganser's syndrome. Instead, the first written instance of pseudodementia was by one of Wernicke's students, Georg Stertz. However, the term was not linked to its modern understanding until 1961, by psychiatrist Leslie Gordon Kiloh, who noticed patients with cognitive symptoms consistent with dementia who improved with treatment. Kiloh believed that the term should be used to describe a person's presentation, rather than an outright diagnosis. Modern research, however, has shown evidence for the term being used in such a way. Reversible causes of true dementia must be excluded. His term was mainly descriptive. The clinical phenomenon, however, has been well-known since the late 19th century as melancholic dementia. 
Doubts about the classification and features of the syndrome, and the misleading nature of the name, led to proposals that the term be dropped. However, proponents argue that although it is not a defined singular concept with a precise set of symptoms, it is a practical and useful term that has held up well in clinical practice, and also highlights those who may have a treatable condition. Presentation The history of disturbance in pseudodementia is often short, with an abrupt onset, while the onset of dementia is more often insidious. In addition, there are often only minor abnormal brain patterns on imaging, or none at all, that would indicate an organic component to the cognitive decline, such as one would see in dementia. The key symptoms of pseudodementia include speech impairments, memory deficits, attention problems, emotional control issues, organization difficulties, and problems with decision making. Clinically, people with pseudodementia differ from those with true dementia when their memory is tested. They will often answer that they do not know the answer to a question, and their attention and concentration are often intact. By contrast, those presenting with organic dementia will often give "near-miss" answers rather than stating that they do not know the answer. This can make diagnosis difficult and result in misdiagnosis, as a patient might have organic dementia but answer questions in a way that suggests pseudodementia, or vice versa. In addition, people presenting with pseudodementia often lack the gradual mental decline seen in true dementia; they instead tend to remain at the same level of reduced cognitive function throughout. However, for some, pseudodementia can eventually progress to organic dementia and lead to lowered cognitive function. Because of this, some recommend that elderly patients who present with pseudodementia receive a full screening for dementia, and that their cognitive faculties be closely monitored in order to catch progression to organic dementia early. People with pseudodementia may appear upset or distressed, whereas those with true dementia will often give wrong answers, have poor attention and concentration, and appear indifferent or unconcerned. The symptoms of depression often mimic those of dementia, even though the two may co-occur. Causes Pseudodementia refers to "behavioral changes that resemble those of the progressive degenerative dementias, but which are attributable to so-called functional causes". The main cause of pseudodementia is depression. Any age group can develop pseudodementia. In depression, processing centers in the brain responsible for cognitive function and memory are affected, including the prefrontal cortex, amygdala, and hippocampus. Reduced function of the hippocampus results in impaired recognition and recall of memories, a symptom commonly associated with dementia. While not as common, other mental health disorders and comorbidities can also cause symptoms that mimic dementia, and thus must be considered when making a diagnosis. Diagnosis Differential diagnosis While there is currently no cure for dementia, other psychiatric disorders that may result in dementia-like symptoms are able to be treated. Thus, it is essential to complete a differential diagnosis, where other possibilities are appropriately ruled out to avoid misdiagnosis and inappropriate treatment plans. The implementation and application of existing collaborative care models, such as DICE (describe, investigate, create, evaluate), can aid in avoiding misdiagnosis. 
DICE is a method utilized by healthcare workers to evaluate and manage behavioral and psychological symptoms associated with dementia. Comorbidities (such as vascular, infectious, traumatic, autoimmune or idiopathic conditions, or even malnutrition) have the potential to mimic symptoms of dementia and thus must be evaluated, typically by taking a complete patient history and performing a physical exam. For instance, studies have also shown a relationship between depression and its cognitive effects on everyday functioning and distortions of memory. Since pseudodementia does not cause deterioration of the brain, brain scans can be used to visualize potential deterioration associated with dementia. Investigations such as PET and SPECT imaging of the brain show reduced blood flow in areas of the brain in people with Alzheimer's disease (AD), the most common type of dementia, compared with a more normal blood flow in those with pseudodementia. Reduced blood flow leads to an inadequate oxygen supply reaching the brain, causing irreversible cell damage and cell death. In addition, MRI results show medial temporal lobe atrophy, which causes impaired recall of facts and events (declarative memory), in individuals with AD. Pseudodementia vs. dementia Pseudodementia symptoms can appear similar to those of dementia. Because the signs and symptoms are similar, depression can be misdiagnosed as dementia, with adverse effects from inappropriately prescribed medications. Generally, dementia involves a steady and irreversible cognitive decline while pseudodementia-induced symptoms are reversible. Thus, once the depression is properly treated or the medication therapy has been modified, depression-induced cognitive impairment can be effectively reversed. Commonly in older adults, diminished mental capacity and social withdrawal are identified as dementia symptoms without depression being considered and ruled out. As a result, older adult patients are often misdiagnosed due to insufficient testing. Cognitive symptoms such as memory loss, slowed movement, or reduced or slowed speech are sometimes initially misdiagnosed as dementia; in such cases, further investigation has determined that the patients were suffering from a major depressive episode. This is an important distinction, as the former is untreatable, whereas the latter is treatable using antidepressant therapy, electroconvulsive therapy, or both. In contrast to major depression, dementia is a progressive neurodegenerative syndrome involving a pervasive impairment of higher cortical functions resulting from widespread brain pathology. A significant overlap in cognitive and neuropsychological dysfunction in dementia and pseudodementia patients increases the difficulty in diagnosis. Differences in the severity of impairment and the quality of patients' responses can be observed, and a test of antisaccadic movements may be used to differentiate the two, as pseudodementia patients have poorer performance on this test. Other researchers have suggested additional criteria to differentiate pseudodementia from dementia, based on their studies. However, the sample sizes of these studies are relatively small, so the validity of their findings is limited. A systematic review conducted in 2018 reviewed 18 longitudinal studies about pseudodementia. Among the 284 patients studied, 33% developed irreversible dementia, while 53% no longer met the criteria for dementia during follow-up. 
Individuals with pseudodementia present with considerable cognitive deficits, including disorders in learning, memory and psychomotor performance. Substantial evidence from brain imaging, such as CT scanning and positron emission tomography (PET), has also revealed abnormalities in brain structure and function. Management Pharmacological If effective medical treatment for depression is given, this can aid in the distinction between pseudodementia and dementia. Antidepressants have been found to assist in the elimination of cognitive dysfunction associated with depression, whereas cognitive dysfunction associated with true dementia continues along a steady gradient. In cases where antidepressant therapy is not well tolerated, patients can consider electroconvulsive therapy as a possible alternative. However, studies have revealed that some patients who displayed cognitive dysfunction related to depression eventually developed dementia later in their lives. The development of treatments for dementia has not been as fast as that of treatments for depression. Hence, the pharmacological treatments used in pseudodementia do not target the condition itself but instead treat the underlying depression, cognitive impairment, and dementia. These medications include SSRIs (selective serotonin reuptake inhibitors), SNRIs (serotonin-norepinephrine reuptake inhibitors), TCAs (tricyclic antidepressants), zolmitriptan, and cholinesterase inhibitors. SSRIs, or selective serotonin reuptake inhibitors, are a class of antidepressants. Some examples of SSRIs are fluoxetine (Prozac), paroxetine (Paxil), sertraline (Zoloft), citalopram (Celexa), and escitalopram (Lexapro). SSRIs function by inhibiting serotonin reabsorption into neurons, allowing more serotonin to be accessible and improving nerve cell communication. Therefore, SSRIs are considered first-line agents for pseudodementia, as the rise in serotonin levels may assist in alleviating pseudodementia-related depressive symptoms. SNRIs, or serotonin-norepinephrine reuptake inhibitors, are also antidepressants. Some examples of SNRIs are desvenlafaxine (Pristiq), duloxetine (Cymbalta), levomilnacipran (Fetzima), and milnacipran (Savella). In addition to inhibiting serotonin reabsorption, SNRIs also inhibit norepinephrine reabsorption into neurons, allowing more serotonin and norepinephrine to be accessible to nerve cells, improving both nerve cell communication and energy levels. However, SNRIs are considered second-line agents for pseudodementia due to more severe side effects compared with SSRIs, such as dry mouth and hypertension. TCAs, or tricyclic antidepressants, are another class of antidepressants. Some examples of TCAs are amitriptyline (Elavil), clomipramine (Anafranil), doxepin (Sinequan), and imipramine (Tofranil). TCAs function like SNRIs by inhibiting both serotonin and norepinephrine reabsorption into neurons. However, TCAs affect more neurotransmitters, or chemical messengers, than SNRIs, which can cause additional adverse effects. Therefore, TCAs are not recommended unless other antidepressants are no longer working. Zolmitriptan (Zomig) belongs to the class of selective serotonin receptor agonists. The mechanism of action of zolmitriptan is to block pain signals by constricting blood vessels in the brain that cause migraines. 
In addition to affecting blood vessel constriction, zolmitriptan indirectly eases depression associated with pseudodementia since it is a selective serotonin receptor agonist. Cholinesterase inhibitors are a class of drugs that inhibit the breakdown of the neurotransmitter acetylcholine, which helps improve nerve cell communication. Some examples of cholinesterase inhibitors are donepezil (Aricept), rivastigmine (Exelon), and galantamine (Razadyne). All of these cholinesterase inhibitors are FDA-approved to treat all or certain stages of Alzheimer's disease. Since the main cause of pseudodementia is depression, selective serotonin reuptake inhibitors (SSRIs) are still preferred over other medications. Non-pharmacological When pharmacological treatments are ineffective, or in addition to pharmacological treatments, there are a number of non-pharmacological therapies that can be used in the treatment of depression. For some patients, cognitive behavioural therapy (an effective treatment for a wide range of mental illnesses, including depression, anxiety disorders and substance abuse problems, based on the premise that psychological problems are rooted in part in a person's own behaviour and thought patterns, which the patient learns to change using new coping strategies) or interpersonal therapy (which has been used in an integrated manner to treat a wide range of psychiatric disorders, on the premise that a patient's past and present relationships are directly linked to their mental challenges, and that improving those relationships can improve their mental health) can be used to delve deeper into their symptoms, ways to manage them, and the root causes of the patient's depression. Patients can choose to participate in these therapies in individual sessions or in a group setting. Future Research Given the limitations and limited amount of current research on pseudodementia, many questions remain unanswered. Future research on younger age groups is necessary to better characterize the risk factors, further diagnostic criteria, and the correlation between age and the development of pseudodementia. Future studies should also incorporate more modern technologies, such as genetic sequencing, investigation of possible pseudodementia-related biomarkers, and PET scans, to better understand the underlying mechanism of pseudodementia. In addition, future studies should use larger sample sizes to increase the validity of their results, and should include groups at higher risk of developing pseudodementia to extend the scope of the research. References Aging-associated diseases Mood disorders Psychopathological syndromes Memory disorders
Pseudodementia
[ "Biology" ]
3,410
[ "Senescence", "Aging-associated diseases" ]
14,325,287
https://en.wikipedia.org/wiki/Bluebugging
Bluebugging is a form of Bluetooth attack often caused by a lack of awareness. It was developed after the onset of bluejacking and bluesnarfing. Similar to bluesnarfing, bluebugging accesses and uses all phone features but is limited by the transmitting power of class 2 Bluetooth radios, normally capping its range at 10–15 meters. However, the operational range can be increased with the use of a directional antenna. History Bluebugging was developed by the German researcher Martin Herfurt in 2004, one year after the advent of bluejacking. Initially a threat against laptops with Bluetooth capability, it later targeted mobile phones and PDAs. Bluebugging manipulates a target phone into compromising its security, creating a backdoor before returning control of the phone to its owner. Once control of a phone has been established, it is used to call back the hacker, who is then able to listen in to conversations, hence the name "bugging". The Bluebug program also has the capability to create a call-forwarding application whereby the hacker receives calls intended for the target phone. A further development of bluebugging has allowed for the control of target phones through Bluetooth phone headsets; it achieves this by pretending to be the headset and thereby "tricking" the phone into obeying call commands. Not only can a hacker receive calls intended for the target phone, but they can also send messages, read phonebooks, and examine calendars. See also IEEE 802.15 Near-field communication Personal area network References External links Bluetooth Special Interest Group Site (includes specifications) Official Bluetooth site aimed at users Bluetooth/Ethernet Vendor MAC Address Lookup Bluebugging Video and description Bluetooth Hacking (computer security)
Bluebugging
[ "Technology" ]
362
[ "Wireless networking", "Bluetooth" ]
14,325,911
https://en.wikipedia.org/wiki/BCAR1
Breast cancer anti-estrogen resistance protein 1 is a protein that in humans is encoded by the BCAR1 gene. Gene BCAR1 is localized on the q arm of chromosome 16, on the negative strand, and consists of seven exons. Eight different gene isoforms have been identified that share the same sequence from the second exon onwards but are characterized by different starting sites. The longest isoform is called BCAR1-iso1 (RefSeq NM_001170714.1) and is 916 amino acids long; the other, shorter isoforms start with an alternative first exon. Function BCAR1 is a ubiquitously expressed adaptor molecule originally identified as the major substrate of v-Src and v-Crk. p130Cas/BCAR1 belongs to the Cas family of adaptor proteins and can act as a docking protein for several signalling partners. Due to its ability to associate with multiple signaling partners, p130Cas/BCAR1 contributes to the regulation of a variety of signaling pathways involved in cell adhesion, migration, invasion, apoptosis, and responses to hypoxia and mechanical forces. p130Cas/BCAR1 plays a role in cell transformation and cancer progression, and alterations of p130Cas/BCAR1 expression and the resulting activation of selective signalling are determinants for the occurrence of different types of human tumors. Due to the capacity of p130Cas/BCAR1, as an adaptor protein, to interact with multiple partners and to be regulated by phosphorylation and dephosphorylation, its expression and phosphorylation can lead to a wide range of functional consequences. Among the regulators of p130Cas/BCAR1 tyrosine phosphorylation, receptor tyrosine kinases (RTKs) and integrins play a prominent role. RTK-dependent p130Cas/BCAR1 tyrosine phosphorylation and the subsequent binding of specific downstream signaling molecules modulate cell processes such as actin cytoskeleton remodeling, cell adhesion, proliferation, migration, invasion and survival. Integrin-mediated p130Cas/BCAR1 phosphorylation upon adhesion to extracellular matrix (ECM) induces downstream signaling that is required for allowing cells to spread and migrate on the ECM. Both RTKs and integrin activation affect p130Cas/BCAR1 tyrosine phosphorylation and represent an efficient means by which cells utilize signals coming from growth factors and integrin activation to coordinate cell responses. Additionally, p130Cas/BCAR1 tyrosine phosphorylation on its substrate domain can be induced by cell stretching subsequent to changes in the rigidity of the extracellular matrix, allowing cells to respond to mechanical force changes in the cell environment. Cas family p130Cas/BCAR1 is a member of the Cas family (Crk-associated substrate) of adaptor proteins, which is characterized by the presence of multiple conserved motifs for protein–protein interactions, and by extensive tyrosine and serine phosphorylation. The Cas family comprises three other members: NEDD9 (Neural precursor cell expressed, developmentally down-regulated 9, also called Human enhancer of filamentation 1, HEF-1 or Cas-L), EFS (Embryonal Fyn-associated substrate), and CASS4 (Cas scaffolding protein family member 4). These Cas proteins have high structural homology, characterized by the presence of multiple protein interaction domains and phosphorylation motifs through which Cas family members can recruit effector proteins. However, despite the high degree of similarity, their temporal expression, tissue distribution and functional roles are distinct and not overlapping. 
Notably, the knock-out of p130Cas/BCAR1 in mice is embryonic lethal, suggesting that other family members do not show an overlapping role in development. Structure p130Cas/BCAR1 is a scaffold protein characterized by several structural domains. It possesses an N-terminal Src-homology 3 (SH3) domain, followed by a proline-rich domain (PRR) and a substrate domain (SD). The substrate domain consists of 15 repeats of the YxxP consensus phosphorylation motif for Src family kinases (SFKs). Following the substrate domain is the serine-rich domain, which forms a four-helix bundle. This acts as a protein-interaction motif, similar to those found in other adhesion-related proteins such as focal adhesion kinase (FAK) and vinculin. The remaining carboxy-terminal sequence contains a bipartite Src-binding domain (residues 681–713) able to bind both the SH2 and SH3 domains of Src. p130Cas/BCAR1 can undergo extensive changes in tyrosine phosphorylation, which occur predominantly in the 15 YxxP repeats within the substrate domain and represent the major post-translational modification of p130Cas/BCAR1. p130Cas/BCAR1 tyrosine phosphorylation can result from a diverse range of extracellular stimuli, including growth factors, integrin activation, vasoactive hormones and peptide ligands for G-protein coupled receptors. These stimuli trigger p130Cas/BCAR1 tyrosine phosphorylation and its translocation from the cytosol to the cell membrane. Clinical significance Given the ability of the p130Cas/BCAR1 scaffold protein to convey and integrate different types of signals and subsequently to regulate key cellular functions such as adhesion, migration, invasion, proliferation and survival, the existence of a strong correlation between deregulated p130Cas/BCAR1 expression and cancer was inferred. Deregulated expression of p130Cas/BCAR1 has been identified in several cancer types. Altered levels of p130Cas/BCAR1 expression in cancers can result from gene amplification, transcription upregulation or changes in protein stability. Overexpression of p130Cas/BCAR1 has been detected in human breast cancer, prostate cancer, ovarian cancer, lung cancer, colorectal cancer, hepatocellular carcinoma, glioma, melanoma, anaplastic large cell lymphoma and chronic myelogenous leukaemia. The presence of aberrant levels of hyperphosphorylated p130Cas/BCAR1 strongly promotes cell proliferation, migration, invasion, survival, angiogenesis and drug resistance. It has been demonstrated that high levels of p130Cas/BCAR1 expression in breast cancer correlate with worse prognosis, an increased probability of developing metastasis, and resistance to therapy. Conversely, lowering the amount of p130Cas/BCAR1 expression in ovarian, breast and prostate cancer is sufficient to block tumor growth and progression of cancer cells. p130Cas/BCAR1 has potential uses as a diagnostic and prognostic marker for some human cancers. Since lowering p130Cas/BCAR1 in tumor cells is sufficient to halt their transformation and progression, it is conceivable that p130Cas/BCAR1 may represent a therapeutic target. However, the non-catalytic nature of p130Cas/BCAR1 makes it difficult to develop specific inhibitors. Notes References Further reading External links Bcar1 Info with links in the Cell Migration Gateway Proteins
BCAR1
[ "Chemistry" ]
1,576
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
14,326,078
https://en.wikipedia.org/wiki/Actin%2C%20cytoplasmic%202
Actin, cytoplasmic 2, or gamma-actin, is a protein that in humans is encoded by the ACTG1 gene. Gamma-actin is widely expressed in the cellular cytoskeletons of many tissues; in adult striated muscle cells, gamma-actin is localized to Z-discs and costamere structures, which are responsible for force transduction and transmission in muscle cells. Mutations in ACTG1 have been associated with nonsyndromic hearing loss and Baraitser-Winter syndrome, as well as susceptibility of adolescent patients to vincristine toxicity. Structure Human gamma-actin is 41.8 kDa in molecular weight and 375 amino acids in length. Actins are highly conserved proteins that are involved in various types of cell motility and in maintenance of the cytoskeleton. In vertebrates, three main groups of actin paralogs, alpha, beta, and gamma, have been identified. The alpha actins are found in muscle tissues and are a major constituent of the sarcomere contractile apparatus. The beta and gamma actins co-exist in most cell types as components of the cytoskeleton, and as mediators of internal cell motility. Actin, gamma 1, encoded by this gene, is found in non-muscle cells in the cytoplasm, and in muscle cells at costamere structures, or transverse points of cell-cell adhesion that run perpendicular to the long axis of myocytes. Function In myocytes, sarcomeres adhere to the sarcolemma via costameres, which align at Z-discs and M-lines. The two primary cytoskeletal components of costameres are desmin intermediate filaments and gamma-actin microfilaments. It has been shown that the interaction of gamma-actin with another costameric protein, dystrophin, is critical for costameres to form mechanically strong links between the cytoskeleton and the sarcolemmal membrane. Additional studies have shown that gamma-actin colocalizes with alpha-actinin, and GFP-labeled gamma-actin localized to Z-discs, whereas GFP-alpha-actin localized to pointed ends of thin filaments, indicating that gamma-actin specifically localizes to Z-discs in striated muscle cells. During development of myocytes, gamma-actin is thought to play a role in the organization and assembly of developing sarcomeres, evidenced in part by its early colocalization with alpha-actinin. Gamma-actin is eventually replaced by sarcomeric alpha-actin isoforms, with low levels of gamma-actin persisting in adult myocytes, where it associates with Z-disc and costamere domains. Insights into the function of gamma-actin in muscle have come from studies employing transgenesis. Mice with a skeletal muscle-specific knockout of gamma-actin showed no detectable abnormalities in development; however, knockout mice showed muscle weakness and fiber necrosis, along with decreased isometric twitch force, disrupted intrafibrillar and interfibrillar connections among myocytes, and myopathy. Clinical significance An autosomal dominant mutation in ACTG1 in the DFNA20/26 locus at 17q25-qter was identified in patients with hearing loss. A Thr278Ile mutation was identified in helix 9 of the gamma-actin protein, which is predicted to alter protein structure. This study identified the first disease-causing mutation in gamma-actin and underlines the importance of gamma-actin as a structural element of the inner ear hair cells. Since then, other ACTG1 mutations have been linked to nonsyndromic hearing loss, including Met305Thr. 
A missense mutation in ACTG1 at Ser155Phe has also been identified in patients with Baraitser-Winter syndrome, which is a developmental disorder characterized by congenital ptosis, excessively arched eyebrows, hypertelorism, ocular colobomata, lissencephaly, short stature, seizures and hearing loss. Differential expression of ACTG1 mRNA was also identified in patients with sporadic amyotrophic lateral sclerosis, a devastating disease of unknown cause, using a bioinformatics approach employing Affymetrix long-oligonucleotide BaFL methods. Single nucleotide polymorphisms in ACTG1 have been associated with toxicity from vincristine, a drug that is part of the standard treatment regimen for childhood acute lymphoblastic leukemia. Neurotoxicity was more frequent in patients who carried the ACTG1 Gly310Ala mutation, suggesting that this variant may play a role in patient outcomes from vincristine treatment. Interactions ACTG1 has been shown to interact with: CAP1, DMD, TMSB4X, and Plectin. See also Actin References External links Further reading Proteins
Actin, cytoplasmic 2
[ "Chemistry" ]
1,022
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
14,326,079
https://en.wikipedia.org/wiki/Language%20expectancy%20theory
Language expectancy theory (LET) is a theory of persuasion. The theory assumes language is a rules-based system, in which people develop expected norms as to appropriate language usage in given situations. Furthermore, unexpected linguistic usage can affect the receiver's behavior that results from attitudes towards a persuasive message. Background LET was created by Michael Burgoon, a retired professor of medicine from the University of Arizona, and Gerald R. Miller; the inspiration for it was sparked by Brooks' work on expectations of language in 1970. Burgoon, Jones and Stewart furthered the discussion with the idea of linguistic strategies and message intensity in an essay published in 1975. The essay linked linguistic strategies, or how a message is framed, to effective persuasive outcomes. The original work for language expectancy theory was published in 1978. Titled "An empirical test of a model of resistance to persuasion", it outlined the theory through 17 propositions. Expectations The theory views language expectancies as enduring patterns of anticipated communication behavior which are grounded in a society's psychological and cultural norms. Such societal forces influence language and enable the identification of non-normative use; violations of linguistic, syntactic and semantic expectations will either facilitate or inhibit an audience's receptivity to persuasion. Burgoon claims applications for his theory in management, media, politics and medicine, and declares that his empirical research has shown a greater effect than expectancy violations theory, the domain of which does not extend to the spoken word. LET argues that typical language behaviors fall within a normative "bandwidth" of expectations determined by a source's perceived credibility, the individual listener's normative expectations and a group's normative social climate, and generally supports a gender-stereotypical reaction to the use of profanity, for example. Communication expectancies are said to derive from three factors: The communicator – individual features, such as ethos or source credibility, personality, appearance, social status and gender. The relationship between a receiver and a communicator, including factors such as attraction, similarity and status equality. Context; i.e., privacy and formality constraints on interaction. Violations Violating social norms can have a positive or negative effect on persuasion. Usually people use language to conform to social norms; but a person's intentional or accidental deviation from expected behavior can provoke either a positive or negative reaction. Language expectancy theory assumes that language is a rule-governed system and people develop expectations concerning the language or message strategies employed by others in persuasive attempts (Burgoon, 1995). Expectations are a function of cultural and sociological norms and preferences arising from cultural values and societal standards or ideals for competent communication. When observed behavior is preferred over what was expected, or when a listener's initial negative evaluation causes a speaker to conform more closely to the expected behavior, the deviation can be seen as positive; but when language choice or behavior is perceived as unacceptable or inappropriate, the violation is negatively received and can inhibit receptivity to a persuasive appeal. Positive violations occur when negatively evaluated sources conform more closely than expected to cultural values or situational norms.
This can result in an overly positive evaluation of the source and change promoted by the actor (Burgoon, 1995). Negative violations, resulting from language choices that lie outside socially acceptable behavior in a negative direction, produce no attitude or behavior change in receivers. Summary of propositions Language expectancy theory is based on 17 propositions. Those propositions can be summarized as listed below: 1, 2 and 3: People create expectations for language. Those expectations determine whether messages will be accepted or rejected by an individual. Breaking expectations positively results in a behavior change in favor of the persuasive message, while breaking expectations negatively results in no change or an opposite behavior change. 4, 5 and 6: Individuals with perceived credibility (those who hold power in a society) have the freedom in persuasion to select varied language strategies (wide bandwidth). Those with low credibility and those unsure of their perceived credibility are restricted to low aggression or compliance-gaining messages to be persuasive. 7, 8 and 9: Irrelevant fear and anxiety tactics are better received using low-intensity and verbally unaggressive compliance-gaining messages. Intense and aggressive language use results in lower levels of persuasion. 10, 11 and 12: For the persuader, an individual who is experiencing cognitive stress will use lower intensity messages. If a communicator violates his/her norms of communication, they will experience cognitive stress. 13 and 14: Pretreatments forewarn receivers of the persuasive attacks (supportive, refutational or a combination). When persuasive messages do not violate expectations created by the pretreatments, resistance to persuasion is conferred. When pretreatment expectations of persuasive messages are violated, receivers are less resistant to persuasion. 15, 16 and 17: Low intensity attack strategies are more effective than high intensity attack strategies when overcoming resistance to persuasion created in pretreatment. The first message in a string of arguments methodically affects the acceptance of the second message. When expectations are positively violated in the first message, the second will be persuasive. When expectations are negatively violated in the first message, the second will not be persuasive. The role of intensity These propositions give rise to the impact of language intensity—defined by John Waite Bowers as a quality of language that "indicates the degree to which the speaker's attitude toward a concept deviates from neutrality"—on persuasive messages. Theorists have concentrated on two key areas: (1) intensity of language when it comes to gender roles and (2) credibility. The perceived credibility of a source can greatly affect a message's persuasiveness. Researchers found that credible sources can enhance their appeal by using intense language; however, less credible speakers are more persuasive with low-intensity appeals. Similarly, females are less persuasive than males when they use intense language because it violates the expected behavior, but are more persuasive when they use low-intensity language. Males, however, are seen as weak when they argue in a less intense manner. Theorists argue further that females and speakers perceived as having low credibility have less freedom in selecting message strategies and that the use of aggressive language negatively violates expectations.
Example To better explain the theory, we look at the expectations and societal norms for a man and a woman on their first date. If the man pushed for further physical intimacy after dinner, the societal expectation of a first date would be violated. The example below with Margret and Steve depicts such a scene. Margret: "I had a really good time tonight, Steve. We should do it again." Steve: "Let's cut the crap. Do you want to have sex?" Margret: "Uhhh..." Margret's language expectations of a first date were violated. Steve chooses an aggressive linguistic strategy. If Margret views Steve as a credible and appealing source, she may receive the message positively and, thus, the message may be persuasive. If Margret perceives Steve as an ambiguous or low-credibility source, Steve will not be persuasive. In such a case, Steve should have used a low-aggression message in his attempt to win Margret to his idea of having sex. Criticism Determining whether a positive or negative violation has occurred can be difficult. When there is no attitude or behavior change, it may be concluded that a negative violation has occurred (possibly related to a boomerang effect). Conversely, when an attitude or behavior change does occur, it may be too easy to conclude a positive violation of expectations has occurred. The theory has also been critiqued for being too "grand" in its predictive and explanatory goals. Burgoon counters that practical applications of his research conclusions are compelling enough to negate this criticism. See also Physician–patient interaction Social influence Notes References Bowers, J.W. (1963). Language intensity, social introversion, and attitude change. Speech Monographs, 30, 345–352. Bowers, J.W. (1964). Some correlates of language intensity. Quarterly Journal of Speech, 50, 415–420. Burgoon, J.K. (1993). Interpersonal expectations, expectancy violations, and emotional communication. Journal of Language and Social Psychology, 12, 13–21. Burgoon, M. (1994). Advances in Research in Social Influence: Essays in Honor of Gerald R. Miller. Charles R. Berger and Michael Burgoon (Editors), East Lansing, MI: Michigan State University Press, 1993. Burgoon, M., Dillard, J.P., & Doran, N. (1984). Friendly or unfriendly persuasion: The effects of violations of expectations by males and females. Human Communication Research, 10, 283–294. Burgoon, M., Jones, S.B., & Stewart, D. (1975). Toward a message-centered theory of persuasion: Three empirical investigations of language intensity. Human Communication Research, 1, 240–256. Burgoon, M., & Miller, G.R. (1977). Predictors of resistance to persuasion: Propensity of persuasive attack, pretreatment language intensity, and expected delay of attack. The Journal of Psychology, 95, 105–110. Burgoon, M., & Miller, G.R. (1985). An expectancy interpretation of language and persuasion. In H. Giles & R. Clair (Eds.) The social and psychological contexts of language (pp. 199–229). London: Lawrence Erlbaum Associates. Burgoon, M., Hunsacker, F., & Dawson, E. (1994). Approaches to gaining compliance. Human Communication, (pp. 203–217). Thousand Oaks, CA: Sage. Dillard, J. P., & Pfau, M. W. (2002). The Persuasion Handbook: Developments in Theory and Practice (1st ed.). Thousand Oaks, CA: SAGE. Behavioral concepts Scientific theories
Language expectancy theory
[ "Biology" ]
2,116
[ "Behavior", "Behavioral concepts", "Behaviorism" ]
14,326,527
https://en.wikipedia.org/wiki/Flying%20probe
Flying probes are test probes used for testing both bare circuit boards and boards loaded with components. Flying probes were introduced in the late 1980s and can be found in many manufacturing and assembly operations, most often in the manufacturing of electronic printed circuit boards. A flying probe tester uses one or more test probes to make contact with the circuit board under test; the probes are moved from place to place on the circuit board to carry out tests of multiple conductors or components. Flying probe testers are a more flexible alternative to bed-of-nails testers, which use multiple contacts to simultaneously contact the board and which rely on electrical switching to carry out measurements. One limitation in flying probe test methods is the speed at which measurements can be taken; the probes must be moved to each new test site on the board, and then a measurement must be completed. Bed-of-nails testers touch each test point simultaneously, and electronic switching of instruments between test pins is more rapid than movement of probes. Manufacturing a bed-of-nails tester, however, is more costly. Bare board Loaded board in-circuit test In the testing of printed circuit boards, a flying probe test or fixtureless in-circuit test (FICT) system may be used for testing low to mid volume production, prototypes, and boards that present accessibility problems. A traditional "bed of nails" tester for testing a PCB requires a custom fixture to hold the PCBA and the Pogo pins which make contact with the PCBA. In contrast, FICT uses two or more flying probes, which may be moved based on software instruction. The flying probes are electro-mechanically controlled to access components on printed circuit assemblies (PCAs). The probes are moved around the board under test using an automatically operated two-axis system, and one or more test probes contact components of the board or test points on the printed circuit board. The main advantage of flying probe testing is that a bed-of-nails fixture, which can cost on the order of US$20,000, is not required. The flying probes also allow easy modification of the test fixture when the PCBA design changes. FICT may be used on both bare and assembled PCBs. However, since the tester makes measurements serially, instead of making many measurements at once, the test cycle may become much longer than for a bed-of-nails fixture. A test cycle that takes 30 seconds on such a system may take an hour with flying probes. Test coverage may not be as comprehensive as with a bed-of-nails tester (assuming similar net access for each), because fewer points are tested at one time. References electronic test equipment hardware testing nondestructive testing
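The serial-versus-parallel trade-off described above can be made concrete with a back-of-the-envelope timing model. The following Python sketch is illustrative only: the per-move, per-measurement and per-switch times are assumed figures for demonstration, not vendor specifications.

    # Illustrative comparison of total test time for a flying probe tester versus a
    # bed-of-nails fixture. All timing figures below are assumptions, not real specs.

    def flying_probe_time(num_measurements, move_time_s=0.5, measure_time_s=0.1):
        """Probes move serially to each test site, then take a measurement."""
        return num_measurements * (move_time_s + measure_time_s)

    def bed_of_nails_time(num_measurements, switch_time_s=0.001, measure_time_s=0.01):
        """All pins contact the board at once; instruments are switched electronically."""
        return num_measurements * (switch_time_s + measure_time_s)

    if __name__ == "__main__":
        n = 2000  # hypothetical number of nets/measurements on the board
        print(f"Flying probe : {flying_probe_time(n):8.1f} s")
        print(f"Bed of nails : {bed_of_nails_time(n):8.1f} s")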
Flying probe
[ "Materials_science", "Technology", "Engineering" ]
560
[ "Nondestructive testing", "Materials testing", "Electronic test equipment", "Measuring instruments" ]
14,326,547
https://en.wikipedia.org/wiki/Power-off%20testing
Power-off testing is often necessary to test the printed circuit assembly (PCA) board due to uncertainty as to the nature of the failure. When the PCA can be further damaged by applying power, it is necessary to use power-off test techniques to safely examine it. Power-off testing includes analog signature analysis, ohmmeter, LCR meter and optical inspection. This type of testing also lends itself well to troubleshooting circuit boards without the aid of supporting documentation such as schematics. Typical equipment Analog signature analysis Huntron Tracker Automated optical inspection LCR meter Machine vision Ohmmeter Printed circuit board manufacturing Nondestructive testing Hardware testing Electricity
Power-off testing
[ "Materials_science", "Engineering" ]
135
[ "Nondestructive testing", "Electronic engineering", "Materials testing", "Electrical engineering", "Printed circuit board manufacturing" ]
14,326,894
https://en.wikipedia.org/wiki/Iris%20Bay%20%28Dubai%29
The Iris Bay is a 32-floor commercial tower in the Business Bay in Dubai, United Arab Emirates, that is known for "its oval, crescent moon type shape." The tower has a total structural height of 170 m (558 ft). Construction of the Iris Bay was expected to be completed in 2008 but progress stopped in 2011. The building was completed in 2015. The tower is designed in the shape of an ovoid and comprises two identical double-curved pixelated shells which are rotated and cantilevered over the podium. The rear elevation is a continuous vertical curve punctuated by balconies while the front elevation is made up of seven zones of rotated glass. The podium comprises four stories with a double-height ground level and houses retail and commercial space totaling 36,000 m2. See also List of buildings in Dubai Notes External links Buildings and structures under construction in Dubai High-tech architecture Postmodern architecture Skyscraper office buildings in Dubai
Iris Bay (Dubai)
[ "Engineering" ]
189
[ "Postmodern architecture", "Architecture" ]
14,330,135
https://en.wikipedia.org/wiki/Symmetrical%20double-sided%20two-way%20ranging
In radio technology, symmetrical double-sided two-way ranging (SDS-TWR) is a ranging method that uses two delays that naturally occur in signal transmission to determine the range between two stations: Signal propagation delay between two wireless devices Processing delay of acknowledgements within a wireless device This method is called symmetrical double-sided two-way ranging because: It is symmetrical in that the measurements from station A to station B are a mirror-image of the measurements from station B to station A (ABA to BAB). It is double-sided in that only two stations are used for ranging measurement station A and station B. It is two-way in that a data packet (called a test packet) and an ACK packet is used. Signal propagation delay A special type of packet (test packets) is transmitted from station A (node A) to station B (node B). As time the packet travels through space per meter is known (from physical laws), the difference in time from when it was sent from the transmitter and received at the receiver can be calculated. This time delay is known as the signal propagation delay. Processing delay Station A now expects an acknowledgement from Station B. A station takes a known amount of time to process the incoming test packet, generate an acknowledgement (ack packet), and prepare it for transmission. The sum of time taken to process this acknowledgement is known as processing delay. Calculating the range The acknowledgement sent back to station A includes in its header those two delay values – the signal propagation delay and the processing delay. A further signal propagation delay can be calculated by Station A on the received acknowledgement, even as this delay was calculated on the test packet. These three values can then be used by an algorithm to calculate the range between these two stations. Verifying the range calculation To verify that the range calculation was accurate, the same procedure is repeated by station B sending a test packet to station A and station A sending an acknowledgement to station B. At the end of this procedure, two range values are determined and an average of the two can be used to achieve a fairly accurate distance measurement between these two stations. See also Multilateration Real-time locating system References Radio technology Wireless locating
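The range calculation described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: the variable names, the averaging of the two directions, and the example timestamps are hypothetical and do not follow any particular standard's message format.

    # Minimal sketch of a symmetrical double-sided two-way ranging (SDS-TWR) calculation.

    SPEED_OF_LIGHT = 299_792_458.0  # metres per second

    def time_of_flight(round_trip_s, processing_delay_s):
        """One two-way exchange: half of (round trip minus remote processing delay)."""
        return (round_trip_s - processing_delay_s) / 2.0

    def sds_twr_range(round_a_s, reply_b_s, round_b_s, reply_a_s):
        """Average the A->B->A and B->A->B measurements to reduce timing errors."""
        tof_ab = time_of_flight(round_a_s, reply_b_s)
        tof_ba = time_of_flight(round_b_s, reply_a_s)
        tof = (tof_ab + tof_ba) / 2.0
        return SPEED_OF_LIGHT * tof

    if __name__ == "__main__":
        # Hypothetical timestamps in seconds
        d = sds_twr_range(round_a_s=2.0e-6, reply_b_s=1.8e-6,
                          round_b_s=2.1e-6, reply_a_s=1.9e-6)
        print(f"Estimated range: {d:.2f} m")  # roughly 30 m for these made-up values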
Symmetrical double-sided two-way ranging
[ "Technology", "Engineering" ]
454
[ "Information and communications technology", "Telecommunications engineering", "Wireless locating", "Radio technology" ]
14,330,284
https://en.wikipedia.org/wiki/Weather%20god
A weather god or goddess, also frequently known as a storm god or goddess, is a deity in mythology associated with weather phenomena such as thunder, snow, lightning, rain, wind, storms, tornadoes, and hurricanes. Should they only be in charge of one feature of a storm, they will be named after that attribute, such as a rain god or a lightning/thunder god. This singular attribute might then be emphasized more than the generic, all-encompassing term "storm god", though with thunder/lightning gods, the two terms seem interchangeable. They feature commonly in polytheistic religions, especially in Proto-Indo-European ones. Storm gods are most often conceived of as wielding thunder and/or lightning (some lightning gods' names actually mean "thunder", but since one cannot have thunder without lightning, they presumably wielded both). The ancients did not seem to differentiate between the two, which is presumably why both the words "lightning bolt" and "thunderbolt" exist despite being synonyms. Of the examples currently listed, storm-themed deities are more frequently depicted as male, but both male and female storm, rain, wind, and other weather deities are described. Africa and the Middle East Sub-Sahara Africa Umvelinqangi, god of thunder in Zulu traditional religion Mbaba Mwana Waresa, goddess of rain in Zulu traditional religion Ọya, the orisha of winds, tempests, and cyclones in Yoruba religion Bunzi, goddess of rain, in Kongo religion. Tano (Ta Kora), a god of thunder and war in the Akan religion. Afroasiatic Middle East Canaanite Baal, Canaanite god of fertility, weather, and war. Hadad, the Canaanite and Carthaginian storm, fertility, & war god. Identified as Baʿal's true name at Ugarit. Yahwism, the faith of ancient Israel and Judah Egyptian Horus, the Egyptian god of rainstorms, the weather, the sky and war. Associated with the sun, kingship, and retribution. Personified in the pharaoh. Set, the Egyptian chaos, evil, and storm god, lord of the desert. Mesopotamian Enlil, god associated with wind, air, earth, and storms Adad, the Mesopotamian weather god Manzat, goddess of the rainbow Shala, wife of Adad and a rain goddess Wer, a weather god worshiped in northern Mesopotamia and in Syria Western Eurasia Albanian Dielli, the Sun: god of the sky and weather Zojz, Shurdh, i Verbti, Rmoria: sky and weather god Balto-Slavic Bangpūtys, Lithuanian god of storms and the sea Perkūnas, Baltic god of thunder, rain, mountains, and oak trees. Servant of the creator god Dievas. Perun, Slavic god of thunder and lightning and king of the gods Celtic Taranis, Celtic god of thunder, often depicted with a wheel as well as a thunderbolt Germanic Freyr, Norse god of agriculture, medicine, fertility, sunshine, summer, abundance, and rain Thor, Norse god of thunder/lightning, oak trees, protection, strength, and hallowing. Also Thunor and Donar, the Anglo-Saxon and Continental Germanic versions, respectively, of him. All descend from Common Germanic *Thunraz, the reflex of the PIE thunder god for this language branch of the Indo-Europeans. Greco-Roman Aeolus (son of Hippotes), keeper of the winds in the Odyssey Anemoi, collective name for the gods of the winds in Greek mythology; their number varies from four upwards Jupiter, the Roman weather and sky god and king of the gods Neptune, the Roman god of the seas, oceans, earthquakes and storms Poseidon, Greek god of the sea, king of the seas and oceans, and god of earthquakes and storms.
He is referred to as the Stormbringer. Tempestas, Roman goddess of storms or sudden weather. Commonly referred to in the plural, Tempestates. Tritopatores, wind gods Zeus, Greek weather and sky god and king of the gods Western Asia Anatolian-Caucasian Tamar (goddess), Georgian virgin goddess who controlled the weather. Tarḫunna, Hittite storm god; other Anatolian languages had similar names for their storm gods, such as Luwian below. Tarḫunz, Luwian storm god. Teshub, Hurrian storm god. Theispas or Teisheba, the Urartian storm and war god. Vayu, Hindu/Vedic wind god. Weather god of Nerik, Hittite god of the weather worshiped in the village of Nerik. Weather god of Zippalanda, Hittite god of the weather worshiped in the village of Zippalanda. Hindu-Vedic Indra, Hindu god of the weather, storms, sky, lightning, and thunder. Also known as the king of gods. Mariamman, Hindu rain goddess. Rudra, the god of wind, storms, and hunting; destructive aspect of Shiva Persian-Zoroastrian Vayu-Vata, Iranian duo of gods, the first of whom is the god of wind, much like the Hindu Vayu. Uralic Küdryrchö Jumo, the Mari storm god. Ukko, Finnish thunder and harvest god and king of the gods Asia-Pacific / Oceania Chinese Dian Mu, Leigong, and Wen Zhong, the thunder deities. Feng Bo, Feng Po Po, and Han Zixian, the deities of wind. Yunzhongzi, the master of clouds. Yu Shi, the god of rain. Sometimes the Dragon Kings were included instead of Yu Shi. Filipino Oden, the Bugkalot deity of the rain, worshiped for the deity's life-giving waters Apo Tudo, the Ilocano deity of the rain Anitun Tauo, the Sambal goddess of wind and rain who was reduced in rank by Malayari for her conceit Anitun Tabu, the Tagalog goddess of wind and rain and daughter of Idianale and Dumangan Bulan-hari, one of the Tagalog deities sent by Bathala to aid the people of Pinak; can command rain to fall; married to Bitu-in Santonilyo, a Bisaya deity who brings rain when its image is immersed at sea Diwata Kat Sidpan, a Tagbanwa deity who lives in the western region called Sidpan; controls the rains Diwata Kat Libatan, a Tagbanwa deity who lives in the eastern region called Babatan; controls the rain Diwata na Magbabaya, simply referred to as Magbabaya, the good Bukidnon supreme deity and supreme planner who looks like a man; created the earth and the first eight elements, namely bronze, gold, coins, rock, clouds, rain, iron, and water; using the elements, he also created the sea, sky, moon, and stars; also known as the pure god who wills all things; one of three deities living in the realm called Banting Anit: also called Anitan; the Manobo guardian of the thunderbolt Inaiyau: the Manobo god of storms Tagbanua: the Manobo god of rain Umouiri: the Manobo god of clouds Libtakan: the Manobo god of sunrise, sunset, and good weather Japanese Fūjin, Japanese wind god. Raijin, Japanese god of thunder, lightning, and storms Susanoo, tempestuous Japanese god of storms and the sea. Vietnamese Thần Gió, Vietnamese wind god. Oceania Baiame, sky god and creator deity of southeastern Australia. Julunggul, Arnhem Land rainbow serpent goddess who oversaw the initiation of boys into manhood. Tāwhirimātea, Maori storm god. Native Americas Central America, South America and the Caribbean Apocatequil, Pre-Incan god of lightning, the day and good. Regional variant of the god Illapa. Chaac, Maya rain god. Aztec equivalent is Tlaloc. Coatrisquie, Taíno rain goddess, servant of Guabancex, and sidekick of thunder god Guatauva. Cocijo, Zapotec god of lightning. Ehecatl, Aztec god of wind.
Guabancex, top Taíno storm goddess; the Lady of the Winds who also dishes out earthquakes and other natural disasters. Guatauva, Taíno god of thunder and lightning who is also responsible for rallying the other storm gods. Huari, Pre-Incan god of water, rain, lightning, agriculture and war. After a period of time, he was identified as a giant god of war, sun, water and agriculture. Huracán, K'iche Maya god of the weather, wind, storms, and fire. Illapa, Inca god of lightning, thunder, rain and war. He is considered one of the most important and powerful Inca gods. Juracán, Taíno zemi or deity of chaos and disorder believed to control the weather, particularly hurricanes. K'awiil, classic Maya god of lightning. Kon, Inca god of wind and rain. Kon is also a creator god. Pachakamaq, Inca god of earthquakes, fire, the clouds and sky. Commonly described as a reissue of Wiracocha. He was one of the most important Inca gods, as well as he is considered the creator god of the universe and controller of the balance of the world. Paryaqaqa, Pre-Incan god of water, torrential rains, storms and lightning. Regional variant of the god Illapa. Q'uq'umatz, K'iche Maya god of wind and rain, also known as Kukulkan, Aztec equivalent is Quetzalcoatl. Tezcatlipoca, Aztec god of hurricanes and night winds. Tlaloc, Aztec rain and earthquake god. Mayan equivalent is Chaac. Tohil, K'iche Maya god of rain, sun, and fire. Tupã, the Guaraní god of thunder and light. Creator of the universe. Wiracocha, the Inca and Pre-Incan god of everything. Absolute creator of the entire Cosmos, as well as everything in existence. Considered the father of all the Inca gods and supreme god of the Inca pantheon. Wiracocha was associated with the sun, lightning, and storms. Yana Raman, Pre-Incan god of lightning. Considered creator by the Yaros or Llacuaces ethnic group. Regional variant of the god Illapa. Yopaat, a Classic-period Maya storm god. See also Ekendriya Rain god Sea god, often responsible for weather at sea Sky god Thunder god Wind god References Further reading Holtom, D. C. "The Storm God Theme in Japanese Mythology." Sociologus, Neue Folge / New Series, 6, no. 1 (1956): 44-56. https://www.jstor.org/stable/43643852. Lists of deities
Weather god
[ "Physics" ]
2,295
[ "Weather", "Sky and weather deities", "Physical phenomena" ]
14,330,683
https://en.wikipedia.org/wiki/Ministry%20of%20Energy%20%28Norway%29
The Royal Norwegian Ministry of Energy is a Norwegian ministry responsible for energy, including petroleum and natural gas production in the North Sea. It has been led by Minister of Energy Terje Aasland of the Labour Party since 2022. The department must report to the legislature, the Storting. History The ministry was originally established in 1978, when petroleum and energy affairs were transferred from the Ministry of Industry. It was merged into the Ministry of Industry to become the Ministry of Industry and Energy in 1993. In 1997, petroleum and energy affairs were once again transferred to the current ministry. It was renamed the Ministry of Energy in 2024. Organisation Political staff As of June 2023, the political staff of the ministry is as follows: Minister Terje Aasland (Labour Party) State Secretary Andreas Bjelland Eriksen (Labour Party) State Secretary Astrid Bergmål (Labour Party) State Secretary Elisabeth Sæther (Labour Party) Political Advisor Jorid Juliussen Nordmelan (Labour Party) Departments The ministry is divided into four departments and a communication unit. Communication Unit Technology and Industry Department Energy and Water Resources Department Department of Trade and Industrial Economics Administration, Budgets and Accounting Department Subsidiaries Subordinate government agencies: Norwegian Petroleum Directorate Norwegian Water Resources and Energy Directorate Gassnova Statnett Wholly owned limited companies: Gassco Petoro Partially owned public limited companies: Equinor (62% ownership) References External links Official web site Petroleum and Energy Norway Ministry of Petroleum and Energy Petroleum politics 1978 establishments in Norway Norway, Petroleum and Energy
Ministry of Energy (Norway)
[ "Chemistry", "Engineering" ]
318
[ "Petroleum", "Energy organizations", "Petroleum politics", "Energy ministries" ]
14,330,991
https://en.wikipedia.org/wiki/Outline%20of%20nuclear%20technology
The following outline is provided as an overview of and topical guide to nuclear technology: Nuclear technology – involves the reactions of atomic nuclei. Among the notable nuclear technologies are nuclear power, nuclear medicine, and nuclear weapons. It has found applications from smoke detectors to nuclear reactors, and from gun sights to nuclear weapons. Essence of nuclear technology Atomic nucleus Branches of nuclear technology Nuclear engineering History of nuclear technology History of nuclear power History of nuclear weapons Nuclear material Nuclear fuel Fertile material Thorium Uranium Enriched uranium Depleted uranium Plutonium Deuterium Tritium Nuclear power Nuclear power – List of nuclear power stations Nuclear reactor technology Fusion power Inertial fusion power plant Reactor types List of nuclear reactors Advanced gas-cooled reactor Boiling water reactor Fast breeder reactor Fast neutron reactor Gas-cooled fast reactor Generation IV reactor Integral Fast Reactor Lead-cooled fast reactor Liquid-metal-cooled reactor Magnox reactor Molten-salt reactor Pebble-bed reactor Pressurized water reactor Sodium-cooled fast reactor Supercritical water reactor Very high temperature reactor Radioisotope thermoelectric generator Radioactive waste Future energy development Nuclear propulsion Nuclear thermal rocket Polywell Nuclear decommissioning Nuclear power phase-out Civilian nuclear accidents List of civilian nuclear accidents List of civilian radiation accidents Nuclear medicine Nuclear medicine – BNCT Brachytherapy Gamma (Anger) Camera PET Proton therapy Radiation therapy SPECT Tomotherapy Nuclear weapons Nuclear weapons – Nuclear explosion Effects of nuclear explosions Types of nuclear weapons Strategic nuclear weapon ICBM SLBM Tactical nuclear weapons List of nuclear weapons Nuclear weapons systems Nuclear weapons delivery (missiles, etc.) 
Nuclear weapon design Nuclear weapons proliferation Nuclear weapons testing List of states with nuclear weapons List of nuclear tests Nuclear strategy Assured destruction Counterforce, Countervalue Decapitation strike Deterrence Doctrine for Joint Nuclear Operations Fail-deadly Force de frappe First strike, Second strike Game theory & wargaming Massive retaliation Minimal deterrence Mutual assured destruction (MAD) No first use National Security Strategy of the United States Nuclear attribution Nuclear blackmail Nuclear proliferation Nuclear utilization target selection (NUTS) Single Integrated Operational Plan (SIOP) Strategic bombing Nuclear weapons incidents List of sunken nuclear submarines United States military nuclear incident terminology 1950 British Columbia B-36 crash 1950 Rivière-du-Loup B-50 nuclear weapon loss incident 1958 Mars Bluff B-47 nuclear weapon loss incident 1961 Goldsboro B-52 crash 1961 Yuba City B-52 crash 1964 Savage Mountain B-52 crash 1965 Philippine Sea A-4 incident 1966 Palomares B-52 crash 1968 Thule Air Base B-52 crash 2007 United States Air Force nuclear weapons incident Nuclear technology scholars Henri Becquerel Niels Bohr James Chadwick John Cockcroft Pierre Curie Marie Curie Albert Einstein Michael Faraday Enrico Fermi Otto Hahn Lise Meitner Robert Oppenheimer Wolfgang Pauli Franco Rasetti Ernest Rutherford Ernest Walton See also Outline of energy Outline of nuclear power List of civilian nuclear ships List of military nuclear accidents List of nuclear medicine radiopharmaceuticals List of nuclear waste treatment technologies List of particles Anti-nuclear movement External links Nuclear Energy Institute – Beneficial Uses of Radiation Nuclear Technology Nuclear technology Nuclear technology outline Outline of nuclear technology
Outline of nuclear technology
[ "Physics" ]
635
[ "Nuclear technology", "Nuclear physics" ]
14,331,278
https://en.wikipedia.org/wiki/Hildebrand%20solubility%20parameter
The Hildebrand solubility parameter (δ) provides a numerical estimate of the degree of interaction between materials and can be a good indication of solubility, particularly for nonpolar materials such as many polymers. Materials with similar values of δ are likely to be miscible. Definition The Hildebrand solubility parameter is the square root of the cohesive energy density: δ = ((ΔHv − RT)/Vm)1/2, where ΔHv is the heat of vaporization, R the gas constant, T the absolute temperature and Vm the molar volume of the condensed phase. The cohesive energy density is the amount of energy needed to completely remove a unit volume of molecules from their neighbours to infinite separation (an ideal gas). This is approximately equal to the heat of vaporization of the compound divided by its molar volume in the condensed phase. In order for a material to dissolve, these same interactions need to be overcome, as the molecules are separated from each other and surrounded by the solvent. In 1936 Joel Henry Hildebrand suggested the square root of the cohesive energy density as a numerical value indicating solvency behavior. This later became known as the "Hildebrand solubility parameter". Materials with similar solubility parameters will be able to interact with each other, resulting in solvation, miscibility or swelling. Uses and limitations Its principal utility is that it provides simple predictions of phase equilibrium based on a single parameter that is readily obtained for most materials. These predictions are often useful for nonpolar and slightly polar (dipole moment < 2 debyes) systems without hydrogen bonding. It has found particular use in predicting solubility and swelling of polymers by solvents. More complicated three-dimensional solubility parameters, such as Hansen solubility parameters, have been proposed for polar molecules. The principal limitation of the solubility parameter approach is that it applies only to associated solutions ("like dissolves like" or, technically speaking, positive deviations from Raoult's law); it cannot account for negative deviations from Raoult's law that result from effects such as solvation or the formation of electron donor–acceptor complexes. Like any simple predictive theory, it can inspire overconfidence; it is best used for screening with data used to verify the predictions. Units The conventional units for the solubility parameter are (calories per cm3)1/2, or cal1/2 cm−3/2. The SI units are J1/2 m−3/2, equivalent to the pascal1/2. 1 calorie is equal to 4.184 J. 1 cal1/2 cm−3/2 = (523/125 J)1/2 (10−2 m)−3/2 = (4.184 J)1/2 (0.01 m)−3/2 = 2.045483 × 103 J1/2 m−3/2 = 2.045483 (106 J/m3)1/2 = 2.045483 MPa1/2. Given the non-exact nature of the use of δ, it is often sufficient to say that the number in MPa1/2 is about twice the number in cal1/2 cm−3/2. Where the units are not given, for example, in older books, it is usually safe to assume the non-SI unit. Examples From the table, poly(ethylene) has a solubility parameter of 7.9 cal1/2 cm−3/2. Good solvents are likely to be diethyl ether and hexane. (However, PE only dissolves at temperatures well above 100 °C.) Poly(styrene) has a solubility parameter of 9.1 cal1/2 cm−3/2, and thus ethyl acetate is likely to be a good solvent. Nylon 6,6 has a solubility parameter of 13.7 cal1/2 cm−3/2, and ethanol is likely to be the best solvent of those tabulated. However, the latter is polar, and thus we should be very cautious about using just the Hildebrand solubility parameter to make predictions. See also Solvent Hansen solubility parameters References Notes Bibliography External links Abboud J.-L. M., Notario R. (1999) Critical compilation of scales of solvent parameters. part I.
pure, non-hydrogen bond donor solvents – technical report. Pure Appl. Chem. 71(4), 645–718 (IUPAC document with large table (1b) of Hildebrand solubility parameter (δH)) Polymer chemistry 1936 introductions
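The unit conversion and the "like dissolves like" screening described above can be illustrated with a short Python sketch. The polystyrene value is the one quoted in the examples; the solvent values are approximate literature figures included only for illustration and should be checked against a proper table before use.

    # Rough screening with Hildebrand solubility parameters: convert the conventional
    # units to MPa**0.5 and rank solvents by |delta_solvent - delta_polymer|.
    # Solvent values below are approximate and for illustration only.

    CAL_TO_MPA = 2.045483  # 1 cal**0.5 cm**-1.5 = 2.045483 MPa**0.5

    def to_mpa_sqrt(delta_cal):
        return delta_cal * CAL_TO_MPA

    def rank_solvents(delta_polymer_cal, solvents_cal):
        """Smaller |difference| suggests better mutual compatibility."""
        return sorted(solvents_cal.items(),
                      key=lambda kv: abs(kv[1] - delta_polymer_cal))

    if __name__ == "__main__":
        solvents = {"hexane": 7.3, "diethyl ether": 7.4,
                    "ethyl acetate": 9.1, "ethanol": 12.7}  # cal**0.5 cm**-1.5, approx.
        polystyrene = 9.1
        print(f"Polystyrene delta = {to_mpa_sqrt(polystyrene):.1f} MPa**0.5")
        for name, delta in rank_solvents(polystyrene, solvents):
            print(f"{name:15s} |difference| = {abs(delta - polystyrene):.1f}")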
Hildebrand solubility parameter
[ "Chemistry", "Materials_science", "Engineering" ]
933
[ "Materials science", "Polymer chemistry" ]
14,331,485
https://en.wikipedia.org/wiki/Over-the-counter%20counseling
Over-the-counter counseling (or OTC counseling) refers to the counseling that a pharmacist may provide on the subject of initiating, modifying, or stopping an over-the-counter (OTC) drug product. OTC counseling requires an assessment of the patient's self-care concerns and drug-related needs. The types of drugs that are involved in OTC counseling are, for example, used to treat self-diagnosable conditions like heartburn, cough, and rashes, though prescription drugs and professional diagnoses are also relevant to the recommendation process. Purpose The aim of OTC counseling is to empower patients to take control of their healthcare-related needs for conditions that do not require an appointment with a medical doctor. This benefits the healthcare system by reducing unnecessary physician visits. The pharmacist can also use OTC counseling to ensure the highest likelihood of success for the patient's self-care attempt and minimize the risk of any drug-related problems. Although OTC drugs are generally regarded as safe for use without a prescription (by definition), medication errors still occur. For example, patients sometimes misuse OTC products by taking larger than recommended doses, in order to bring about symptomatic relief more quickly, or even intentionally abuse them for unlabeled indications. Even when a patient is instructed not to use OTC products without speaking with their primary care physician, patients can still fail to identify products as OTC medications worth avoiding. Technique A pharmacist can use both open-ended questions (which start with the word who, what, how, why or where) and close-ended questions (which start with the word will, can, do or did), the latter to be used only if the former do not elicit the appropriate response, in order to obtain relevant information about a patient's potential needs for treatment or potential drug-therapy problems. Pharmacists ask patients about comorbidities to avoid any drug-disease state contraindications. Formal frameworks Although OTC counseling does not necessarily involve the use of a formal framework, various frameworks have been proposed: QuEST The QuEST approach has been described as both "short" and "systematic." It takes the form of the following: Qu : Quickly and accurately assess the patient (via SCHOLAR) E : Establish appropriateness for self-care S : Suggest appropriate self-care strategies T : Talk with the patient SCHOLAR S : Symptoms C : Characteristics H : History O : Onset L : Location A : Aggravating factors R : Remitting factors SCHOLAR-MAC As above, with the following addition: M : Medications A : Allergies C : Conditions WWHAM The WWHAM method is not strict; there is no requirement that the OTC counseling follow the exact order of the mnemonic. It takes the form of the following: W : Who is the patient W : What are the symptoms H : How long have the symptoms been present A : Action taken M : Medication being taken ASMETHOD The ASMETHOD has been attributed to the London pharmacist Derek Balon. It takes the form of the following: A : Age/appearance S : Self or someone else M : Medication E : Extra medicines T : Time persisting H : History O : Other symptoms D : Danger symptoms ENCORE The ENCORE method helps pharmacists focus intently on the patient's presenting symptoms while considering the appropriate OTC recommendation.
It takes the form of the following: E : Explore N : Nature of the symptoms O : Obtain the identity of the patient C : Concurrent medications E : Exclude the possibility of a serious disease O : Other associated symptoms N : No medication; consider a non-pharmacological approach as appropriate C : Care G : Geriatric patient P : Pediatric patient P : Pregnant women L : Lactating mothers O : Observe O : Other tell-tale signs of the condition D : Demeanor of the patient D : Dramatization by the patient R : Refer P : Potentially serious case of the disease P : Persistent symptoms (or failure of previous therapy) P : Patients at increased risk (e.g. diabetic patients with a wound on the underside of the foot) E : Explain your recommendation SIT DOWN SIR S : Site or location of a sign/symptom I : Intensity or severity T : Type or nature D : Duration O : Onset W : With (other symptoms) N : Annoyed or aggravated by S : Spread or radiation I : Incidence or frequency R : Relieved by Subject areas Proton-pump inhibitors For the selection of OTC proton-pump inhibitors (PPIs), pharmacists must first determine whether or not a patient is likely to benefit from self-care for the treatment of their acid reflux symptoms. Examples of exclusions to self-care treatment of acid-reflux symptoms include a positive family history of gastrointestinal cancers, since their symptoms may reflect a more serious, underlying condition, and patients who present with so-called "alarm symptoms," which require a prompt evaluation by a diagnostician. The available PPIs labeled for OTC use vary by country. As of October 2015, in the United States, available OTC proton-pump inhibitors include omeprazole, lansoprazole, and esomeprazole, whereas the UK approves the OTC use of omeprazole, esomeprazole, pantoprazole, and rabeprazole. Dietary supplements Whether or not pharmacists should be involved with selling dietary supplements, which are not approved for the treatment or prevention of any disease or disorder, is the subject of much ethical debate. However, a 2009 review of the literature found that the common perception was that pharmacists should be involved in the OTC counseling process for dietary supplements where dietary supplements are sold. As experts in drug therapies that cause vitamin depletion, pharmacists commonly make several recommendations. For example, pharmacists sometimes advise patients on long-term metformin therapy to supplement with vitamin B12 to treat or prevent diabetic peripheral neuropathy. Cancer While there are currently no OTC medications available for the treatment of cancer in the United States, there are specific OTC recommendations that apply to cancer patients that do not apply to the general population. Even a common OTC medication like acetaminophen may pose a risk to cancer patients by masking the presence of fever, which is an important sign of febrile neutropenia, a serious side effect of some chemotherapy regimens. Upper respiratory tract infections During OTC counseling, pharmacists differentiate between self-care-appropriate upper respiratory tract infections, like the common cold, and potentially devastating infections like the flu. Urinary incontinence Pharmacists can offer non-pharmacological, behavioral counseling for patients with urinary incontinence. This includes teaching patients about the important behavioral interventions that can reduce their symptoms and improve quality of life.
This can include recommending daily Kegel exercises, and instructing patients on the proper technique. In addition, pharmacists can provide resources for patients to learn more about how to control their symptoms. In terms of medications, pharmacists can help patients identify medications that may be worsening or causing their urinary incontinence, or offer recommendations for prescription medications for patients to take to their physicians. Comparison to prescription drug counseling OTC counseling patients about self-care and non-prescription drugs does not follow the same format as counseling for prescription drugs. A pharmacist who counsels for a prescription drug can view a patient's profile, which includes their current list of concurrent medications and allergies to medications. However, an OTC counseling session may occur in the aisle of the store, forcing pharmacists to elicit the necessary information from patients directly. References Pharmacy
Over-the-counter counseling
[ "Chemistry" ]
1,626
[ "Pharmacology", "Pharmacy" ]
14,331,659
https://en.wikipedia.org/wiki/Pulsed%20radiofrequency
Pulsed radiofrequency is the technique whereby radio frequency (RF) oscillations are gated at a rate of pulses (cycles) per second (one cycle per second is known as a hertz (Hz)). Radio frequency energies occupy the portion of the electromagnetic spectrum below infrared frequencies. Radio frequency electromagnetic energy is routinely produced by RF electrical circuits connected to a transducer, usually an antenna. Pulsed radio frequency waveforms Consider an example of a generalized pulsed radio frequency waveform as it would be seen with an oscilloscope fitted with an antenna probe. In this example there are 1000 pulses per second (one kilohertz pulse rate) with a gated pulse width of 42 μs. The pulse packet frequency in this example is 27.125 MHz of RF energy. The duty cycle for a pulsed radio frequency is the percent time the RF packet is on, 4.2% for this example ([0.042 ms × 1000 pulses divided by 1000 ms/s] × 100). The pulse packet form can be a square, triangle, sawtooth or sine wave. In several applications of pulsed radio frequency, such as radar, times between pulses can be modulated. Use in radar The best understood and most widely applied use of pulsed radio frequency electromagnetic energy is radar. The uses of radar are diverse and include military, civilian and space exploration applications. Radar is based on the reflection or scattering of pulsed radiofrequency waves emitted from a transmitter; the returning waves are detected by an antenna, from which the range, speed, and direction of objects are determined. In most uses the transmitter and detector are located at the same location. Radio frequencies used with radar are from 3 MHz to 300 GHz depending on the type and application. Therapeutic uses Pulsed radiofrequency fields are an emerging technology used in the medical field for the treatment of tumors, cardiac arrhythmias, chronic and post-operative pain, bone fracture, and soft tissue wounds. There are two general categories of pulsed radiofrequency field therapies based on their mechanism of action: thermal and non-thermal (athermal). While thermal radiofrequency ablation for tumors and cardiac arrhythmia has been used for over 25 years, non-thermal pulsed radio frequency is currently being developed for the ablation of cardiac arrhythmias and tumors. The technique uses pulsed radio frequency energy delivered via catheter at frequencies of 300–750 kHz for 30 to 60 seconds. Thermal pulsed radio frequency takes advantage of high current delivered focally by an electrode to ablate the tissue of interest. Generally the tissue/electrode temperature reached is 60–75 °C, resulting in focal tissue destruction. Thermal pulsed radio frequency ablation has also been used for lesioning of peripheral nerves to reduce chronic pain. Non-thermal therapeutic uses of pulsed radio frequency are currently being used to treat pain and edema, chronic wounds, and bone repair. Pulsed radiofrequency therapy technologies are described by the acronyms EMF (electromagnetic field), PEMF (pulsed electromagnetic fields), PRF (pulsed radiofrequency fields), and PRFE (pulsed radiofrequency energy). These technologies vary in terms of their electric and magnetic field energies as well as in the pulse length, duty cycle, treatment time and mode of delivery. Although pulsed radiofrequency has been used for medical treatment purposes for decades, peer-reviewed publications assessing the efficacy and physiological mechanism(s) of this technology are now starting to appear.
Potential effects of non-thermal PEMFs are seen in some human cell types with different sensitivities, and the evidence suggests that frequencies higher than 100 Hz, magnetic flux densities between 1 and 10 mT, and chronic exposure of more than 10 days would be more effective in establishing a cellular response. Natural sources Naturally occurring sources of pulsed radiofrequency exist in the form of stars called pulsars. Pulsars were discovered in 1967 using a radio telescope. These stars are thought to be rapidly spinning neutron stars. These stars have powerful magnetic fields, which cause them to emit strong radio-frequency radiation. Different sizes of pulsars pulse at different rates. References Radio spectrum
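The duty-cycle arithmetic in the waveform example above can be reproduced with a one-line calculation; the following Python sketch simply restates that worked example.

    # Duty-cycle arithmetic for the gated waveform example in the text
    # (42 microsecond pulses at a 1 kHz pulse rate).

    def duty_cycle_percent(pulse_width_s, pulse_rate_hz):
        """Fraction of each second during which the RF packet is on, as a percentage."""
        on_time_per_second = pulse_width_s * pulse_rate_hz
        return on_time_per_second * 100.0

    if __name__ == "__main__":
        print(duty_cycle_percent(pulse_width_s=42e-6, pulse_rate_hz=1000))  # prints ~4.2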
Pulsed radiofrequency
[ "Physics" ]
861
[ "Radio spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
14,331,851
https://en.wikipedia.org/wiki/Boundedly%20generated%20group
In mathematics, a group is called boundedly generated if it can be expressed as a finite product of cyclic subgroups. The property of bounded generation is also closely related to the congruence subgroup problem. Definitions A group G is called boundedly generated if there exists a finite subset S of G and a positive integer m such that every element g of G can be represented as a product of at most m powers of the elements of S: g = s1k1 s2k2 ··· smkm, where the si are elements of S and the ki are integers. The finite set S generates G, so a boundedly generated group is finitely generated. An equivalent definition can be given in terms of cyclic subgroups. A group G is called boundedly generated if there is a finite family C1, …, CM of not necessarily distinct cyclic subgroups such that G = C1…CM as a set. Properties Bounded generation is unaffected by passing to a subgroup of finite index: if H is a finite index subgroup of G then G is boundedly generated if and only if H is boundedly generated. Bounded generation is preserved under extensions: if a group G has a normal subgroup N such that both N and G/N are boundedly generated, then so is G itself. Any quotient group of a boundedly generated group is also boundedly generated. A finitely generated torsion group must be finite if it is boundedly generated; equivalently, an infinite finitely generated torsion group is not boundedly generated. A pseudocharacter on a discrete group G is defined to be a real-valued function f on G such that f(gh) − f(g) − f(h) is uniformly bounded and f(gn) = n·f(g). The vector space of pseudocharacters of a boundedly generated group G is finite-dimensional. Examples If n ≥ 3, the group SLn(Z) is boundedly generated by its elementary subgroups, formed by matrices differing from the identity matrix only in one off-diagonal entry. In 1984, Carter and Keller gave an elementary proof of this result, motivated by a question in algebraic K-theory. A free group on at least two generators is not boundedly generated (see below). The group SL2(Z) is not boundedly generated, since it contains a free subgroup with two generators of index 12. A Gromov-hyperbolic group is boundedly generated if and only if it is virtually cyclic (or elementary), i.e. contains a cyclic subgroup of finite index. Free groups are not boundedly generated Several authors have stated in the mathematical literature that it is obvious that finitely generated free groups are not boundedly generated. This section contains various obvious and less obvious ways of proving this. Some of the methods, which touch on bounded cohomology, are important because they are geometric rather than algebraic, so can be applied to a wider class of groups, for example Gromov-hyperbolic groups. Since for any n ≥ 2, the free group on 2 generators F2 contains the free group on n generators Fn as a subgroup of finite index (in fact n − 1), once one non-cyclic free group on finitely many generators is known to be not boundedly generated, this will be true for all of them. Similarly, since SL2(Z) contains F2 as a subgroup of index 12, it is enough to consider SL2(Z). In other words, to show that no Fn with n ≥ 2 has bounded generation, it is sufficient to prove this for one of them or even just for SL2(Z). Burnside counterexamples Since bounded generation is preserved under taking homomorphic images, if a single finitely generated group with at least two generators is known to be not boundedly generated, this will be true for the free group on the same number of generators, and hence for all free groups.
To show that no (non-cyclic) free group has bounded generation, it is therefore enough to produce one example of a finitely generated group which is not boundedly generated, and any finitely generated infinite torsion group will work. The existence of such groups constitutes Golod and Shafarevich's negative solution of the generalized Burnside problem in 1964; later, other explicit examples of infinite finitely generated torsion groups were constructed by Aleshin, Olshanskii, and Grigorchuk, using automata. Consequently, free groups of rank at least two are not boundedly generated. Symmetric groups The symmetric group Sn can be generated by two elements, a 2-cycle and an n-cycle, so that it is a quotient group of F2. On the other hand, it is easy to show that the maximal order M(n) of an element in Sn satisfies log M(n) ≤ n/e, where e is Euler's number (Edmund Landau proved the more precise asymptotic estimate log M(n) ~ (n log n)1/2). In fact, if the cycles in a cycle decomposition of a permutation have length N1, ..., Nk with N1 + ··· + Nk = n, then the order of the permutation divides the product N1 ··· Nk, which in turn is bounded by (n/k)k, using the inequality of arithmetic and geometric means. On the other hand, (n/x)x is maximized when x = n/e. If F2 could be written as a product of m cyclic subgroups, then necessarily n! would have to be less than or equal to M(n)m for all n, contradicting Stirling's asymptotic formula. Hyperbolic geometry There is also a simple geometric proof that G = SL2(Z) is not boundedly generated. It acts by Möbius transformations on the upper half-plane H, with the Poincaré metric. Any compactly supported 1-form α on a fundamental domain of G extends uniquely to a G-invariant 1-form on H. If z is in H and γ is the geodesic from z to g(z), the function defined by satisfies the first condition for a pseudocharacter since by the Stokes theorem where Δ is the geodesic triangle with vertices z, g(z) and h−1(z), and geodesic triangles have area bounded by π. The homogenized function defines a pseudocharacter, depending only on α. As is well known from the theory of dynamical systems, any orbit (gk(z)) of a hyperbolic element g has a limit set consisting of two fixed points on the extended real axis; it follows that the geodesic segment from z to g(z) cuts through only finitely many translates of the fundamental domain. It is therefore easy to choose α so that fα equals one on a given hyperbolic element and vanishes on a finite set of other hyperbolic elements with distinct fixed points. Since G therefore has an infinite-dimensional space of pseudocharacters, it cannot be boundedly generated. Dynamical properties of hyperbolic elements can similarly be used to prove that any non-elementary Gromov-hyperbolic group is not boundedly generated. Brooks pseudocharacters Robert Brooks gave a combinatorial scheme to produce pseudocharacters of any free group Fn; this scheme was later shown to yield an infinite-dimensional family of pseudocharacters. Epstein and Fujiwara later extended these results to all non-elementary Gromov-hyperbolic groups. Gromov boundary This simple folklore proof uses dynamical properties of the action of hyperbolic elements on the Gromov boundary of a Gromov-hyperbolic group. For the special case of the free group Fn, the boundary (or space of ends) can be identified with the space X of semi-infinite reduced words g1 g2 ··· in the generators and their inverses.
It gives a natural compactification of the tree, given by the Cayley graph with respect to the generators. A sequence of semi-infinite words converges to another such word provided that, for each fixed length, the initial segments of that length eventually agree, so that X is compact (and metrizable). The free group acts by left multiplication on the semi-infinite words. Moreover, any element g in Fn has exactly two fixed points g^(±∞), namely the reduced infinite words given by the limits of g^n as n tends to ±∞. Furthermore, g^n·w tends to g^(+∞) as n tends to +∞ for any semi-infinite word w ≠ g^(−∞) (and to g^(−∞) as n tends to −∞ if w ≠ g^(+∞)); and more generally, if wn tends to w ≠ g^(−∞), then g^n·wn tends to g^(+∞) as n tends to ∞. If Fn were boundedly generated, it could be written as a product of cyclic groups Ci generated by elements hi. Let X0 be the countable subset given by the finitely many Fn-orbits of the fixed points hi^(±∞), i.e. the fixed points of the hi and of all their conjugates. Since X is uncountable, there is an element g with fixed points outside X0 and a point w outside X0 different from these fixed points. Then for some subsequence (g^m) of (g^n), g^m = h1^(n(m,1)) ··· hk^(n(m,k)), with each n(m,i) constant or strictly monotone. On the one hand, by successive use of the rules for computing limits of the form h^n·wn, the limit of the right hand side applied to w is necessarily a fixed point of one of the conjugates of the hi's. On the other hand, this limit also must be g^(+∞), which is not one of these points, a contradiction. References (see pages 222-229, also available on the Cornell archive). Group theory Geometric group theory
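The counting estimate in the Symmetric groups section above can be collected into a single chain of inequalities. The display below is a sketch assembled only from the bounds quoted in that section; it is not taken from the cited references.

```latex
% Sketch: if F_2 = C_1 \cdots C_m were a product of m cyclic subgroups, then so is
% its quotient S_n, and every element of S_n has order at most M(n).
\[
  n! \;\le\; M(n)^{m} \;\le\; \bigl(e^{\,n/e}\bigr)^{m} \;=\; e^{\,mn/e},
  \qquad\text{hence}\qquad
  \log n! \;\le\; \frac{mn}{e}.
\]
% Stirling's formula gives \(\log n! \sim n\log n\), which eventually exceeds the
% linear bound mn/e for any fixed m, a contradiction.
```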
Boundedly generated group
[ "Physics", "Mathematics" ]
2,076
[ "Geometric group theory", "Group actions", "Group theory", "Fields of abstract algebra", "Symmetry" ]
14,331,917
https://en.wikipedia.org/wiki/Color%20reaction
In chemistry, a color reaction or colour reaction is a chemical reaction that is used to transform colorless chemical compounds into colored derivatives which can be detected visually or with the aid of a colorimeter. The concentration of a colorless solution cannot normally be determined with a colorimeter. The addition of a color reagent leads to a color reaction and the absorbance of the colored product can then be measured with a colorimeter. A change in absorbance in the ultraviolet range cannot be detected by eye but can be measured by a suitably equipped colorimeter. A special colorimeter is required because standard colorimeters cannot operate below a wavelength of 400 nanometers. It is also necessary to use fused quartz cuvettes because glass is opaque to ultraviolet. Color reagents Many different color reagents have been developed for determining the concentrations of different substances. For example, Nessler's reagent can be used to determine the concentration of a solution of ammonia. Thin layer chromatography In thin layer chromatography (TLC) color reactions are frequently used to detect compound spots by dipping the plate into the reagent or by spraying the reagent onto the plates. See also Blood sugar Colorimeter Derivatization MBAS assay References Chemical reactions
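As an illustration of how a colour reaction is used quantitatively, the sketch below fits a straight-line calibration curve to the absorbances of standards treated with a colour reagent and then estimates an unknown concentration from its measured absorbance. The assumption of a linear absorbance-concentration relationship over the working range, the example numbers and the function names are ours; none of them come from this article.

```python
# Hypothetical calibration of a colorimeter after a colour reaction.
# Standards: known concentrations (mg/L) and measured absorbances (dimensionless).
standards = [(0.0, 0.002), (2.0, 0.101), (4.0, 0.198), (6.0, 0.304), (8.0, 0.397)]

def fit_line(points):
    """Ordinary least-squares fit of absorbance = slope*concentration + intercept."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def concentration_from_absorbance(absorbance, slope, intercept):
    """Invert the calibration line to estimate the concentration of an unknown."""
    return (absorbance - intercept) / slope

slope, intercept = fit_line(standards)
unknown_absorbance = 0.250                      # reading for the unknown sample
print(concentration_from_absorbance(unknown_absorbance, slope, intercept))
```

In practice the calibration would also be blank-corrected and checked for linearity over the working range; the code only shows the arithmetic of converting an absorbance reading back into a concentration.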
Color reaction
[ "Chemistry" ]
259
[ "Physical chemistry stubs", "nan" ]
14,332,621
https://en.wikipedia.org/wiki/Basis%20of%20accounting
In accounting, a basis of accounting is a method used to define, recognise, and report financial transactions. The two primary bases of accounting are the cash basis of accounting (cash accounting) and the accrual accounting method. A third method, the modified cash basis, combines elements of both accrual and cash accounting. The cash basis method records income and expenses when cash is actually paid to or by a party. The accrual method records income items when they are earned and records deductions when expenses are incurred. The modified cash basis records income when it is earned but deductions when expenses are paid out. Both methods have advantages and disadvantages, and can be used in a wide range of situations. In many cases, regulatory bodies require individuals, businesses or corporations to use one method or the other. Comparison Accrual basis The accrual method records income items when they are earned and records deductions when expenses are incurred. For a business invoicing for an item sold or work done, the corresponding amount will appear in the books even though no payment has yet been received. Similarly, debts owed by the business are recorded as they are incurred, even if they are paid later. The accrual basis is a common method of accounting used globally for both financial reporting and taxation. Under accrual accounting, revenue is recognized when it is earned, and expenses are recognized when they are incurred, regardless of when cash is exchanged. In some jurisdictions, such as the United States, the accrual basis has been an option for tax purposes since 1916. An "accrual basis taxpayer" determines when income is earned based on specific tests, such as the "all-events test" and the "earlier-of test". However, the details of these tests and the timing of income recognition may vary depending on local tax laws and regulations. For financial accounting purposes, accrual accounting generally follows the principle that revenue cannot be recognized until it is earned, even if payment has been received in advance. The specifics of accrual accounting can vary across jurisdictions, though the overarching principle of recognizing revenue and expenses when they are earned and incurred remains consistent. Modified cash basis The modified cash basis of accounting combines elements of both accrual and cash basis accounting. Some forms of the modified cash basis record income when it is earned but deductions when expenses are paid out. In other words, the recording of income is on an accrual basis, while the recording of expenses is on the cash basis. The modified method does not conform to the GAAP. See also Accrual Accrual accounting in the public sector Adjusting entries Claim of right doctrine Deferral Matching principle Revenue recognition Tax accounting References Personal taxes Corporate taxation in the United States Corporate taxation in Canada Accounting systems Economics comparisons Accounting terminology
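To make the comparison concrete, the sketch below records the same two events, an invoice issued in December but paid in January and an expense incurred in December but paid in January, under the three bases described above. The amounts, dates and function names are invented for illustration and do not come from any accounting standard.

```python
from datetime import date

# One sale invoiced in December but paid in January, and one expense
# incurred in December but paid in January (all amounts hypothetical).
events = [
    {"kind": "revenue", "amount": 1000, "earned": date(2023, 12, 15), "cash": date(2024, 1, 10)},
    {"kind": "expense", "amount": 400,  "incurred": date(2023, 12, 20), "cash": date(2024, 1, 5)},
]

def recognition_date(event, basis):
    """Return the date on which the event is recognised under the given basis."""
    if basis == "cash":
        return event["cash"]
    if basis == "accrual":
        return event.get("earned") or event.get("incurred")
    if basis == "modified_cash":
        # One common form: income on the accrual date, expenses when paid.
        return event.get("earned") if event["kind"] == "revenue" else event["cash"]
    raise ValueError(f"unknown basis: {basis}")

def december_2023_profit(basis):
    """Net income recognised in December 2023 under the chosen basis."""
    total = 0
    for e in events:
        d = recognition_date(e, basis)
        if (d.year, d.month) == (2023, 12):
            total += e["amount"] if e["kind"] == "revenue" else -e["amount"]
    return total

for basis in ("cash", "accrual", "modified_cash"):
    print(basis, december_2023_profit(basis))
# cash: 0 (nothing settled in December); accrual: 600; modified cash: 1000.
```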
Basis of accounting
[ "Technology" ]
577
[ "Information systems", "Accounting systems" ]
14,332,843
https://en.wikipedia.org/wiki/International%20Wine%20and%20Spirit%20Competition
The International Wine & Spirit Competition is an annual wine and spirit competition founded in 1969 by the German/British oenologist Anton Massel. Each year the competition receives entries from over 90 countries worldwide. The awards given by the competition are considered as high honours in the industry. The judging occurs annually, in London. Only brands that pay the entry fee are judged, and two or four bottles of each entry must be supplied, depending on the category entered. Depending on the points out of 100 awarded, submitted drinks can receive gold outstanding (for spirits only), gold, silver, or bronze awards, and there are no limitations on how many of each which can be awarded. There is also an extensive range of trophies each year. Judging The judging process consists of blind tasting and panel discussion. Entries are judged by panels drawn from 250 specialists from around the world. Judging processes In 2019, IWSC wine judging moved to London for the first time. The competition makes use of over 250 specialist judges from all over the world. Many are Masters of Wine, Master Sommelier, some are winemakers or distillers, others are trade specialists, each judging in their special field. IWSC's Annual Award Ceremony The competition culminates in London in Autumn with the annual awards presentation and dinner, at the Roundhouse (previously the annual banquet was held at the City of London Guildhall). Presidents/Industry Champion A President/Industry Champion is selected annually from influential individuals in the wines and spirits industry. After their term, they serve on the competition’s Advisory Board. 2023 Richard Seale, Barbados 2022 Johann Krige, South Africa 2021 Michael Urquhart, UK 2020 Tamara Roberts, UK 2019 George Fistonich, New Zealand 2018 Facundo L. Bacardi 2017 Chris Blandy, Portugal 2016 Matteo Lunelli, Italy 2015 Neil McGuigan, Australia 2014 Dr Laura Catena, Argentina 2013 G. Garvin Brown IV, USA 2012 Mauricio González-Gordon y Díez, Spain 2011 Prince Robert of Luxembourg, France 2010 Prinz Michael zu Salm-Salm, Germany 2009 Sir Ian Good, UK 2008 Rafael Guilisasti, Chile 2007 Gina Gallo, USA 2006 Anthony von Mandl, Canada 2005 Wolf Blass, Australia 2004 Paul Symington, Portugal 2003 Claes Dahlbäck, Sweden 2002 Dominique Hériard Dubreuil, France 2001 Warren Winiarski, USA 2000 Baroness Philippine de Rothschild, France 1999 Miguel A. Torres, Spain 1998 Sir Anthony Greener, UK 1997 Jean Hugel, France 1996 Dr Anton Rupert, South Africa 1995 Marchese Leonardo de Frescobaldi, Italy 1994 Michael Jackaman, UK 1993 Mme May de Lencquesaing, France 1992 Chris Hancock Hon MW, Australia 1991 Peter Sichel, USA 1990 Robert Drouhin, France 1989 Jos Ignacio Domecq, Spain 1988 Marchese Piero Antinori, Italy 1987 Kenneth Grahame, UK 1986 Dr Max Lake, Australia 1985 Marquis de Goulaine, France 1984 Mme Odette Pol Roger, France 1983 Robert Mondavi Hon MW, USA 1982 Dr Hans Ambrosi, Germany 1981 Harry Waugh Hon MW, UK 1980 Peter Noble, CBE 1979 Cyril Ray, UK 1978 Sir Reginald Bennett VRD, UK 1977 Lord Montagu of Beaulieu, UK See also Spirits ratings References Further reading Fraser, Craig; (et al.) (2008). Fire Water: South African Brandy. Quivertree Publications. Page 40. . External links Official website Wine tasting Wine-related events Distilled drinks Wine awards Awards established in 1969 Recurring events established in 1969 Food and drink awards
International Wine and Spirit Competition
[ "Chemistry" ]
742
[ "Distillation", "Distilled drinks" ]
14,333,272
https://en.wikipedia.org/wiki/Dominance-based%20rough%20set%20approach
The dominance-based rough set approach (DRSA) is an extension of rough set theory for multi-criteria decision analysis (MCDA), introduced by Greco, Matarazzo and Słowiński. The main change compared to the classical rough sets is the substitution of the indiscernibility relation by a dominance relation, which permits one to deal with inconsistencies typical of the consideration of criteria and preference-ordered decision classes. Multicriteria classification (sorting) Multicriteria classification (sorting) is one of the problems considered within MCDA and can be stated as follows: given a set of objects evaluated by a set of criteria (attributes with preference-ordered domains), assign these objects to some pre-defined and preference-ordered decision classes, such that each object is assigned to exactly one class. Due to the preference ordering, improvement of evaluations of an object on the criteria should not worsen its class assignment. The sorting problem is very similar to the problem of classification, however, in the latter, the objects are evaluated by regular attributes and the decision classes are not necessarily preference ordered. The problem of multicriteria classification is also referred to as ordinal classification problem with monotonicity constraints and often appears in real-life applications when ordinal and monotone properties follow from the domain knowledge about the problem. As an illustrative example, consider the problem of evaluation in a high school. The director of the school wants to assign students (objects) to three classes: bad, medium and good (notice that class good is preferred to medium and medium is preferred to bad). Each student is described by three criteria: level in Physics, Mathematics and Literature, each taking one of three possible values bad, medium and good. Criteria are preference-ordered and improving the level from one of the subjects should not result in worse global evaluation (class). As a more serious example, consider classification of bank clients, from the viewpoint of bankruptcy risk, into classes safe and risky. This may involve such characteristics as "return on equity (ROE)", "return on investment (ROI)" and "return on sales (ROS)". The domains of these attributes are not simply ordered but involve a preference order since, from the viewpoint of bank managers, greater values of ROE, ROI or ROS are better for clients being analysed for bankruptcy risk. Thus, these attributes are criteria. Neglecting this information in knowledge discovery may lead to wrong conclusions. Data representation Decision table In DRSA, data are often presented using a particular form of decision table. Formally, a DRSA decision table is a 4-tuple S = ⟨U, Q, V, f⟩, where U is a finite set of objects, Q is a finite set of criteria, V = ⋃_(q∈Q) Vq, where Vq is the domain of the criterion q, and f: U × Q → V is an information function such that f(x, q) ∈ Vq for every (x, q) ∈ U × Q. The set Q is divided into condition criteria (the set C ≠ ∅) and the decision criterion (class) d. Notice that f(x, q) is the evaluation of object x on criterion q ∈ C, while f(x, d) is the class assignment (decision value) of the object. An example of decision table is shown in Table 1 below. Outranking relation It is assumed that the domain of a criterion q ∈ Q is completely preordered by an outranking relation ⪰q; x ⪰q y means that x is at least as good as (outranks) y with respect to the criterion q. Without loss of generality, we assume that the domain of q is a subset of the reals, Vq ⊆ ℝ, and that the outranking relation is the simple order between real numbers, so that the following relation holds: x ⪰q y ⟺ f(x, q) ≥ f(y, q). 
This relation is straightforward for a gain-type ("the more, the better") criterion, e.g. company profit. For a cost-type ("the less, the better") criterion, e.g. product price, this relation can be satisfied by negating the values from Vq. Decision classes and class unions Let T = {1, …, n}. The domain of the decision criterion, Vd, consists of n elements (without loss of generality we assume Vd = T) and induces a partition of U into n classes Cl = {Cl_t, t ∈ T}, where Cl_t = {x ∈ U : f(x, d) = t}. Each object x ∈ U is assigned to one and only one class Cl_t, t ∈ T. The classes are preference-ordered according to an increasing order of class indices, i.e. for all r, s ∈ T such that r > s, the objects from Cl_r are strictly preferred to the objects from Cl_s. For this reason, we can consider the upward and downward unions of classes, defined, respectively, as: Cl_t^≥ = ⋃_(s≥t) Cl_s and Cl_t^≤ = ⋃_(s≤t) Cl_s, for t ∈ T. Main concepts Dominance We say that x dominates y with respect to P ⊆ C, denoted by x D_P y, if x is at least as good as y on every criterion from P, i.e. x ⪰q y for all q ∈ P. For each P ⊆ C, the dominance relation D_P is reflexive and transitive, i.e. it is a partial pre-order. Given P ⊆ C and x ∈ U, let D_P^+(x) = {y ∈ U : y D_P x} and D_P^−(x) = {y ∈ U : x D_P y} represent the P-dominating set and the P-dominated set with respect to x, respectively. Rough approximations The key idea of the rough set philosophy is approximation of one knowledge by another knowledge. In DRSA, the knowledge being approximated is a collection of upward and downward unions of decision classes and the "granules of knowledge" used for approximation are P-dominating and P-dominated sets. The P-lower and the P-upper approximation of Cl_t^≥ with respect to P ⊆ C, denoted as P̲(Cl_t^≥) and P̄(Cl_t^≥), respectively, are defined as: P̲(Cl_t^≥) = {x ∈ U : D_P^+(x) ⊆ Cl_t^≥} and P̄(Cl_t^≥) = {x ∈ U : D_P^−(x) ∩ Cl_t^≥ ≠ ∅}. Analogously, the P-lower and the P-upper approximation of Cl_t^≤ with respect to P ⊆ C, denoted as P̲(Cl_t^≤) and P̄(Cl_t^≤), respectively, are defined as: P̲(Cl_t^≤) = {x ∈ U : D_P^−(x) ⊆ Cl_t^≤} and P̄(Cl_t^≤) = {x ∈ U : D_P^+(x) ∩ Cl_t^≤ ≠ ∅}. Lower approximations group the objects which certainly belong to the class union Cl_t^≥ (respectively Cl_t^≤). This certainty comes from the fact that object x ∈ U belongs to the lower approximation P̲(Cl_t^≥) (respectively P̲(Cl_t^≤)) if no other object in U contradicts this claim, i.e. every object which P-dominates x also belongs to the class union Cl_t^≥ (respectively, every object P-dominated by x belongs to Cl_t^≤). Upper approximations group the objects which could belong to Cl_t^≥ (respectively Cl_t^≤), since object x ∈ U belongs to the upper approximation P̄(Cl_t^≥) (respectively P̄(Cl_t^≤)) if there exists another object P-dominated by x from the class union Cl_t^≥ (respectively, an object P-dominating x from Cl_t^≤). The P-lower and P-upper approximations defined as above satisfy the following inclusion properties for all t ∈ T and for any P ⊆ C: P̲(Cl_t^≥) ⊆ Cl_t^≥ ⊆ P̄(Cl_t^≥) and P̲(Cl_t^≤) ⊆ Cl_t^≤ ⊆ P̄(Cl_t^≤). The P-boundaries (P-doubtful regions) of Cl_t^≥ and Cl_t^≤ are defined as: Bn_P(Cl_t^≥) = P̄(Cl_t^≥) − P̲(Cl_t^≥) and Bn_P(Cl_t^≤) = P̄(Cl_t^≤) − P̲(Cl_t^≤). Quality of approximation and reducts The ratio γ_P(Cl) = |U − ((⋃_(t∈T) Bn_P(Cl_t^≥)) ∪ (⋃_(t∈T) Bn_P(Cl_t^≤)))| / |U| defines the quality of approximation of the partition Cl into classes by means of the set of criteria P. This ratio expresses the relation between all the P-correctly classified objects and all the objects in the table. Every minimal subset P ⊆ C such that γ_P(Cl) = γ_C(Cl) is called a reduct of the set of criteria C and is denoted by RED_Cl(P). A decision table may have more than one reduct. The intersection of all reducts is known as the core. Decision rules On the basis of the approximations obtained by means of the dominance relations, it is possible to induce a generalized description of the preferential information contained in the decision table, in terms of decision rules. The decision rules are expressions of the form if [condition] then [consequent], that represent a form of dependency between condition criteria and decision criteria. Procedures for generating decision rules from a decision table use an inductive learning principle. We can distinguish three types of rules: certain, possible and approximate. 
Certain rules are generated from lower approximations of unions of classes; possible rules are generated from upper approximations of unions of classes and approximate rules are generated from boundary regions. Certain rules have the following form: if f(x, q1) ≥ r_q1 and f(x, q2) ≥ r_q2 and … f(x, qp) ≥ r_qp then x ∈ Cl_t^≥, or: if f(x, q1) ≤ r_q1 and f(x, q2) ≤ r_q2 and … f(x, qp) ≤ r_qp then x ∈ Cl_t^≤. Possible rules have a similar syntax, however the consequent part of the rule has the form: x could belong to Cl_t^≥, or the form: x could belong to Cl_t^≤. Finally, approximate rules have the syntax: if f(x, q1) ≥ r_q1 and … and f(x, qk) ≥ r_qk and f(x, qk+1) ≤ r_qk+1 and … and f(x, qp) ≤ r_qp then x ∈ Cl_s ∪ Cl_(s+1) ∪ … ∪ Cl_t. The certain, possible and approximate rules represent certain, possible and ambiguous knowledge extracted from the decision table. Each decision rule should be minimal. Since a decision rule is an implication, by a minimal decision rule we understand such an implication that there is no other implication with an antecedent of at least the same weakness (in other words, a rule using a subset of elementary conditions or/and weaker elementary conditions) and a consequent of at least the same strength (in other words, a rule assigning objects to the same union or sub-union of classes). A set of decision rules is complete if it is able to cover all objects from the decision table in such a way that consistent objects are re-classified to their original classes and inconsistent objects are classified to clusters of classes referring to this inconsistency. We call minimal each set of decision rules that is complete and non-redundant, i.e. exclusion of any rule from this set makes it non-complete. One of three induction strategies can be adopted to obtain a set of decision rules: generation of a minimal description, i.e. a minimal set of rules, generation of an exhaustive description, i.e. all rules for a given data matrix, generation of a characteristic description, i.e. a set of rules covering relatively many objects each, however, all together not necessarily all objects from the decision table. The most popular rule induction algorithm for the dominance-based rough set approach is DOMLEM, which generates a minimal set of rules. Example Consider the following problem of high school students’ evaluations: {| class="wikitable" style="text-align:center" border="1" |+ Table 1: Example—High School Evaluations ! object (student) !! q1 (Mathematics) !! q2 (Physics) !! q3 (Literature) !! d (global score) |- ! x1 |medium || medium || bad || bad |- ! x2 |good || medium || bad || medium |- ! x3 |medium || good || bad || medium |- ! x4 |bad || medium || good || bad |- ! x5 |bad || bad || medium || bad |- ! x6 |bad || medium || medium || medium |- ! x7 |good || good || bad || good |- ! x8 |good || medium || medium || medium |- ! x9 |medium || medium || good || good |- ! x10 |good || medium || good || good |} Each object (student) is described by three criteria q1, q2, q3, related to the levels in Mathematics, Physics and Literature, respectively. According to the decision attribute, the students are divided into three preference-ordered classes: Cl_1 (bad), Cl_2 (medium) and Cl_3 (good). Thus, the following unions of classes were approximated: Cl_1^≤, i.e. the class of (at most) bad students, Cl_2^≤, i.e. the class of at most medium students, Cl_2^≥, i.e. the class of at least medium students, Cl_3^≥, i.e. the class of (at least) good students. Notice that evaluations of objects x4 and x6 are inconsistent, because x4 has evaluations not worse than x6 on all three criteria but a worse global score. Therefore, lower approximations of class unions consist of the following objects: P̲(Cl_1^≤) = {x1, x5}, P̲(Cl_2^≤) = {x1, x2, x3, x4, x5, x6, x8}, P̲(Cl_2^≥) = {x2, x3, x7, x8, x9, x10}, P̲(Cl_3^≥) = {x7, x9, x10}. Thus, only the classes Cl_1^≤ and Cl_2^≥ cannot be approximated precisely. 
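The approximations quoted for this example can be reproduced mechanically from the definitions of P-dominating sets and lower and upper approximations, taking P to be all three criteria. The sketch below does this for the upward unions; the object labels x1 to x10 follow Table 1, while the numeric encoding of the grades and the function names are our own choices for illustration and are not part of the DRSA formalism itself.

```python
# Dominance-based rough approximations for the high-school example.
GRADE = {"bad": 0, "medium": 1, "good": 2}        # preference order of evaluations
CLASS = {"bad": 1, "medium": 2, "good": 3}        # preference order of decision classes

# (Mathematics, Physics, Literature) -> global score, as in Table 1.
table = {
    "x1": (("medium", "medium", "bad"), "bad"),
    "x2": (("good", "medium", "bad"), "medium"),
    "x3": (("medium", "good", "bad"), "medium"),
    "x4": (("bad", "medium", "good"), "bad"),
    "x5": (("bad", "bad", "medium"), "bad"),
    "x6": (("bad", "medium", "medium"), "medium"),
    "x7": (("good", "good", "bad"), "good"),
    "x8": (("good", "medium", "medium"), "medium"),
    "x9": (("medium", "medium", "good"), "good"),
    "x10": (("good", "medium", "good"), "good"),
}

def dominates(x, y):
    """x D_P y: x is at least as good as y on every criterion (P = all criteria)."""
    return all(GRADE[a] >= GRADE[b] for a, b in zip(table[x][0], table[y][0]))

def dominating_set(x):        # D_P^+(x): objects that dominate x
    return {y for y in table if dominates(y, x)}

def dominated_set(x):         # D_P^-(x): objects dominated by x
    return {y for y in table if dominates(x, y)}

def upward_union(t):          # Cl_t^>=: objects assigned to class t or better
    return {x for x, (_, cls) in table.items() if CLASS[cls] >= t}

def lower_upward(t):          # P-lower approximation of Cl_t^>=
    return {x for x in table if dominating_set(x) <= upward_union(t)}

def upper_upward(t):          # P-upper approximation of Cl_t^>=
    return {x for x in table if dominated_set(x) & upward_union(t)}

for t in (2, 3):
    lo, up = lower_upward(t), upper_upward(t)
    print(f"Cl_{t}^>=  lower={sorted(lo)}  boundary={sorted(up - lo)}")
# The boundary of "at least medium" comes out as {x4, x6}, the inconsistent pair.
```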
Their upper approximations are as follows: P̄(Cl_1^≤) = {x1, x4, x5, x6} and P̄(Cl_2^≥) = {x2, x3, x4, x6, x7, x8, x9, x10}, while their boundary regions are: Bn_P(Cl_1^≤) = Bn_P(Cl_2^≥) = {x4, x6}. Of course, since Cl_2^≤ and Cl_3^≥ are approximated precisely, we have P̄(Cl_2^≤) = Cl_2^≤, P̄(Cl_3^≥) = Cl_3^≥ and Bn_P(Cl_2^≤) = Bn_P(Cl_3^≥) = ∅. The following minimal set of 10 rules can be induced from the decision table: if then if and and then if then if and then if and then if and then if and then if then if then if and then The last rule is approximate, while the rest are certain. Extensions Multicriteria choice and ranking problems The other two problems considered within multi-criteria decision analysis, multicriteria choice and ranking problems, can also be solved using the dominance-based rough set approach. This is done by converting the decision table into a pairwise comparison table (PCT). Variable-consistency DRSA The definitions of rough approximations are based on a strict application of the dominance principle. However, when defining non-ambiguous objects, it is reasonable to accept a limited proportion of negative examples, particularly for large decision tables. Such an extended version of DRSA is called the Variable-Consistency DRSA model (VC-DRSA). Stochastic DRSA In real-life data, particularly for large datasets, the notions of rough approximations were found to be excessively restrictive. Therefore, an extension of DRSA, based on a stochastic model (Stochastic DRSA), which allows inconsistencies to some degree, has been introduced. Having stated the probabilistic model for ordinal classification problems with monotonicity constraints, the concepts of lower approximations are extended to the stochastic case. The method is based on estimating the conditional probabilities using the nonparametric maximum likelihood method which leads to the problem of isotonic regression. Stochastic dominance-based rough sets can also be regarded as a sort of variable-consistency model. Software 4eMka2 is a decision support system for multiple criteria classification problems based on dominance-based rough sets (DRSA). JAMM is a much more advanced successor of 4eMka2. Both systems are freely available for non-profit purposes on the Laboratory of Intelligent Decision Support Systems (IDSS) website. See also Rough sets Granular computing Multicriteria Decision Analysis (MCDA) References Chakhar S., Ishizaka A., Labib A., Saad I. (2016). Dominance-based rough set approach for group decisions, European Journal of Operational Research, 251(1): 206-224 Li S., Li T., Zhang Z., Chen H., Zhang J. (2015). Parallel Computing of Approximations in Dominance-based Rough Sets Approach, Knowledge-based Systems, 87: 102-111 Li S., Li T. (2015). Incremental Update of Approximations in Dominance-based Rough Sets Approach under the Variation of Attribute Values, Information Sciences, 294: 348-361 Li S., Li T., Liu D. (2013). Dynamic Maintenance of Approximations in Dominance-based Rough Set Approach under the Variation of the Object Set, International Journal of Intelligent Systems, 28(8): 729-751 External links The International Rough Set Society Laboratory of Intelligent Decision Support Systems (IDSS) at Poznań University of Technology. Extensive list of DRSA references on the Roman Słowiński home page. Theoretical computer science Machine learning algorithms Multiple-criteria decision analysis
Dominance-based rough set approach
[ "Mathematics" ]
2,882
[ "Theoretical computer science", "Applied mathematics" ]
14,334,415
https://en.wikipedia.org/wiki/Grzegorczyk%20hierarchy
The Grzegorczyk hierarchy, named after the Polish logician Andrzej Grzegorczyk, is a hierarchy of functions used in computability theory. Every function in the Grzegorczyk hierarchy is a primitive recursive function, and every primitive recursive function appears in the hierarchy at some level. The hierarchy deals with the rate at which the values of the functions grow; intuitively, functions in lower levels of the hierarchy grow slower than functions in the higher levels. Definition First we introduce an infinite set of functions, denoted Ei for some natural number i. We define E0(x, y) = x + y to be the addition function, and E1(x) = x² + 2 to be the unary function which squares its argument and adds two. Then, for each n greater than 1, En(x) = En−1^x(2), i.e. the x-th iterate of En−1 evaluated at 2. From these functions we define the Grzegorczyk hierarchy. ℰ^n, the n-th set in the hierarchy, contains the following functions: Ek for k < n; the zero function (Z(x) = 0); the successor function (S(x) = x + 1); the projection functions (p_i^m(t1, …, tm) = ti); the (generalized) compositions of functions in the set (if h, g1, g2, ... and gm are in ℰ^n, then f(u) = h(g1(u), g2(u), …, gm(u)) is as well, where u denotes the tuple of arguments); and the results of limited (primitive) recursion applied to functions in the set (if g, h and j are in ℰ^n and f(t, u) ≤ j(t, u) for all t and u, and further f(0, u) = g(u) and f(t + 1, u) = h(t, u, f(t, u)), then f is in ℰ^n as well). In other words, ℰ^n is the closure of the set Bn = {Z, S, (p_i^m), Ek : k < n} with respect to function composition and limited recursion (as defined above). Properties These sets clearly form the hierarchy ℰ^0 ⊆ ℰ^1 ⊆ ℰ^2 ⊆ ⋯ because they are closures over the Bn's and B0 ⊆ B1 ⊆ B2 ⊆ ⋯. They are strict subsets. In other words ℰ^n ⊊ ℰ^(n+1), because the hyperoperation H_n is in ℰ^n but not in ℰ^(n−1). ℰ^0 includes functions such as x+1, x+2, ... Every unary function f(x) in ℰ^0 is upper bounded by some x+n. However, ℰ^0 also includes more complicated functions like x∸1, x∸y (where ∸ is the monus sign defined as x∸y = max(x−y, 0)), etc. ℰ^1 provides all addition functions, such as x+y, 4x, ... ℰ^2 provides all multiplication functions, such as xy, x^4. ℰ^3 provides all exponentiation functions, such as x^y, 2^(2^(2^x)), and is exactly the set of elementary recursive functions. ℰ^4 provides all tetration functions, and so on. Notably, both the function U and the characteristic function of the predicate T from the Kleene normal form theorem are definable in a way such that they lie at level ℰ^0 of the Grzegorczyk hierarchy. This implies in particular that every recursively enumerable set is enumerable by some ℰ^0-function. Relation to primitive recursive functions The definition of ℰ^n is the same as that of the primitive recursive functions, PR, except that recursion is limited (f(t, u) ≤ j(t, u) for some j in ℰ^n) and the functions (Ek) for k < n are explicitly included in ℰ^n. Thus the Grzegorczyk hierarchy can be seen as a way to limit the power of primitive recursion to different levels. It is clear from this fact that all functions in any level of the Grzegorczyk hierarchy are primitive recursive functions (i.e. ℰ^n ⊆ PR) and thus ⋃n ℰ^n ⊆ PR. It can also be shown that all primitive recursive functions are in some level of the hierarchy, thus ⋃n ℰ^n = PR, and the sets ℰ^0, ℰ^1 ∖ ℰ^0, ℰ^2 ∖ ℰ^1, …, ℰ^n ∖ ℰ^(n−1), … partition the set of primitive recursive functions, PR. Meyer and Ritchie introduced another hierarchy subdividing the primitive recursive functions, based on the nesting depth of loops needed to write a LOOP program that computes the function. For a natural number i, let ℒi denote the set of functions computable by a LOOP program with LOOP and END commands nested no deeper than i levels. Fachini and Maggiolo-Schettini showed that ℒi coincides with ℰ^(i+1) for all integers i ≥ 2. Extensions The Grzegorczyk hierarchy can be extended to transfinite ordinals. Such extensions define a fast-growing hierarchy. 
To do this, the generating functions Eα must be recursively defined for limit ordinals (note that they have already been recursively defined for successor ordinals by the relation Eα+1(x) = Eα^x(2)). If there is a standard way of defining a fundamental sequence (λm) whose limit ordinal is λ, then the generating functions for limit ordinals can be defined by diagonalising over that sequence, Eλ(x) = Eλ_x(x). However, this definition depends upon a standard way of defining the fundamental sequence; a standard choice exists for all ordinals α < ε0 (see the references). The original extension was due to Martin Löb and Stan S. Wainer and is sometimes called the Löb–Wainer hierarchy. See also ELEMENTARY Fast-growing hierarchy Ordinal analysis Notes References Bibliography Computability theory Hierarchy of functions
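The generating functions E0, E1, E2, … from the Definition section grow quickly enough that only very small arguments are computable in practice. The sketch below implements the stated definitions directly (E0 is addition, E1 squares and adds two, and En for n ≥ 2 is the x-th iterate of En−1 evaluated at 2); the function names and the tiny sample values are our own choices.

```python
def E0(x, y):
    """E_0: addition."""
    return x + y

def E1(x):
    """E_1: square the argument and add two."""
    return x * x + 2

def E(n, x):
    """E_n(x) for n >= 2: the x-th iterate of E_{n-1} evaluated at 2."""
    if n == 1:          # base case used by the recursion below
        return E1(x)
    value = 2
    for _ in range(x):  # apply E_{n-1} exactly x times, starting from 2
        value = E(n - 1, value)
    return value

# Growth is visible even for tiny arguments; anything larger quickly becomes
# astronomically big.
print([E(2, x) for x in range(5)])   # [2, 6, 38, 1446, 2090918]
print([E(3, x) for x in range(2)])   # [2, 38]; E(3, 2) is already far too large to print
```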
Grzegorczyk hierarchy
[ "Mathematics" ]
979
[ "Computability theory", "Mathematical logic" ]
14,334,551
https://en.wikipedia.org/wiki/Namak%20Lake
Namak Lake (, i.e., salt lake) is a salt lake in Iran. It is located approximately east of the city of Qom and of Aran va bidgol at an elevation of above sea level. The lake is a remnant of the Paratethys sea, which started to dry up from the Pleistocene epoch, leaving Lake Urmia and the Caspian Sea and other bodies of water. The lake has a surface area of about , but most of this is dry. Water only covers . The lake only reaches a depth between to . Environmental characteristics The air in this area is very dry and the temperature difference between day and night reaches 70 degrees Celsius. Due to the high rate of evaporation and very high salinity of the water, Qom's salt lake has a desert-like structure and is covered with thick layers of salt. Also, this lake is known as the habitat of some special plant and animal species that have the ability to live in the harsh and salty conditions of this region. Aliabad Caravanserai, Red Castle, Desert National Park, Sefidab Caravanserai and Manzariyeh Caravanserai are some of the sightseeing places in Qom around Namak Lake. References Lakes of Iran Endorheic lakes of Asia Landforms of Qom province Salt flats Salt flats of Iran
Namak Lake
[ "Chemistry" ]
278
[ "Salt flats", "Salts" ]
14,335,052
https://en.wikipedia.org/wiki/Thyrotoxicosis%20factitia
Thyrotoxicosis factitia (alimentary thyrotoxicosis, exogenous thyrotoxicosis) is a condition of thyrotoxicosis caused by the ingestion of exogenous thyroid hormone. It can be the result of mistaken ingestion of excess drugs, such as levothyroxine and triiodothyronine, or as a symptom of Munchausen syndrome. It is an uncommon form of hyperthyroidism. Patients present with hyperthyroidism and may be mistaken for Graves’ disease, if TSH receptor positive, or thyroiditis because of absent uptake on a thyroid radionuclide uptake scan due to suppression of thyroid function by exogenous thyroid hormones. Ingestion of thyroid hormone also suppresses thyroglobulin levels helping to differentiate thyrotoxicosis factitia from other causes of hyperthyroidism, in which serum thyroglobulin is elevated. Caution, however, should be exercised in interpreting thyroglobulin results without thyroglobulin antibodies, since thyroglobulin antibodies commonly interfere in thyroglobulin immunoassays causing false positive and negative results which may lead to clinical misdirection. In such cases, increased fecal thyroxine levels in thyrotoxicosis factitia may help differentiate it from other causes of hyperthyroidism. See also Foodborne illness Liothyronine References External links Thyroid disease Toxicology
Thyrotoxicosis factitia
[ "Environmental_science" ]
309
[ "Toxicology" ]
14,335,071
https://en.wikipedia.org/wiki/Oil%20megaprojects
Oil megaprojects are large oil field projects. Summary of megaprojects Definition of megaproject: 20,000 barrels per day (3,200 m3/d) of new liquid fuel capacity. Megaprojects predicted for individual years Application to oil supply forecasting A series of project tabulations and analyses by Chris Skrebowski, editor of Petroleum Review, have presented a more pessimistic picture of future oil supply. In a 2004 report, based on an analysis of new projects over , he argued that although ample supply might be available in the near-term, after 2007 "the volumes of new production for this period are well below likely requirements." By 2006, although "the outlook for future supply appears somewhat brighter than even six months ago", nonetheless, if "all the factors reducing new capacity come into play, markets will remain tight and prices high. Only if new capacity flows into the system rather more rapidly than of late, will there be any chance of rebuilding spare capacity and softening prices." The smallest fields, even in aggregate, do not contribute a large fraction of the total. For example, a relatively small number of giant and super-giant oilfields are providing almost half of the world production. Decline rates The most important variable is the average decline rate for Fields in Production (FIP) which is difficult to assess. See also Energy law List of largest oil fields Giant oil and gas fields List of Russian megaprojects References Further reading Oil fields
Oil megaprojects
[ "Engineering" ]
306
[ "Oil megaprojects", "Megaprojects" ]
14,335,458
https://en.wikipedia.org/wiki/Deprivation%20index
A deprivation index or poverty index (or index of deprivation or index of poverty) is a data set to measure relative deprivation (a measure of poverty) of small areas. Such indices are used in spatial epidemiology to identify socio-economic confounding. History In 1983, Brian Jarman published the Jarman Index, also known as the Underprivileged Area Score, to identify underprivileged areas. Since then, many other indices have been developed. Australia Canada Statistics Canada publishes the Canadian Index of Multiple Deprivation. China China's county-level area deprivation index (CADI) Europe European Deprivation Index The European Deprivation Index was published by Launoy et al in 2018 with a goal of addressing social inequalities in health. Laeken indicators The Laeken indicators is a set of common European statistical indicators on poverty and social exclusion, established at the European Council of December 2001 in the Brussels quarter of Laeken, Belgium. They were developed as part of the Lisbon Strategy, of the previous year, which envisioned the coordination of European social policies at country level based on a set of common goals. Laeken indicators include the following. At-risk-of-poverty rate At-risk-of-poverty threshold S80/S20 income quintile share ratio Persistent at-risk-of-poverty rate Persistent at-risk-of-poverty rate (alternative threshold) Relative median at-risk-of-poverty gap Regional cohesion Long-term unemployment rate Persons living in jobless households Early school leavers not in education or training Life expectancy at birth Self defined health status Dispersion around the at-risk-of-poverty threshold At-risk-of-poverty rate anchored at one moment in time At-risk-of-poverty rate before cash social transfers Gini coefficient In-work at risk of poverty rate Long term unemployment share Very long term unemployment rate Most of these indicators are discriminated by various criteria (gender, age group, household type, etc.). France Germany The German Index of Multiple Deprivation (GIMD) Italy The Italian deprivation index United Kingdom Indices of Multiple Deprivation Indices of multiple deprivation (IMD) are datasets used within the UK to classify the relative deprivation (a measure of poverty) of small areas. Multiple components of deprivation are weighted with different strengths and compiled into a single score of deprivation. Small areas are then ranked by deprivation score. As such, deprivation scores must be treated as an ordinal variable. They are created by the British Department for Communities and Local Government (DCLG). The principle behind the index is to target government action in the areas which need it most. The calculation and publication of the indices is devolved and indices of multiple deprivation for Wales, Scotland, England, and Northern Ireland are calculated separately. While the components of deprivation that make up the overall deprivation score are similar in all four nations of the UK the weights assigned to each component, the size of the geographies for which deprivation scores are calculated, and the years of calculation are different. As a result levels of deprivation cannot be easily compared between nations. The geography at which IMDs are produced varies across the nations of the UK and has varied over time. Currently the smallest geography for which IMDs are published is LSOA level in both England and Wales, data zone level for Scotland, and Super Output Area (SOA) for Northern Ireland. 
Early versions of the English IMDs were published at electoral ward and English local authority level. The use of IMDs in social analysis aims to balance the desire for a single number describing the concept of deprivation in a place and the recognition that deprivation has many interacting components. IMDs may be an improvement over simpler measures of deprivation such as low average household disposable income because they capture variables such as the advantage of access to a good school and the disadvantage of exposure to high levels of air pollution. A potential disadvantage is that the choice of components and the weighting of those components in the construction of the overall multiple deprivation score is unavoidably subjective. Using an IMD to assess outcomes with a deprivation gradient may introduce circularity or endogeneity bias if the outcome overlaps with an IMD indicator. For instance, standardised mortality rates, which show a deprivation gradient, contribute to the health domain of the Scottish IMD. While evidence suggests minimal impact on inequalities research, researchers often use only the income domain to avoid this bias. Cases for indexes of multiple deprivation at larger and smaller geographies IMDs are calculated separately for England, Wales, Scotland, and Northern Ireland and are not comparable across them. While the geographies, the input measures, and the weights assigned to each input measure are different in all four countries, they are similar enough to calculate a combined UK IMD with only small sacrifices in data quality. Decisions within the UK that are taken nationally would be usefully informed by a UK index of multiple deprivation and this work has been proven possible and performed. The most recent whole-UK index of multiple deprivation was compiled by MySociety in 2021. There are also examples of IMDs being created for smaller geographies within nations. This is particularly important in places with very high deprivation in almost all areas. For example, using English IMDs in Manchester is not useful for targeting local interventions since over half of the city is classed as being in England's most deprived decile. By using raw deprivation scores for small areas within the area of interest before they are ranked at the national level, a local IMD can be calculated showing relative deprivation within a place instead of its relative deprivation within England. Applicability of IMDs to the analysis of very diverse areas IMDs are the property of a small area and represent the average characteristics of the people living in that area. They are not the property of any single person living within the area. Research has demonstrated IMDs have low sensitivity and specificity for detecting income- and employment-deprived individuals. Failure by researchers to consider this can lead to misleading features in analysis based on IMDs. This is a particularly large risk in areas which are very diverse due to social housing and mixed community policies such as central London. In these settings, a mixed community with a mix of very low income families in poor health and very high income families in good health can return a middling IMD score that represents neither group well and fails to provide useful insight to users of analysis based on IMD data. Other groups not well represented by IMDs are mobile communities and people experiencing homelessness, some of the most deprived members of society. 
National indices Responsibility for the production of publication of IMDs varies by the nation that they cover. Northern Ireland Statistics and Research Agency (NISRA) publishes IMDs for Northern Ireland. StatsWales publishes IMDs for Wales. The Scottish Government publishes IMDs for Scotland. The UK Department for Levelling Up, Housing and Communities (DLUHC) publishes IMDs for England. Early version of English IMDs were produced by the Social Disadvantage Research Group at the University of Oxford. The most recent IMDs for the four nations of the UK are, Northern Ireland Multiple Deprivation Measure 2017 (NIMDM2017). English Indices of Deprivation 2019. Scottish Index of Multiple Deprivation (SIMD) 2020. Welsh Index of Multiple Deprivation (WIMD) 2019. Scottish Index of Multiple Deprivation The Scottish index of multiple deprivation (SIMD) is used by local authorities, the Scottish government, the NHS and other government bodies in Scotland to support policy and decision making. It won the Royal Statistical Society's Excellence in Official Statistics Awards in 2017. The SIMD 2020 is composed of 43 indicators grouped into seven domains of varying weight: income, employment, health, education, skills and training, housing, geographic access and crime. These seven domains are calculated and weighted for 6,976 small areas, called ‘data zones’, with roughly equal population. With the population total at 5.3 million that comes to an average population of 760 people per data zone. 1983: Jarman Index, Underprivileged Area Score In 1983, Brian Jarman published the Underprivileged Area Score, which became known as the Jarman Index. This measured socio-economic variation across small geographical areas. The score is an outcome of the need identified in the Acheson Committee Report (into General Practitioner (GP) services in the UK) to create an index to identify 'underprivileged areas' where there were high numbers of patients and hence pressure on general practitioner services. Its creation involved the random distribution of a questionnaire among general practitioners throughout the UK. This was then used to obtain statistical weights for a calculation of a composite index of underprivileged areas based on GPs' perceptions of workload and patient need. 1988: Townsend Deprivation Index The Townsend index is a measure of material deprivation within a population. It was first described by sociologist Peter Townsend in 1988. The measure incorporates four variables: Unemployment (as a percentage of those aged 16 and over who are economically active); Non-car ownership (as a percentage of all households); Non-home ownership (as a percentage of all households); and Household overcrowding. These variables can be measured for the population of a given area and combined (via a series of calculations involving log transformations and standardisations) to give a “Townsend score” for that area. A greater Townsend index score implies a greater degree of deprivation. Areas may be “ranked” according to their Townsend score as a means of expressing relative deprivation. A Townsend score can be calculated for any area where information is available for the four index variables. Commonly, census data are used and scores are calculated at the level of census output areas. Scores for these areas may be linked or mapped to other geographical areas, such as postcodes, to make the scores more applicable in practice. The Townsend index has been the favoured deprivation measure among UK health authorities. 
Researchers at the University of Bristol's eponymous “Townsend Centre for International Poverty Research” continue to work on “meaningful measures of poverty”. 1991: Carstairs Index The Carstairs index was developed by Vera Carstairs and Russell Morris, and published in 1991 as Deprivation and Health in Scotland. The work focuses on Scotland, and was an alternative to the Townsend Index to avoid the use of households as denominators. The Carstairs index is based on four Census variables: low social class, lack of car ownership, overcrowding and male unemployment and the overall index reflects the material deprivation of an area, in relation to the rest of Scotland. Carstairs indices are calculated at the postcode sector level, with average population sizes of approximately 5,000 persons. The Carstairs index makes use of data collected at the Census to calculate the relative deprivation of an area, therefore there have been four versions: 1981, 1991, 2001 and 2011. The Carstairs indices are routinely produced and published by the MRC/CSO Social and Public Health Sciences Unit at the University of Glasgow. Methodology The components of the Carstairs score are unweighted, and so to ensure that they all have equal influence over the final score, each variable is standardised to have a population-weighted mean of zero, and a variance of one, using the z-score method. The Carstairs index for each area is the sum of the standardised values of the components. Indices may be positive or negative, with negative scores indicating that the area has a lower level of deprivation, and positive scores suggesting the area has a relatively higher level of deprivation. The indices are typically ordered from lowest to highest, and grouped into population quintiles. In the 1981, 1991 and 2001 indices, quintile 1 represented the least deprived areas, and quintile 5 represented the most deprived. In 2011, the order was reversed, in line with the ordering of the Scottish Index of Multiple Deprivation. Changes to the variables The low social class component of the 1981 and 1991 Carstairs index was created using the Registrar General's Social Class (later Social Class for Occupation). In 2001, this was superseded by the National Statistics Socio-economic Classification (NS-SEC). This meant that the definition of low social class had to be amended to reflect the approximate operational categories. The definition of overcrowding was amended between 1981 and 1991, due to the inclusion of kitchens of at least 2 metres wide into the room count in the census. Index of Multiple Deprivation 2000 The Index of Multiple Deprivation 2000 (IMD 2000) showed relative levels of social and economic deprivation across all the counties of England at a ward level, the first national study of its kind. Deprivation across the 8414 wards in the country was assessed, using the criteria of income, employment, health, education, housing, access, and child poverty. Wards ranking in the most deprived 10 per cent in the country were earmarked for additional funding and assistance. The most deprived wards in England were found to be Benchill in Manchester, Speke in Liverpool, Thorntree in Middlesbrough, Everton in Liverpool, and Pallister in Middlesbrough. Indices of Deprivation 2004 IMD2000 was the subject of some controversy, and was succeeded by the Indices of Deprivation 2004 (ID 2004) which abandoned ward-level data and sampled much smaller geographical areas. 
It is unusual in its inclusion of a measure of geographical access as an element of deprivation and in its direct measure of poverty (through data on benefit receipts). The ID 2004 is based on the idea of distinct dimensions of deprivation which can be recognised and measured separately. These are then combined into a single overall measure. The Index is made up of seven distinct dimensions of deprivation called Domain Indices. Whilst it is known as the ID2004, most of the data actually dates from 2001. The Indices of deprivation 2004 are measured at the Lower Layer Super Output Area level. Super Output Areas were developed by the Office for National Statistics (ONS) from the Census 2001 Output Areas. There are two levels, the lowest (which the Index is based upon) being smaller than wards and containing a minimum of 1,000 people and 400 households. The middle layer contains a minimum of 5,000 people and 2,000 households. Earlier proposals to introduce Upper Layer Super Output Areas were dropped due to lack of demand. In addition to Super Output Areas, Summaries of the ID 2004 are presented at District level, County level and Primary Care Trust (PCT) level. While each SOA is of higher resolution than the highest resolution ward index data of the IMD2000 and therefore better at identifying "pockets" of deprivation within wards the 2004 system has its problems. Some areas of deprivation can still be hidden because of the size of SOAs. Examples of this can be found by comparing central areas of Keighley using the Bradford District Deprivation Index (developed by Bradford Council produced at 1991 Census Enumeration District level) with the ID2004. Additionally SOAs were tasked with providing complete coverage of England and Wales – this combined with the minimum population and household counts within each SOA means that large areas of agricultural, commercial and industrial land have to be included within a residential area that borders them – thus when some very deprived residential areas are mapped, a large area of supposed deprivation emerges, however most of it may not be so but rather has a wide area of relative affluence around it – these can appear to be a greater problem than many smaller completely residential SOAs in which higher concentrations of deprived people live but mixed with more affluent neighbours. Indices of Deprivation 2007 The Indices of Deprivation 2007 (ID 2007) is a deprivation index at the small area level was released on 12 June 2007. It follows the ID2004 and because much of the datasets are the same or similar between indices, it allows for a comparison of 'relative deprivation' of an area between the two indices. While it is known as the ID2007, most of the data actually dates from 2005, and most of the data for the ID2004 was from 2001. The new Index of Multiple Deprivation 2007 (IMD 2007) is a Lower layer Super Output Area (LSOA) level measure of multiple deprivation, and is made up of seven LSOA level domain indices. There are also two supplementary indices (Income Deprivation Affecting Children and Income Deprivation Affecting Older People). Summary measures of the IMD 2007 are presented at local authority district level and county council level. The LSOA level Domain Indices and IMD 2007, together with the local authority district and county summaries are referred to as the Indices of Deprivation 2007 (ID 2007).(Rusty 2009) The ID 2007 are based on the approach, structure and methodology that were used to create the previous ID 2004. 
The ID 2007 updates the ID 2004 using more up-to-date data. The new IMD 2007 contains seven domains which relate to income deprivation, employment deprivation, health deprivation and disability, education skills and training deprivation, barriers to housing and services, living environment deprivation, and crime. Like the ID2004 it is unusual in that it includes a measure of geographical access as an element of deprivation and its direct measure of poverty (through data on benefit receipts). The ID 2007 is based on the idea of distinct dimensions of deprivation which can be recognised and measured separately. These are then combined into a single overall measure. The Index is made up of seven distinct dimensions of deprivation called Domain Indices, which are: income; employment; health and disability, education, skills, and training; barriers to housing and services; living environment; and crime. Like the ID2004, the ID2007 are measured at Lower Layer Super Output Areas and have similar strengths and weakness regarding concentrated pockets of deprivation. In addition to Super Output Areas, summary measures of the ID2007 are presented at district level, county level and Primary Care Trust (PCT) level. Indices of Deprivation 2010 The Indices of Deprivation 2010 (ID 2010) was released on 24 March 2011. It follows the ID2007 and because much of the datasets are the same or similar between indices allows a comparison of "relative deprivation" of an area between the two indices. While it is known as the ID2010, most of the data actually dates from 2008. The ID 2010 found that 5 million people lived in the most deprived areas in England in 2008 and 38 per cent of them were income deprived. The most deprived area in the country is in the village of Jaywick on the Essex coast. The local authorities with the highest proportion of lower layer Super Output Areas (LSOAs) were in Liverpool, Middlesbrough, Manchester, Knowsley, the City of Kingston upon Hull, Hackney and Tower Hamlets. 98% of the most deprived LSOAs are in urban areas but there are also pockets of deprivation across rural areas. 56% of local authorities contain at least one LSOA amongst the 10 per cent most deprived in England. 88% of the LSOAs that are the most deprived in 2010 were also amongst the most deprived in 2007. Indices of Deprivation 2019 The Indices of Deprivation 2019 (ID 2019) was published in September 2019. It has seven domains of deprivation: income, employment, education, health, crime, barriers to housing and services, and living environment. These domains each have multiple components. For example the Barriers to Housing and Services considers seven components including levels of household overcrowding, homelessness, housing affordability, and the distance by road to four types of key amenity (post office, primary school, supermarket, and GP surgery). Department of Environment Index The Department of Environment Index (DoE) is an index of urban poverty published by the Department for Environment, Food and Rural Affairs and designed to assess relative levels of deprivation in local authorities in England. The DoE has three dimensions of deprivation: social, economic and housing. United States The Area Deprivation Index (ADI). US Department of Health and Human Services. September 2022, developed by the U.S. Health Resources and Services Administration. The index is currently being used by the Centers for Medicare & Medicaid Services to adjust financial benchmarks in various Value-based health care models. 
However, some researchers have pointed out that applying ADI in practice has several limitations. Social Deprivation Index by the American Academy of Family Physicians Social Vulnerability Index by the U.S. Centers for Disease Control and Prevention Switzerland The Swiss neighbourhood index of SEP (Swiss-SEP) References Demographics of England Geodemographic databases Human geography Measurements and definitions of poverty Medical statistics Office for National Statistics Social statistics data
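The Carstairs calculation described in the Methodology subsection above (standardise each of the four census components to a population-weighted mean of zero and variance of one with z-scores, then sum the unweighted standardised values) can be sketched in a few lines of code. The areas, populations and component values below are invented purely to show the arithmetic; they are not census data, and the function names are our own.

```python
# Hypothetical small areas: (population, [overcrowding %, male unemployment %,
# low social class %, no-car households %]) -- the four Carstairs components.
areas = {
    "A": (4800, [ 8.0,  6.0, 20.0, 30.0]),
    "B": (5200, [15.0, 11.0, 34.0, 55.0]),
    "C": (5000, [ 4.0,  3.0, 12.0, 18.0]),
}

def weighted_mean_sd(values, weights):
    """Population-weighted mean and standard deviation of one component."""
    total = sum(weights)
    mean = sum(v * w for v, w in zip(values, weights)) / total
    var = sum(w * (v - mean) ** 2 for v, w in zip(values, weights)) / total
    return mean, var ** 0.5

def carstairs_scores(areas):
    names = list(areas)
    pops = [areas[n][0] for n in names]
    scores = {n: 0.0 for n in names}
    for i in range(4):                      # z-score each component, then sum
        col = [areas[n][1][i] for n in names]
        mean, sd = weighted_mean_sd(col, pops)
        for n in names:
            scores[n] += (areas[n][1][i] - mean) / sd
    return scores

for name, score in sorted(carstairs_scores(areas).items(), key=lambda kv: kv[1]):
    print(name, round(score, 2))   # negative = less deprived, positive = more deprived
```

Assigning the published quintiles would then amount to ranking these scores and cutting the ranked list into five population-weighted groups.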
Deprivation index
[ "Environmental_science" ]
4,181
[ "Environmental social science", "Human geography" ]
14,335,622
https://en.wikipedia.org/wiki/Loss%20of%20United%20Kingdom%20child%20benefit%20data%20%282007%29
The loss of United Kingdom child benefit data was a data breach incident in October 2007, when two computer discs owned by HM Revenue and Customs containing data relating to child benefit went missing. The incident was announced by the Chancellor of the Exchequer, Alistair Darling, on 20 November 2007. The two discs contained the personal details of all families in the United Kingdom (UK) claiming child benefit; take-up of child benefit in the UK is near 100%. The loss The discs were sent by junior staff at HM Revenue and Customs (HMRC) based at Waterview Park in Washington, Tyne and Wear, to the National Audit Office (NAO), as unrecorded internal mail via TNT on 18 October. On 24 October the NAO complained to HMRC that they had not received the data. On 8 November, senior officials in HMRC were informed of the loss, with the Chancellor of the Exchequer, Alistair Darling, being informed on 10 November. On 20 November Darling announced the loss to Parliament. The lost data was thought to concern approximately 25 million people in the UK (nearly half of the country's population). The personal data on the missing discs was reported to include names and addresses of parents and children and dates of birth of the children, together with the National Insurance numbers and bank or building society details of their parents. The "password protection" in question is that provided by WinZip version 8. This is a weak, proprietary scheme (unnamed encryption and hash algorithms) with well-known attacks. Anyone competent in computing would be able to break this protection by downloading readily-available tools. WinZip version 9 introduced AES encryption, which would have been secure and only breakable by a brute-force attack. In a list of frequently asked questions on the BBC News website, a breakdown of the loss was reported as being: 7.25 million claimants 15.5 million children, including some who no longer qualify but whose family is claiming for a younger child 2.25 million 'alternative payees' such as partners or carers 3,000 'appointees' who claim the benefit under court instructions 12,500 agents who claim the benefit on behalf of a third party Whilst government ministers claimed that a junior official was to blame, the Conservatives said that the fault lay in part with senior management. This was based on a claim that the National Audit Office had requested that bank details be removed from the data before it was sent, but that HMRC had denied this request because it would be "too costly and complicated". Emails released on 22 November confirmed that senior HMRC officials had been made aware of the decision, on cost grounds, not to strip out sensitive information. The cost of removing the sensitive information was given as £5,000, although an academic study later found the cost to be substantially less (£650). According to the IT trade journal Computer Weekly, back in March 2007 the NAO had asked for complete information from the child benefit database to be sent by post on CDs, instead of a sample of the database. The first time this was done, things went smoothly and the package was sent by registered post. This time, however, it was sent unregistered through the courier. It was later revealed, on 17 December 2007, that HMRC's data protection manual was itself restricted to senior members of staff; junior civil servants had access only to a summary of what the manual says on security. Other data scandals This was followed by several other data scandals. 
On 17 December it was revealed by Ruth Kelly that the details of three million learner drivers had been lost in the United States. However, the only details said to be lost were the name, address, phone number, fee paid, test centre, payment code and e-mail address, so the risk of financial fraud, and hence the level of alarm, was limited. On 23 December it was revealed that nine National Health Service (NHS) trusts had also lost the data of hundreds of thousands of patients, some of it archive information, some of it medical records, contact details and soft financial data. A few other trusts also lost data, but found it fairly quickly. Several other UK firms have also admitted security failings. Response Darling stated that there was no indication that the details had fallen into criminal hands, but he urged those affected to monitor their bank accounts. He said "If someone is the innocent victim of fraud as a result of this incident, people can be assured they have protection under the Banking Code so they will not suffer any financial loss as a result." HMRC then set up a Child Benefit Helpline for those concerned about the data loss. The incident was a breach of the UK's Data Protection Act and resulted in the resignation of HMRC chairman Paul Gray; Darling commented that the discs were probably destroyed when "the hunt was on, probably within days" and that there was an "opaque" management structure at HMRC, making it difficult to see who was responsible for what. Gray was subsequently found to be working at the Cabinet Office. The Metropolitan Police and the Independent Police Complaints Commission both investigated the security breach, and uniformed police officers investigated HMRC offices. The loss led to much criticism by the Acting Leader of the Liberal Democrats Vince Cable and Shadow Chancellor George Osborne. Osborne said that it was the "final blow for the ambitions of this government to create a national ID database". Cable also criticised the use of discs in the modern age of electronic data transfer. Spokespersons for Gordon Brown, however, said that the Prime Minister fully supported Darling, and said that Darling had not expressed any intention to resign. The general reaction of the public was one of anger and worry. Banks, individuals, businesses and government departments became more vigilant over data fraud and identity theft, and the government pledged to be more careful with data. The public and media were particularly angry that the delivery had not been registered or recorded, and that the data was not securely encrypted. Nick Assinder, a political correspondent at the BBC, expressed the opinion that Darling was "on borrowed time". George Osborne, who questioned whether Darling was "up to the job", suggested that it would be a matter of days before a decision was made regarding Darling's future. However, Darling remained Chancellor until Labour's defeat in 2010. TNT stated that, as the delivery was not recorded, it would not be possible even to ascertain whether it had actually been sent, let alone where it went. Jeremy Clarkson direct debit fraud On 7 January 2008, Jeremy Clarkson found himself the subject of direct debit fraud after publishing his bank account and sort code details in his column in The Sun to make the point that public concern over the scandal was unnecessary. He wrote, "All you'll be able to do with them is put money into my account. Not take it out. Honestly, I've never known such a palaver about nothing." 
Someone then used these details to set up a £500 direct debit to the charity Diabetes UK. In his next Sunday Times column, Clarkson wrote, "I was wrong and I have been punished for my mistake." Under the terms of the Direct Debit Guarantee, the payment could be reversed. See also List of UK government data losses United Kingdom government security breaches References External links Alistair Darling's statement to Parliament HMRC letter of apology Brown apologizes for records loss, with timeline of events Child benefit data misplacement Data security Political scandals in the United Kingdom HM Revenue and Customs
Loss of United Kingdom child benefit data (2007)
[ "Engineering" ]
1,531
[ "Cybersecurity engineering", "Data security" ]
14,335,846
https://en.wikipedia.org/wiki/Puccinellia
Puccinellia is a genus of plants in the grass family, known as alkali grass or salt grass. These grasses grow in wet environments, often in saline or alkaline conditions. They are native to temperate to Arctic regions of the Northern and Southern Hemispheres. Selected species Puccinellia agrostidea Sorensen Bent alkali grass or tundra alkali grass Puccinellia ambigua Sorensen - Alberta alkali grass Puccinellia americana Sorensen - American alkali grass Puccinellia andersonii Swallen - Anderson's alkali grass Puccinellia angustata (R.Br.) Rand & Redf. - Narrow alkali grass Puccinellia arctica (Hook.) Fern. & Weath. - Arctic alkali grass Puccinellia bruggemannii Sorensen - Prince Patrick alkali grass Puccinellia convoluta (Hornem.) Hayek - Puccinellia coreensis Honda - Korean alkaligrass Puccinellia deschampsioides Sorensen - Polar alkali grass Puccinellia distans (Jacq.) Parl. - Spreading alkali grass, weeping alkali grass or reflexed saltmarsh-grass Puccinellia fasciculata (Torr.) E.P.Bicknell - Torrey alkali grass or Borrer's saltmarsh-grass Puccinellia fernaldii (A.Hitchc.) E.G.Voss = Torreyochloa pallida var. fernaldii Puccinellia festuciformis (Host) Parl. - Puccinellia groenlandica Sorensen - Greenland alkali grass Puccinellia howellii J.I.Davis - Howell's alkali grass Puccinellia hultenii Swallen - Hulten's alkali grass Puccinellia interior Sorensen - Interior alkali grass Puccinellia kamtschatica Holmb. - Alaska alkali grass Puccinellia kurilensis (Takeda) Honda - Dwarf alkali grass Puccinellia langeana (Berlin) T.J.Sorensen ex Hultén - Puccinellia laurentiana Fern. & Weath. - Tracadigash Mountain alkali grass Puccinellia lemmonii (Vasey) Scribn. - Lemmon's alkali grass Puccinellia limosa (Schur) Holmb. - Puccinellia lucida Fern. & Weath. - Shining alkali grass Puccinellia macquariensis (Cheeseman) Allan & Jansen Puccinellia macra Fern. & Weath. - Bonaventure Island alkali grass Puccinellia maritima (Huds.) Parl. - Seaside alkali grass or common saltmarsh-grass Puccinellia nutkaensis (J.Presl) Fern. & Weath. - Nootka alkali grass Puccinellia nuttalliana (J.A.Schultes) A.S.Hitchc. - Nuttall's alkali grass Puccinellia parishii A.S.Hitchc. - Bog alkali grass or Parish's alkali grass Puccinellia perlaxa (N.G.Walsh) N.G.Walsh & A.R.Williams - Plains saltmarsh-grass Puccinellia phryganodes (Trin.) Scribn. & Merr. - Creeping alkali grass Puccinellia poacea Sorensen - Floodplain alkali grass Puccinellia porsildii Sorensen - Porsild's alkali grass Puccinellia pumila (Vasey) A.S.Hitchc. - Dwarf alkali grass Puccinellia pungens (Pau) Paunero - Puccinellia rosenkrantzii Sorensen - Rosenkrantz's alkali grass Puccinellia rupestris (With.) Fern. & Weath. - British alkali grass or stiff saltmarsh-grass Puccinellia simplex Scribn. - California alkali grass Puccinellia stricta (Hook.f.) C.Blom - Australian saltmarsh-grass Puccinellia sublaevis (Holmb.) Tzvelev - Smooth alkali grass Puccinellia tenella Holmb. ex Porsild - Tundra alkali grass Puccinellia tenuiflora (Griesb.) Scribn. & Merr. - Puccinellia vaginata (Lange) Fern. & Weath. - Sheathed alkali grass Puccinellia vahliana (Liebm.) Scribn. & Merr. - Vahl's alkali grass Puccinellia wrightii (Scribn. & Merr.) Tzvelev - Wright's alkali grass List sources : References External links Jepson Manual Treatment USDA Plants Profile Poaceae genera Halophytes
Puccinellia
[ "Chemistry" ]
1,063
[ "Halophytes", "Salts" ]
14,336,399
https://en.wikipedia.org/wiki/Chromatic%20spectral%20sequence
In mathematics, the chromatic spectral sequence is a spectral sequence, introduced by , used for calculating the initial term of the Adams spectral sequence for Brown–Peterson cohomology, which is in turn used for calculating the stable homotopy groups of spheres. See also Chromatic homotopy theory Adams-Novikov spectral sequence p-local spectrum References Spectral sequences
Chromatic spectral sequence
[ "Mathematics" ]
74
[ "Topology stubs", "Topology" ]
11,659,697
https://en.wikipedia.org/wiki/Wine/water%20mixing%20problem
In the wine/water mixing problem, one starts with two barrels, one holding wine and the other an equal volume of water. A cup of wine is taken from the wine barrel and added to the water. A cup of the wine/water mixture is then returned to the wine barrel, so that the volumes in the barrels are again equal. The question is then posed—which of the two mixtures is purer? The answer is that the mixtures will be of equal purity. The solution still applies no matter how many cups of any sizes and compositions are exchanged, or how little or much stirring is done to any barrel at any point in time, as long as at the end each barrel has the same amount of liquid. The problem can be solved with logic and without resorting to computation. It is not necessary to state the volumes of wine and water, as long as they are equal. The volume of the cup is irrelevant, as is any stirring of the mixtures. Solution Conservation of substance implies that the volume of wine in the barrel holding mostly water has to be equal to the volume of water in the barrel holding mostly wine. The mixtures can be visualised as separated into their water and wine components. To help in grasping this, the wine and water may be represented by, say, 100 red and 100 white marbles, respectively. If, say, 25 red marbles are mixed in with the white marbles, and 25 marbles of any color are returned to the red container, then there will again be 100 marbles in each container. If there are now x white marbles in the red container, then there must be x red marbles in the white container. The mixtures will therefore be of equal purity. History This puzzle was mentioned by W. W. Rouse Ball in the third, 1896, edition of his book Mathematical Recreations And Problems Of Past And Present Times, and is said to have been a favorite problem of Lewis Carroll. References Logic puzzles Thought experiments Chemical mixtures
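The marble argument can also be checked numerically. The following Python sketch is illustrative only and is not part of the puzzle's classical treatment; it assumes perfect mixing and arbitrary cup and barrel sizes, although the equal-purity result holds regardless of stirring.

```python
def exchange(wine_barrel, water_barrel, cup=0.25, rounds=10):
    """Repeatedly move a cup from one barrel to the other and back.

    Each barrel is a dict of {'wine': volume, 'water': volume}.
    A cup taken from a barrel carries wine and water in the same
    proportion as the barrel it is taken from (perfect mixing assumed).
    """
    def take(barrel, amount):
        total = barrel['wine'] + barrel['water']
        part = {k: v * amount / total for k, v in barrel.items()}
        for k in barrel:
            barrel[k] -= part[k]
        return part

    def pour(barrel, part):
        for k in part:
            barrel[k] += part[k]

    for _ in range(rounds):
        pour(water_barrel, take(wine_barrel, cup))   # cup of wine mixture into the water barrel
        pour(wine_barrel, take(water_barrel, cup))   # cup of mixture back into the wine barrel

    return wine_barrel, water_barrel

wine = {'wine': 1.0, 'water': 0.0}
water = {'wine': 0.0, 'water': 1.0}
exchange(wine, water)

# Wine in the water barrel always equals water in the wine barrel.
assert abs(wine['water'] - water['wine']) < 1e-12
print(wine, water)
```

The assertion holds after every round trip because whatever wine has left the first barrel must be sitting in the second, and the equal final volumes force the displaced water to match it.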
Wine/water mixing problem
[ "Chemistry" ]
414
[ "Chemical mixtures", "nan" ]
11,660,298
https://en.wikipedia.org/wiki/Libburnia
Libburnia is a project that develops a collection of libraries and command-line tools for burning CDs, DVDs and Blu-ray media. Project overview Libburnia is the name of a project to develop various pieces of disc recording software. libisofs is the library to create or modify ISO 9660 disc images. libburn is the underlying programming library. It is used by xorriso and cdrskin, and third-party disc recording applications can also use this library directly. libisoburn is an add-on to libburn and libisofs which coordinates both and also allows ISO 9660 filesystem images to be grown on multi-session and overwritable media. xorriso is a CLI application that creates, loads, manipulates and writes ISO 9660 filesystem images with Rock Ridge extensions. This package is part of the GNU Project. xorrisofs creates ISO 9660 + Rock Ridge disc images from local files, optionally with a Joliet directory tree. xorrecord writes disc images to physical discs. cdrskin is the end-user application of libburnia. It is CLI-only and its syntax is mostly identical to cdrecord, so that it can act as a drop-in replacement for existing front-ends. GNU xorriso Xorriso stands for X/Open, Rock Ridge ISO and is the main command-line tool included with libburnia. It allows both generation and (to some extent) update of image files as well as burning images to disc. xorriso copies file objects from POSIX-compliant filesystems into Rock Ridge-enhanced ISO 9660 filesystems and allows session-wise manipulation of such filesystems. It can load the management information of existing ISO images and it writes the session results to optical media or to filesystem objects. Vice versa, xorriso is able to copy file objects out of ISO 9660 filesystems. It provides a command-line interface for single operations as well as GNU Readline and Dialog-based interfaces. Uses The underlying libburn library is used directly as the sole recording back-end for Xfce's graphical Xfburn application, which has been included in the default installation of Xubuntu since version 10.10. GNOME's default disc recording application, Brasero, can use libburn directly without relying on the cdrecord compatibility of cdrskin. FlBurn is an FLTK application that uses libburn directly. cdrskin is similar to cdrecord and wodim, and can be used in place of those tools in GUI front-ends such as K3b. History The first public release (libburn-0.2.2) was in September 2006. The current stable version is 1.5.4, which was released on January 30, 2021. Features Blanking/formatting of CD-RW, DVD-RW, DVD+RW, DVD-RAM, BD Burning of data or audio tracks to CD, either in versatile Track-at-Once mode (TAO) or in Session-at-Once mode for seamless tracks. Multi-session on CD (follow-up sessions in TAO only) or on DVD-R[W] (in Incremental mode) or on DVD+R. Single session on DVD-RW or DVD-R (Disk-at-once) or on over-writable DVD+RW, DVD-RAM, BD-RE. Bus scan, burn-free, speed options, retrieving media info, padding, fifo. Works with SATA DVD drives. Write access to disc images. Use of UNIX device paths (/dev/hdX) on Linux. You do not need to be superuser for daily usage. See also cdrkit cdrtools dvd+rw-tools References External links Official website Sourceforge website Man page Optical disc authoring Free software projects
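As a rough illustration of how these tools are typically driven from a script (a sketch, not an official example): xorrisofs accepts mkisofs-style options such as -o, -R and -J, and cdrskin accepts cdrecord-style arguments such as dev= and blank=. The paths, image name and device below are placeholders.

```python
import subprocess

# Build a Rock Ridge + Joliet ISO 9660 image from a local directory with xorrisofs,
# then burn it with cdrskin using cdrecord-compatible syntax.
# "backup.iso", "/path/to/files" and "/dev/sr0" are placeholder values.
subprocess.run(["xorrisofs", "-o", "backup.iso", "-R", "-J", "/path/to/files"], check=True)
subprocess.run(["cdrskin", "-v", "dev=/dev/sr0", "blank=as_needed", "backup.iso"], check=True)
```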
Libburnia
[ "Technology" ]
840
[ "Multimedia", "Optical disc authoring" ]
11,660,481
https://en.wikipedia.org/wiki/Fibrillarin
rRNA 2'-O-methyltransferase fibrillarin is an enzyme that in humans is encoded by the FBL gene. Function This gene product is a component of a nucleolar small nuclear ribonucleoprotein (snRNP) particle thought to participate in the first step in processing pre-ribosomal (r)RNA. It is associated with the U3, U8, and U13 small nucleolar RNAs and is located in the dense fibrillar component (DFC) of the nucleolus. The encoded protein contains an N-terminal repetitive domain that is rich in glycine and arginine residues, like fibrillarins in other species. Its central region resembles an RNA-binding domain and contains an RNP consensus sequence. Antisera from approximately 8% of humans with the autoimmune disease scleroderma recognize fibrillarin. Fibrillarin is a component of several ribonucleoproteins including a nucleolar small nuclear ribonucleoprotein (SnRNP) and one of the two classes of small nucleolar ribonucleoproteins (snoRNPs). SnRNAs function in RNA splicing while snoRNPs function in ribosomal RNA processing. Fibrillarin is associated with U3, U8 and U13 small nuclear RNAs in mammals and is similar to the yeast NOP1 protein. Fibrillarin has a well conserved sequence of around 320 amino acids, and contains 3 domains, an N-terminal Gly/Arg-rich region; a central domain resembling other RNA-binding proteins and containing an RNP-2-like consensus sequence; and a C-terminal alpha-helical domain. An evolutionarily related pre-rRNA processing protein, which lacks the Gly/Arg-rich domain, has been found in various archaea. A study by Schultz et al. indicated that the K-turn binding 15.5-kDa protein (called Snu13 in yeast) interacts with spliceosome proteins hPRP31, hPRP3, hPRP4, CYPH and the small nucleolar ribonucleoproteins NOP56, NOP58, and fibrillarin. The 15.5-kDa protein has sequence similarity to other RNA-binding proteins such as ribosomal proteins S12, L7a, and L30 and the snoRNP protein NHP2. The U4/U6 snRNP contains 15.5-kDa protein. The 15.5-kDa protein also exists in a ribonucleoprotein complex that binds the U3 box B/C motif. The 15.5-kDa protein also exists as one of the four core proteins of the C/D small nucleolar ribonucleoprotein that mediates methylation of pre-ribosomal RNAs. Structural evidence supporting the idea that fibrillarin is the snoRNA methyltransferase has been reviewed. Interactions Fibrillarin has been shown to interact with DDX5 and SMN1. References Further reading Molecular biology
Fibrillarin
[ "Chemistry", "Biology" ]
670
[ "Biochemistry", "Molecular biology" ]
11,660,809
https://en.wikipedia.org/wiki/Mobile-ITX
Mobile-ITX is an x86-compliant motherboard form factor presented by VIA Technologies in December 2009; at the time of its introduction it was the smallest such form factor. The motherboard size (CPU module) is . There are no computer ports on the CPU module and it is necessary to use an I/O carrier board. The design is intended for medical, transportation and military embedded markets. History The Mobile-ITX form factor was announced by VIA Technologies at Computex in June 2007. The motherboard size of the first prototypes was . The design was intended for ultra-mobile computing devices such as smartphones or UMPCs. The prototype boards shown to date include an x86-compliant 1 GHz VIA C7-M processor, 256 or 512 megabytes of RAM, a modified version of the VIA CX700 chipset (called the CX700S), an interface for a cellular radio module (demonstration boards contain a CDMA radio), a DC-DC electrical converter, and various connecting interfaces. At the announcement, an ultra-mobile PC reference design was shown running Windows XP Embedded. Notes and references External links Mobile-ITX Specification Motherboard form factors IBM PC compatibles Mobile computers
Mobile-ITX
[ "Technology" ]
240
[ "Computing stubs", "Computer hardware stubs" ]
11,661,648
https://en.wikipedia.org/wiki/Kollsnes
Kollsnes is a natural gas processing plant operated by Equinor on the southern part of the island of Oøy in Øygarden Municipality in Vestland county, Norway. It processes the natural gas from the Troll, Kvitebjørn, and Visund gas fields. Kollsnes has a capacity of of natural gas per day. Operation At Kollsnes, the Natural gas liquids (NGL) are separated out of the gas. The dry gas is compressed and then shoved by large compressors out in the pipe systems that transport it to the customers. In 1999, it was decided that the gas from Kvitebjørn was to be landed at Kollsnes. The consistency of the gas from the field made it well suited to be reprocessed to upgraded products. The new plant that was built cost , with operations starting on 1 October 2004. Starting in October 2005, the gas from Visund is also landed at Kollsnes. With a capacity of gas per day and large flexibility, the new NGL plant can process gas from new fields that would be built. Though the Vestprosess gas pipeline, the plant at Kollsnes is linked to the plants at Mongstad, where the NGL from Kollsnes is fractioned into propane, butane, and naphtha. The gas from Kollsnes is transported through the four pipe systems Statpipe, Zeepipe, Europipe I, and Franpipe to continental Europe and supplies Austria, Belgium, France, Germany, the Netherlands, Spain, and the Czech Republic with gas. The pipes are owned by Gassled, operated by Gassco while the technical responsibility is handled by Equinor. Power Consumption In 2009, the electric power consumption of the plant was per year. This had increased from per year in 1996. References External links Equinor page on Kollsnes Natural gas plants Industrial parks in Norway Ports and harbours of Norway Natural gas industry in Norway Equinor Øygarden
Kollsnes
[ "Chemistry" ]
411
[ "Natural gas technology", "Natural gas plants" ]
11,661,752
https://en.wikipedia.org/wiki/W%C5%82odzimierz%20Ko%C5%82os
Włodzimierz Kołos (1928 - 1996) was a Polish chemist and physicist who was one of the founders of modern quantum chemistry, and pioneered accurate calculations on the electronic structure of molecules. Life and scientific work Kołos was born on September 6, 1928, in Pinsk. He received his M.Sc. in chemistry in 1950 and began his academic career as an organic chemist. However, he was soon attracted to theoretical physics. He began his graduate studies in theoretical physics in 1951 and completed his thesis in only two years. The University of Warsaw and the Polish Chemical Society award the Kołos Medal every two years to commemorate his life and career. Kołos is best known for his work on the theory of electron correlation in molecules. In 1958 he went the University of Chicago, at a time when powerful computers were first becoming available for scientific work. He developed a new computer program to solve the Schrödinger equation for the hydrogen molecule to unprecedented accuracy. In the early 1960s, Kołos and Wolniewicz published a number of pioneering papers on the potential energy curves of the hydrogen molecule, including several corrections to the Born–Oppenheimer approximation, including adiabatic, non-adiabatic, and relativistic terms. One result attracted particular attention: the calculated dissociation energy disagreed with the best experimental data then available, from Gerhard Herzberg’s group. A few years later Herzberg improved his experiment and obtained a new result that agreed with the theoretical prediction. This was the first time that quantum mechanical calculations on a molecule had proved more accurate than the best experiments. Herzberg himself emphasized the importance of this in his Nobel Prize lecture. Kołos established a strong research group in molecular quantum chemistry in Warsaw, and made many other important contributions, particularly in the field of intermolecular forces. He made important contributions to the development of the symmetry-adapted perturbation theory of intermolecular forces and carried out pioneering studies on the nonadditivity of intermolecular forces. He was a member of the Polish Academy of Sciences, the International Academy of Quantum Molecular Science and the Academia Europaea. Awards and recognition Sniadecki Medal Copernicus Medal Medal of the Israel Academy of Sciences and Humanities Alexander von Humboldt Award Jurzykowski Prize Swietoslawski Award Annual Medal of the International Academy of Quantum Molecular Science Honorary doctorate of the Adam Mickiewicz University References 1928 births 1996 deaths Polish chemists Members of the International Academy of Quantum Molecular Science Computational chemists Theoretical chemists Members of Academia Europaea People from Pinsk People from Polesie Voivodeship
Włodzimierz Kołos
[ "Chemistry" ]
536
[ "Quantum chemistry", "Physical chemists", "Computational chemists", "Theoretical chemistry", "Computational chemistry", "Theoretical chemists" ]
11,661,894
https://en.wikipedia.org/wiki/Ko%C5%82os%20Medal
The Kołos Medal (Polish: Medal im. Włodzimierza Kołosa) is a prestigious medal awarded every two years by the University of Warsaw and the Polish Chemical Society for distinction in theoretical or experimental physical chemistry. It was established in 1998 to commemorate the life and career of Włodzimierz Kołos, one of the founding fathers of modern quantum chemistry. The medal features the picture of Kołos, his date of birth and death, the Latin inscriptions Societas Chimica Polonorum, Universitas Varsoviensis and Servire Veritatis Kołos Lectio Praemiumque as well as the name of the recipient. Recipients The winners of the award so far have been: Source: Warsaw University See also List of chemistry awards References External links Kołos Medal page Chemistry awards Polish awards Polish science and technology awards Awards established in 1998
Kołos Medal
[ "Technology" ]
187
[ "Science and technology awards", "Chemistry awards", "Science award stubs" ]
11,661,976
https://en.wikipedia.org/wiki/Bermuda%20Atlantic%20Time-series%20Study
The Bermuda Atlantic Time-series Study (BATS) is a long-term oceanographic study by the Bermuda Institute of Ocean Sciences (BIOS). Based on regular (monthly or better) research cruises, it samples an area of the western Atlantic Ocean nominally at the coordinates . The cruise programme routinely samples physical properties such as ocean temperature and salinity, but focuses on variables of biological or biogeochemical interest including: nutrients (nitrate, nitrite, phosphate and silicic acid), dissolved inorganic carbon, oxygen, HPLC of pigments, primary production and sediment trap flux. The BATS cruises began in 1988 but are supplemented by biweekly Hydrostation "S" cruises to a neighbouring location () that began in 1954. The data collected by these cruises are available online. Scientific Findings Between 1998 and 2013, research conducted at BATS has generated over 450 peer-reviewed articles. Among the findings are measurements showing the gradual acidification of the surface ocean, where surface water pH, carbonate ion concentration, and the saturation state for calcium carbonate minerals, such as aragonite, have all decreased since 1998. Additionally, studies at BATS have shown changes in the Revelle factor, suggesting that the capacity of North Atlantic Ocean surface waters to absorb carbon dioxide has diminished, even as seawater pCO2 has kept pace with increasing atmospheric pCO2. See also Hawaii Ocean Time-series (HOT) Weather ship References External links BATS homepage and dataserver Aquatic ecology Biological oceanography Chemical oceanography Geochemistry Oceanography Physical oceanography Oceanographic Time-Series
Bermuda Atlantic Time-series Study
[ "Physics", "Chemistry", "Biology", "Environmental_science" ]
319
[ "Hydrology", "Applied and interdisciplinary physics", "Oceanography", "Chemical oceanography", "Physical oceanography", "Geochemistry stubs", "nan", "Ecosystems", "Aquatic ecology" ]
11,662,080
https://en.wikipedia.org/wiki/CCIR%20%28selcall%29
There are many types and formats of CCIR Selcall. For example, CCIR 493-4 is a standard format for HF Selcall for Land Mobile applications. CCIR (Consultative Committee on International Radio) functions have largely been taken over by ITU-R. One common type of CCIR selcall used in VHF and UHF FM two-way radio communications, is a 5-tone selective calling system mainly found in some European countries and used by the Swedish Police and the Turkish Police. The tone duration of a 5 tone CCIR selcall is 100 milliseconds (± 10 ms) and the tones are transmitted sequentially. References Radio technology Telephony signals
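For illustration, a sequential 5-tone burst with the 100 ms tone duration mentioned above could be synthesised as in the Python sketch below. The digit-to-frequency table used here is a placeholder, not the actual CCIR tone plan, and the sample rate is an arbitrary choice.

```python
import math
import array

SAMPLE_RATE = 8000          # samples per second (arbitrary choice for the sketch)
TONE_MS = 100               # CCIR tone duration in milliseconds

# Placeholder digit-to-frequency map (Hz); the real CCIR tone table differs.
FREQS = {"1": 1100.0, "2": 1200.0, "3": 1300.0, "4": 1400.0, "5": 1500.0}

def selcall_burst(digits: str) -> array.array:
    """Return 16-bit PCM samples for a sequential tone burst, one tone per digit."""
    samples = array.array("h")
    samples_per_tone = SAMPLE_RATE * TONE_MS // 1000
    for d in digits:
        f = FREQS[d]
        for i in range(samples_per_tone):
            samples.append(int(32767 * 0.5 * math.sin(2 * math.pi * f * i / SAMPLE_RATE)))
    return samples

burst = selcall_burst("12345")      # five sequential 100 ms tones
```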
CCIR (selcall)
[ "Technology", "Engineering" ]
144
[ "Information and communications technology", "Telecommunications engineering", "Radio technology" ]
11,662,151
https://en.wikipedia.org/wiki/Watertable%20control
In geotechnical engineering, watertable control is the practice of controlling the height of the water table by drainage. Its main applications are in agricultural land (to improve the crop yield using agricultural drainage systems) and in cities to manage the extensive underground infrastructure that includes the foundations of large buildings, underground transit systems, and extensive utilities (water supply networks, sewerage, storm drains, and underground electrical grids). Description and definitions Subsurface land drainage aims at controlling the water table of the groundwater in originally waterlogged land at a depth acceptable for the purpose for which the land is used. The depth of the water table with drainage is greater than without. Purpose In agricultural land drainage, the purpose of water table control is to establish a depth of the water table (Figure 1) that no longer interferes negatively with the necessary farm operations and crop yields (Figure 2, made with the SegReg model, see the page: segmented regression). In addition, land drainage can help with soil salinity control. The soil's hydraulic conductivity plays an important role in drainage design. The development of agricultural drainage criteria is required to give the designer and manager of the drainage system a target to achieve in terms of maintenance of an optimum depth of the water table. Optimization Optimization of the depth of the water table is related to the benefits and costs of the drainage system (Figure 3). The shallower the permissible depth of the water table, the lower the cost of the drainage system to be installed to achieve this depth. However, the lowering of the originally too shallow depth by land drainage entails side effects. These have also to be taken into account, including the costs of mitigation of negative side effects. The optimization of drainage design and the development of drainage criteria are discussed in the article on drainage research. Figure 4 shows an example of the effect of drain depth on soil salinity and various irrigation/drainage parameters as simulated by the SaltMod program. History Historically, agricultural land drainage started with the digging of relatively shallow open ditches that received both runoff from the land surface and outflow of groundwater. Hence the ditches had a surface as well as a subsurface drainage function. By the end of the 19th century and early in the 20th century it was felt that the ditches were a hindrance for the farm operations and the ditches were replaced by buried lines of clay pipes (tiles), each tile about 30 cm long. Hence the term "tile drainage". Since 1960, one started using long, flexible, corrugated plastic (PVC or PE) pipes that could be installed efficiently in one go by trenching machines. The pipes could be pre-wrapped with an envelope material, like synthetic fibre and geotextile, that would prevent the entry of soil particles into the drains. Thus, land drainage became a powerful industry. At the same time agriculture was steering towards maximum productivity, so that the installation of drainage systems came in full swing. Environment As a result of large scale developments, many modern drainage projects were over-designed, while the negative environmental side effects were ignored. 
In circles with environmental concern, the profession of land drainage acquired a poor reputation, sometimes justifiably, sometimes not, notably when land drainage was confused with the more encompassing activity of wetland reclamation. Nowadays, in some countries, the earlier hard-line trend towards maximum drainage has been reversed. Further, checked or controlled drainage systems have been introduced, as shown in Figure 5 and discussed on the page: Drainage system (agriculture). Drainage design The design of subsurface drainage systems in terms of layout, depth and spacing of the drains is often done using subsurface drainage equations with parameters like drain depth, depth of the water table, soil depth, hydraulic conductivity of the soil and drain discharge; a sketch of such a computation is given below. The drain discharge is found from an agricultural water balance. The computations can be done using computer models like EnDrain, which uses the hydraulic equivalent of Joule's law in electricity. Drainage by wells Subsurface drainage of groundwater can also be accomplished by pumped wells (vertical drainage, in contrast to horizontal drainage). Drainage wells have been used extensively in the Salinity Control and Reclamation Program (SCARP) in the Indus valley of Pakistan. Although the experiences were not overly successful, the feasibility of this technique in areas with deep and permeable aquifers is not to be discarded. The well spacings in these areas can be so wide (more than 1000 m) that the installation of vertical drainage systems could be relatively cheap compared to horizontal subsurface drainage (drainage by pipes, ditches, or trenches at a spacing of 100 m or less). For the design of a well field for control of the water table, the WellDrain model may be helpful. Classification A classification of drainage systems is found in the article Drainage system (agriculture). Effects on crop yield Most crops need a water table at or below a minimum depth. For some important food and fiber crops a classification of minimum depths was made, because at shallower depths the crop suffers a yield decline. See also Dewatering Subsurface dyke References External links American Wick Drain- Manufacturer of strip drain used in watertable management Chapters of ILRI publication 16 on "Drainage Principles and Applications" can be viewed in: https://edepot.wur.nl/262058 Website on Waterlogging and Land Drainage : Various articles on Waterlogging and Land Drainage can be freely downloaded from : For answers to frequently asked questions on Waterlogging and Land Drainage see : Reports and case studies on Waterlogging and Land Drainage can be consulted at : Software on Waterlogging and Land Drainage can be freely downloaded from : A model of subsurface groundwater drainage for water table and soil salinity control (SaltMod) can be freely downloaded from : The combination of SaltMod with a polygonal model of groundwater flow (SahysMod) can be freely downloaded from : Drainage Aquifers Land management Land reclamation Water management Hydraulic engineering Water and the environment
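The drain-spacing computation referred to in the design section can be illustrated with the classic steady-state Hooghoudt relation, in which the design discharge q is balanced against the hydraulic head h midway between the drains. The Python sketch below is a simplified illustration under homogeneous-soil assumptions, not the EnDrain or WellDrain algorithm; the parameter values are arbitrary, and in practice the equivalent depth itself depends on the spacing and is found iteratively.

```python
import math

def hooghoudt_spacing(q, h, k_above, k_below, d_equiv):
    """Drain spacing L (m) from the steady-state Hooghoudt equation.

    q        : design drain discharge (m/day)
    h        : hydraulic head midway between drains (m)
    k_above  : hydraulic conductivity above drain level (m/day)
    k_below  : hydraulic conductivity below drain level (m/day)
    d_equiv  : Hooghoudt equivalent depth of the layer below the drains (m),
               taken here as a given input for simplicity

    Solves q = (8*k_below*d_equiv*h + 4*k_above*h**2) / L**2 for L.
    """
    return math.sqrt((8 * k_below * d_equiv * h + 4 * k_above * h**2) / q)

# Example with illustrative values: 7 mm/day design discharge, 0.5 m head.
L = hooghoudt_spacing(q=0.007, h=0.5, k_above=0.5, k_below=1.0, d_equiv=3.0)
print(f"Drain spacing is roughly {L:.0f} m")
```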
Watertable control
[ "Physics", "Engineering", "Environmental_science" ]
1,233
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Aquifers", "Hydraulic engineering" ]
11,662,323
https://en.wikipedia.org/wiki/Land%20drainage%20%28disambiguation%29
REDIRECT Drainage Hydrology Land management
Land drainage (disambiguation)
[ "Chemistry", "Engineering", "Environmental_science" ]
73
[ "Hydrology", "Environmental engineering" ]
11,662,795
https://en.wikipedia.org/wiki/K%C3%A5rst%C3%B8
Kårstø is an industrial facility located near the village of Susort, along the Boknafjorden, in the municipality of Tysvær in Rogaland county, Norway. The site features a number of natural gas processing plants that refine natural gas and condensate from the fields in the northern parts of the North Sea, including the Åsgard, Mikkel, and Sleipner gas fields. The Kårstø processing complex is Europe's biggest export port for natural gas liquids (NGL) and the third largest in the world. The industrial site is also the location for the now-closed Kårstø Power Station. Operation The first plant on the site opened on 25 July 1985 and it exported the first gas to Germany on October 15 of that year. Gas is transported from the North Sea via Statpipe and Åsgard Transport. Condensate is received from the Sleipner field and stabilised and fractionated in a separate plant that started operation in 1993. About of stabilised condensate are exported from Kårstø each year, by ship. Natural Gas Liquids (NGL) are separated from the rest of the gas and split into propane, butane, isobutane, naphtha, and ethane. The propane is stored in two large mountain halls with a total capacity of . The rest of the refined products are stored in tanks. The facility is the third largest export port for Liquefied Petroleum Gas (LPG) in the world and exported all around the globe. In 2002, 575 shiploads of LPG, ethane, naphtha, and stabilised condensate were sent. Gassnova has been engaged for the development of technology for full-scale CO2-capture at the gas-fired power plant and large-scale transport and geological storage of CO2 from Kårstø. Annual ethane production is and this is sold on long term agreements to the companies Borealis, I/S Noretyl, and Norsk Hydro. Dry gas is exported via Europipe II to Dornum in Germany and via Statpipe and Norpipe to Emden. The pipes are owned by Gassled and operated by Gassco. The company Naturkraft (owned by Statkraft and Statoil) operated the Kårstø Power Station, a natural gas-fired thermal power plant at Kårstø from 2007 until 2014. References Related Reading Natural gas plants Industrial parks in Norway Ports and harbours of Norway Natural gas industry in Norway Equinor Tysvær
Kårstø
[ "Chemistry" ]
531
[ "Natural gas technology", "Natural gas plants" ]
11,662,962
https://en.wikipedia.org/wiki/-ine
-ine is a suffix used in chemistry to denote two kinds of substance. The first is a chemically basic and alkaloidal substance. It was proposed by Joseph Louis Gay-Lussac in an editorial accompanying a paper by Friedrich Sertürner describing the isolation of the alkaloid "morphium", which was subsequently renamed to "morphine". Examples include quinine, morphine and guanidine. The second usage is to denote a hydrocarbon of the second degree of unsaturation. Examples include hexine and heptine. With simple hydrocarbons, this usage is identical to the IUPAC suffix -yne. In common and literary adjectives (e.g. asinine, canine, feline, ursine), the suffix is usually pronounced or in some words alternatively . For demonyms (e.g. Levantine, Byzantine, Argentine) it is usually or . But in chemistry, it is usually pronounced or depending on the word it appears in and the accent of the speaker. In a few words (for example, quinine, iodine and strychnine), the sound is normal in some accents. Gasoline ends with ; glycerine more often with than with . In caffeine, the suffix has merged with the e in the root, for stressed ; in gasoline and margarine as well the suffix is stressed by some people. Some elements of the periodic table (namely the halogens, in the Group 17) have this suffix: fluorine (F), chlorine (Cl), bromine (Br), iodine (I) and astatine (At), ending which was continued in the artificially created tennessine (Ts). The suffix -in () is etymologically related and overlaps in usage with -ine. Many proteins and lipids have names ending with -in: for example, the enzymes pepsin and trypsin, the hormones insulin and gastrin, and the lipids stearin (stearine) and olein. References ine English suffixes
-ine
[ "Chemistry" ]
441
[ "Chemistry suffixes" ]
11,663,107
https://en.wikipedia.org/wiki/Motorola%20C168/C168i
The Motorola C168/C168i is a low-cost 850/1900-band GSM mobile phone, made by Motorola. It was released in the fourth quarter of 2005. Main Features Downloadable wallpaper, screensaver and ringtones MMS and SMS WAP 2.0 and GPRS for Internet access FM stereo radio References External links Product page on Motorola website C168 Mobile phones introduced in 2005
Motorola C168/C168i
[ "Technology" ]
86
[ "Mobile technology stubs", "Mobile phone stubs" ]
11,663,321
https://en.wikipedia.org/wiki/Gossip%20protocol
A gossip protocol or epidemic protocol is a procedure or process of computer peer-to-peer communication that is based on the way epidemics spread. Some distributed systems use peer-to-peer gossip to ensure that data is disseminated to all members of a group. Some ad-hoc networks have no central registry and the only way to spread common data is to rely on each member to pass it along to their neighbors. Communication The concept of gossip communication can be illustrated by the analogy of office workers spreading rumors. Let's say each hour the office workers congregate around the water cooler. Each employee pairs off with another, chosen at random, and shares the latest gossip. At the start of the day, Dave starts a new rumor: he comments to Bob that he believes that Charlie dyes his mustache. At the next meeting, Bob tells Alice, while Dave repeats the idea to Eve. After each water cooler rendezvous, the number of individuals who have heard the rumor roughly doubles (though this doesn't account for gossiping twice to the same person; perhaps Dave tries to tell the story to Frank, only to find that Frank already heard it from Alice). Computer systems typically implement this type of protocol with a form of random "peer selection": with a given frequency, each machine picks another machine at random and shares any rumors. Variants and styles There are probably hundreds of variants of specific gossip-like protocols because each use-scenario is likely to be customized to the organization's specific needs. For example, a gossip protocol might employ some of these ideas: The core of the protocol involves periodic, pairwise, inter-process interactions. The information exchanged during these interactions is of bounded size. When agents interact, the state of at least one agent changes to reflect the state of the other. Reliable communication is not assumed. The frequency of the interactions is low compared to typical message latencies so that the protocol costs are negligible. There is some form of randomness in the peer selection. Peers might be selected from the full set of nodes or from a smaller set of neighbors. Due to the replication there is an implicit redundancy of the delivered information. Protocol types It is useful to distinguish two prevailing styles of gossip protocol: Dissemination protocols (or rumor-mongering protocols). These use gossip to spread information; they basically work by flooding agents in the network, but in a manner that produces bounded worst-case loads: Event dissemination protocols use gossip to carry out multicasts. They report events, but the gossip occurs periodically and events don't actually trigger the gossip. One concern here is the potentially high latency from when the event occurs until it is delivered. Background data dissemination protocols continuously gossip about information associated with the participating nodes. Typically, propagation latency isn't a concern, perhaps because the information in question changes slowly or there is no significant penalty for acting upon slightly stale data. Protocols that compute aggregates. These compute a network-wide aggregate by sampling information at the nodes in the network and combining the values to arrive at a system-wide value – the largest value for some measurement nodes are making, smallest, etc. 
The key requirement is that the aggregate must be computable by fixed-size pairwise information exchanges; these typically terminate after a number of rounds of information exchange logarithmic in the system size, by which time an all-to-all information flow pattern will have been established. As a side effect of aggregation, it is possible to solve other kinds of problems using gossip; for example, there are gossip protocols that can arrange the nodes in a gossip overlay into a list sorted by node-id (or some other attribute) in logarithmic time using aggregation-style exchanges of information. Similarly, there are gossip algorithms that arrange nodes into a tree and compute aggregates such as "sum" or "count" by gossiping in a pattern biased to match the tree structure. Many protocols that predate the earliest use of the term "gossip" fall within this rather inclusive definition. For example, Internet routing protocols often use gossip-like information exchanges. A gossip substrate can be used to implement a standard routed network: nodes "gossip" about traditional point-to-point messages, effectively pushing traffic through the gossip layer. Bandwidth permitting, this implies that a gossip system can potentially support any classic protocol or implement any classical distributed service. However, such a broadly inclusive interpretation is rarely intended. More typically gossip protocols are those that specifically run in a regular, periodic, relatively lazy, symmetric and decentralized manner; the high degree of symmetry among nodes is particularly characteristic. Thus, while one could run a 2-phase commit protocol over a gossip substrate, doing so would be at odds with the spirit, if not the wording, of the definition. The term convergently consistent is sometimes used to describe protocols that achieve exponentially rapid spread of information. For this purpose, a protocol must propagate any new information to all nodes that will be affected by the information within time logarithmic in the size of the system (the "mixing time" must be logarithmic in system size). Examples Suppose that we want to find the object that most closely matches some search pattern, within a network of unknown size, but where the computers are linked to one another and where each machine is running a small agent program that implements a gossip protocol. To start the search, a user would ask the local agent to begin to gossip about the search string. (We're assuming that agents either start with a known list of peers, or retrieve this information from some kind of a shared store.) Periodically, at some rate (let's say ten times per second, for simplicity), each agent picks some other agent at random, and gossips with it. Search strings known to A will now also be known to B, and vice versa. In the next "round" of gossip A and B will pick additional random peers, maybe C and D. This round-by-round doubling phenomenon makes the protocol very robust, even if some messages get lost, or some of the selected peers are the same or already know about the search string. On receipt of a search string for the first time, each agent checks its local machine for matching documents. Agents also gossip about the best match, to date. Thus, if A gossips with B, after the interaction, A will know of the best matches known to B, and vice versa. Best matches will "spread" through the network. If the messages might get large (for example, if many searches are active all at the same time), a size limit should be introduced. 
Also, searches should "age out" of the network. It follows that within logarithmic time in the size of the network (the number of agents), any new search string will have reached all agents. Within an additional delay of the same approximate length, every agent will learn where the best match can be found. In particular, the agent that started the search will have found the best match. For example, in a network with 25,000 machines, we can find the best match after about 30 rounds of gossip: 15 to spread the search string and 15 more to discover the best match. A gossip exchange could occur as often as once every tenth of a second without imposing undue load, hence this form of network search could search a big data center in about three seconds. In this scenario, searches might automatically age out of the network after, say, 10 seconds. By then, the initiator knows the answer and there is no point in further gossip about that search. Gossip protocols have also been used for achieving and maintaining distributed database consistency or with other types of data in consistent states, counting the number of nodes in a network of unknown size, spreading news robustly, organizing nodes according to some structuring policy, building so-called overlay networks, computing aggregates, sorting the nodes in a network, electing leaders, etc. Epidemic algorithms Gossip protocols can be used to propagate information in a manner rather similar to the way that a viral infection spreads in a biological population. Indeed, the mathematics of epidemics are often used to model the mathematics of gossip communication. The term epidemic algorithm is sometimes employed when describing a software system in which this kind of gossip-based information propagation is employed. See also Gossip protocols are just one class among many classes of networking protocols. See also virtual synchrony, distributed state machines, Paxos algorithm, database transactions. Each class contains tens or even hundreds of protocols, differing in their details and performance properties but similar at the level of the guarantees offered to users. Some gossip protocols replace the random peer selection mechanism with a more deterministic scheme. For example, in the NeighbourCast algorithm, instead of talking to random nodes, information is spread by talking only to neighbouring nodes. There are a number of algorithms that use similar ideas. A key requirement when designing such protocols is that the neighbor set trace out an expander graph. Routing Tribler, BitTorrent peer-to-peer client using gossip protocol. References Further reading Systematic Design of P2P Technologies for Distributed Systems. Indranil Gupta, Global Data Management, eds: R. Baldoni, G. Cortese, F. Davide and A. Melpignano, 2006. Ordered slicing of very large overlay networks. Márk Jelasity and Anne-Marie Kermarrec. IEEE P2P, 2006. Proximity-aware superpeer overlay topologies. Gian Paolo Jesi, Alberto Montresor, and Ozalp Babaoglu. IEEE Transactions on Network and Service Management, 4(2):74–83, September 2007. X-BOT: A Protocol for Resilient Optimization of Unstructured Overlays. João Leitão, João Marques, José Pereira, Luís Rodrigues. Proc. 28th IEEE International Symposium on Reliable Distributed Systems (SRDS'09). Spatial gossip and resource location protocols. David Kempe, Jon Kleinberg, Alan Demers. Journal of the ACM (JACM) 51: 6 (Nov 2004). Gossip-Based Computation of Aggregate Information. David Kempe, Alin Dobra, Johannes Gehrke. Proc. 
44th Annual IEEE Symposium on Foundations of Computer Science (FOCS). 2003. Active and Passive Techniques for Group Size Estimation in Large-Scale and Dynamic Distributed Systems. Dionysios Kostoulas, Dimitrios Psaltoulis, Indranil Gupta, Ken Birman, Al Demers. Elsevier Journal of Systems and Software, 2007. Build One, Get One Free: Leveraging the Coexistence of Multiple P2P Overlay Networks. Balasubramaneyam Maniymaran, Marin Bertier and Anne-Marie Kermarrec. Proc. ICDCS, June 2007. Peer counting and sampling in overlay networks: random walk methods. Laurent Massoulié, Erwan Le Merrer, Anne-Marie Kermarrec, Ayalvadi Ganesh. Proc. 25th ACM PODC. Denver, 2006. Chord on Demand. Alberto Montresor, Márk Jelasity, and Ozalp Babaoglu. Proc. 5th Conference on Peer-to-Peer Computing (P2P), Konstanz, Germany, August 2005. Building low-diameter P2P networks. G. Pandurangan, P. Raghavan, Eli Upfal. In Proceedings of the 42nd Symposium on Foundations of Computer Science (FOCS), 2001. Network architecture Distributed computing
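The round-by-round doubling described in the search example can be reproduced with a few lines of simulation. The sketch below is illustrative and not drawn from any particular implementation; the network size, the push-only style, and the fan-out of one peer per round are assumptions.

```python
import math
import random

def push_gossip_rounds(n_nodes: int, seed: int = 0) -> int:
    """Simulate push gossip: each informed node tells one random peer per round.

    Returns the number of rounds until every node has heard the rumor.
    """
    rng = random.Random(seed)
    informed = {0}                      # node 0 starts the rumor
    rounds = 0
    while len(informed) < n_nodes:
        rounds += 1
        pushes = len(informed)          # every currently informed node gossips once this round
        for _ in range(pushes):
            informed.add(rng.randrange(n_nodes))   # peer may already know, or be the sender itself
    return rounds

n = 25_000
r = push_gossip_rounds(n)
print(f"{n} nodes fully informed after {r} rounds (log2(n) is about {math.log2(n):.1f})")
```

For 25,000 nodes the loop typically finishes in a few tens of rounds, consistent with the logarithmic spreading time discussed above.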
Gossip protocol
[ "Engineering" ]
2,377
[ "Network architecture", "Computer networks engineering" ]
11,663,522
https://en.wikipedia.org/wiki/%CE%A0%20pad
The Π pad (pi pad) is a specific type of attenuator circuit in electronics whereby the topology of the circuit is formed in the shape of the Greek capital letter pi (Π). Attenuators are used in electronics to reduce the level of a signal. They are also referred to as pads due to their effect of padding down a signal by analogy with acoustics. Attenuators have a flat frequency response attenuating all frequencies equally in the band they are intended to operate. The attenuator has the opposite task of an amplifier. The topology of an attenuator circuit will usually follow one of the simple filter sections. However, there is no need for more complex circuitry, as there is with filters, due to the simplicity of the frequency response required. Circuits are required to be balanced or unbalanced depending on the geometry of the transmission lines with which they are to be used. For radio frequency applications, the format is often unbalanced, such as coaxial. For audio and telecommunications, balanced circuits are usually required, such as with the twisted pair format. The Π pad is intrinsically an unbalanced circuit. However, it can be converted to a balanced circuit by placing half the series resistance in the return path. Such a circuit is called a box section because the circuit is formed in the shape of a box. Terminology An attenuator is a form of a two-port network with a generator connected to one port and a load connected to the other. In all of the circuits given below it is assumed that the generator and load impedances are purely resistive (though not necessarily equal) and that the attenuator circuit is required to perfectly match to these. The symbols used for these impedances are; the impedance of the generator the impedance of the load Popular values of impedance are 600Ω in telecommunications and audio, 75Ω for video and dipole antennae, and 50Ω for RF. The voltage transfer function, A, is, While the inverse of this is the loss, L, of the attenuator, The value of attenuation is normally marked on the attenuator as its loss, LdB, in decibels (dB). The relationship with L is; Popular values of attenuator are 3dB, 6dB, 10dB, 20dB, and 40dB. However, it is often more convenient to express the loss in nepers, where is the attenuation in nepers (one neper is approximately 8.7 dB). Impedance and loss The values of resistance of the attenuator's elements can be calculated using image parameter theory. The starting point here is the image impedances of the L section in figure 2. The image admittance of the input is, and the image impedance of the output is, The loss of the L section when terminated in its image impedances is, where the image parameter transmission function, γL is given by, The loss of this L section in the reverse direction is given by, For an attenuator, Z and Y are simple resistors and γ becomes the image parameter attenuation (that is, the attenuation when terminated with the image impedances) in nepers. A Π pad can be viewed as being two L sections back-to-back as shown in figure 3. Most commonly, the generator and load impedances are equal so that and a symmetrical Π pad is used. In this case, the impedance matching terms inside the square roots all cancel and, Substituting Z and Y for the corresponding resistors, These equations can easily be extended to non-symmetrical cases. Resistor values The equations above find the impedance and loss for an attenuator with given resistor values. 
The usual requirement in a design is the other way around – the resistor values for a given impedance and loss are needed. These can be found by transposing and substituting the last two equations above; If with O pad The unbalanced pi pad can be converted to a balanced O pad by putting one half of Rz in each side of a balanced line. The simple four element O pad attenuates the differential mode signal but does little to attenuate any common mode signal. To ensure attenuation of the common mode signal also, a split O pad can be created by splitting and grounding Rx and Ry. Conversion of two-port to pi pad If a passive two-port can be expressed with admittance parameters, then that two-port is equivalent to a pi pad. In general, the admittance parameters are frequency dependent and not necessarily resistive. In that case the elements of the pi pad would not be simple components. However, in the case where the two-port is purely resistive or substantially resistive over the frequency range of interest, then the two-port can be replaced with a pi pad made of resistors. Conversion of tee pad to pi pad Pi pads and tee pads are easily converted back and forth. If one of the pads is composed of only resistors then the other is also composed entirely of resistors. See also T pad L pad References Matthaei, Young, Jones, Microwave Filters, Impedance-Matching Networks, and Coupling Structures, pp. 41–45, 4McGraw-Hill 1964. Redifon Radio Diary, 1970, pp. 49–60, William Collins Sons & Co, 1969. Analog circuits Electronic design Linear electronic circuits Resistive components
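A numerical illustration of the design direction described above (impedance and loss given, resistor values wanted). The Python sketch below uses the widely tabulated formulas for a symmetrical Π pad between equal terminations; it is offered as a consistency check rather than as a transcription of the article's image-parameter derivation.

```python
def pi_pad(z0: float, loss_db: float):
    """Resistor values for a symmetrical Pi attenuator.

    z0      : matched source/load impedance in ohms
    loss_db : desired attenuation in decibels

    Returns (shunt_ohms, series_ohms): the two equal shunt arms and the
    single series arm, from the standard matched-Pi design formulas.
    """
    k = 10 ** (loss_db / 20)            # voltage ratio corresponding to the loss
    shunt = z0 * (k + 1) / (k - 1)
    series = z0 * (k * k - 1) / (2 * k)
    return shunt, series

# 10 dB pad in a 50 ohm system: roughly 96.2 ohm shunt arms and a 71.2 ohm series arm,
# the values normally quoted in attenuator tables.
print(pi_pad(50, 10))
```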
Π pad
[ "Physics", "Engineering" ]
1,139
[ "Physical quantities", "Electronic design", "Analog circuits", "Resistive components", "Electronic engineering", "Design", "Electrical resistance and conductance" ]
11,664,140
https://en.wikipedia.org/wiki/Software%20Engineering%20for%20Adaptive%20and%20Self-Managing%20Systems
The Workshop on Software Engineering for Adaptive and Self-Managing Systems (SEAMS) is an academic conference for exchanging research results and experiences in the areas of autonomic computing, self-managing, self-healing, self-optimizing, self-configuring, and self-adaptive systems theory. It was established in 2006 at the International Conference on Software Engineering (ICSE). It integrated workshops held mainly at ICSE and the Foundations of Software Engineering (FSE) conference since 2002, including the FSE 2002 and 2004 Workshops on Self-Healing (Self-Managed) Systems (WOSS), ICSE 2005 Workshop on Design and Evolution of Autonomic Application Software, and the ICSE 2002, 2003, 2004 and 2005 Workshops on Architecting Dependable Systems. References External links ICSE 2012 SEAMS ICSE 2011 SEAMS ICSE 2010 SEAMS ICSE 2009 SEAMS ICSE 2008 SEAMS ICSE 2007 SEAMS ICSE 2006 SEAMS SEAMS 2007 Organizer Information IEEE International Conference on Autonomic Computing (ICAC) Software engineering conferences
Software Engineering for Adaptive and Self-Managing Systems
[ "Engineering" ]
219
[ "Software engineering", "Software engineering conferences" ]
11,664,412
https://en.wikipedia.org/wiki/Context-adaptive%20binary%20arithmetic%20coding
Context-adaptive binary arithmetic coding (CABAC) is a form of entropy encoding used in the H.264/MPEG-4 AVC and High Efficiency Video Coding (HEVC) standards. It is a lossless compression technique, although the video coding standards in which it is used are typically for lossy compression applications. CABAC is notable for providing much better compression than most other entropy encoding algorithms used in video encoding, and it is one of the key elements that provides the H.264/AVC encoding scheme with better compression capability than its predecessors. In H.264/MPEG-4 AVC, CABAC is only supported in the Main and higher profiles (but not the extended profile) of the standard, as it requires a larger amount of processing to decode than the simpler scheme known as context-adaptive variable-length coding (CAVLC) that is used in the standard's Baseline profile. CABAC is also difficult to parallelize and vectorize, so other forms of parallelism (such as spatial region parallelism) may be coupled with its use. In HEVC, CABAC is used in all profiles of the standard. Algorithm CABAC is based on arithmetic coding, with a few innovations and changes to adapt it to the needs of video encoding standards: It encodes binary symbols, which keeps the complexity low and allows probability modelling for more frequently used bits of any symbol. The probability models are selected adaptively based on local context, allowing better modelling of probabilities, because coding modes are usually locally well correlated. It uses a multiplication-free range division by the use of quantized probability ranges and probability states. CABAC has multiple probability modes for different contexts. It first converts all non-binary symbols to binary. Then, for each bit, the coder selects which probability model to use, then uses information from nearby elements to optimize the probability estimate. Arithmetic coding is finally applied to compress the data. The context modeling provides estimates of conditional probabilities of the coding symbols. Utilizing suitable context models, a given inter-symbol redundancy can be exploited by switching between different probability models according to already-coded symbols in the neighborhood of the current symbol to encode. The context modeling is responsible for most of CABAC's roughly 10% savings in bit rate over the CAVLC entropy coding method. Coding a data symbol involves the following stages. Binarization: CABAC uses Binary Arithmetic Coding which means that only binary decisions (1 or 0) are encoded. A non-binary-valued symbol (e.g. a transform coefficient or motion vector) is "binarized" or converted into a binary code prior to arithmetic coding. This process is similar to the process of converting a data symbol into a variable length code but the binary code is further encoded (by the arithmetic coder) prior to transmission. Stages are repeated for each bit (or "bin") of the binarized symbol. Context model selection: A "context model" is a probability model for one or more bins of the binarized symbol. This model may be chosen from a selection of available models depending on the statistics of recently coded data symbols. The context model stores the probability of each bin being "1" or "0". Arithmetic encoding: An arithmetic coder encodes each bin according to the selected probability model. Note that there are just two sub-ranges for each bin (corresponding to "0" and "1"). Probability update: The selected context model is updated based on the actual coded value (e.g. 
if the bin value was "1", the frequency count of "1"s is increased). Example 1. Binarize the value MVDx, the motion vector difference in the x direction. The first bit of the binarized codeword is bin 1; the second bit is bin 2; and so on. 2. Choose a context model for each bin. One of 3 models is selected for bin 1, based on previously coded MVD values. The L1 norm ek of two previously coded MVD values (the sum of their absolute values) is calculated. If ek is small, then there is a high probability that the current MVD will have a small magnitude; conversely, if ek is large then it is more likely that the current MVD will have a large magnitude. We select a probability table (context model) accordingly. The remaining bins are coded using one of 4 further context models. 3. Encode each bin. The selected context model supplies two probability estimates: the probability that the bin contains "1" and the probability that the bin contains "0". These estimates determine the two sub-ranges that the arithmetic coder uses to encode the bin. 4. Update the context models. For example, if context model 2 was selected for bin 1 and the value of bin 1 was "0", the frequency count of "0"s is incremented. This means that the next time this model is selected, the probability of a "0" will be slightly higher. When the total number of occurrences of a model exceeds a threshold value, the frequency counts for "0" and "1" will be scaled down, which in effect gives higher priority to recent observations. The arithmetic decoding engine The arithmetic decoder is described in some detail in the Standard. It has three distinct properties: Probability estimation is performed by a transition process between 64 separate probability states for "Least Probable Symbol" (LPS, the least probable of the two binary decisions "0" or "1"). The range representing the current state of the arithmetic coder is quantized to a small range of pre-set values before calculating the new range at each step, making it possible to calculate the new range using a look-up table (i.e. multiplication-free). A simplified encoding and decoding process is defined for data symbols with a near uniform probability distribution. The definition of the decoding process is designed to facilitate low-complexity implementations of arithmetic encoding and decoding. Overall, CABAC provides improved coding efficiency compared with CAVLC-based coding, at the expense of greater computational complexity. History In 1986, IBM researchers Kottappuram M. A. Mohiuddin and Jorma Johannes Rissanen filed a patent for a multiplication-free binary arithmetic coding algorithm. In 1988, an IBM research team including R.B. Arps, T.K. Truong, D.J. Lu, W. B. Pennebaker, L. Mitchell and G. G. Langdon presented an adaptive binary arithmetic coding (ABAC) algorithm called Q-Coder. The above patents and research papers, along with several others from IBM and Mitsubishi Electric, were later cited by the CCITT and Joint Photographic Experts Group as the basis for the JPEG image compression format's adaptive binary arithmetic coding algorithm in 1992. However, encoders and decoders of the JPEG file format, which has options for both Huffman encoding and arithmetic coding, typically only support the Huffman encoding option, originally because of patent concerns; JPEG's arithmetic coding patents have since expired due to the age of the JPEG standard. The first reported use of adaptive binary arithmetic coding in motion video compression was in a proposal by IBM researchers to the MPEG group in 1989. 
This proposal extended the use of arithmetic coding from intraframe JPEG to interframe video coding. In 1999, Youngjun Yoo (Texas Instruments), Young Gap Kwon and Antonio Ortega (University of Southern California) presented a context-adaptive form of binary arithmetic coding. The modern context-adaptive binary arithmetic coding (CABAC) algorithm was commercially introduced with the H.264/MPEG-4 AVC format in 2003. The majority of patents for the AVC format are held by Panasonic, Godo Kaisha IP Bridge and LG Electronics. See also Arithmetic coding Data compression Lossless compression Context-adaptive variable-length coding (CAVLC) References Audiovisual introductions in 2003 Entropy coding MPEG Video compression Data compression
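To make the adapt-and-code loop described above concrete, the following sketch implements a toy adaptive binary arithmetic coder in Python. It is not the standardized CABAC engine (real CABAC is multiplication-free, table-driven, keeps 64 quantized probability states per context, and defines specific binarization schemes); it only illustrates the principle of coding bins with per-context, adaptively updated probability estimates. The class names and the simple frequency-count model are our own choices.

```python
# Didactic sketch (not the standardized CABAC engine): an adaptive binary
# arithmetic coder driven by per-context frequency counts. Real CABAC is
# multiplication-free and table-driven with 64 probability states; the class
# names and the simple counting model below are our own choices.

PRECISION = 32
TOP = 1 << PRECISION
HALF, QUARTER, MASK = TOP >> 1, TOP >> 2, TOP - 1

class Context:
    """Adaptive probability model for one class of bins."""
    def __init__(self):
        self.counts = [1, 1]                      # observed 0s and 1s
    def scaled_p0(self, span):
        return self.counts[0] * span // (self.counts[0] + self.counts[1])
    def update(self, bit):
        self.counts[bit] += 1
        if sum(self.counts) > 1024:               # favour recent statistics
            self.counts = [max(1, c // 2) for c in self.counts]

class Encoder:
    def __init__(self):
        self.low, self.high, self.pending, self.out = 0, MASK, 0, []
    def _emit(self, bit):
        self.out.append(bit)
        self.out.extend([1 - bit] * self.pending)
        self.pending = 0
    def encode(self, bit, ctx):
        split = self.low + ctx.scaled_p0(self.high - self.low + 1) - 1
        if bit == 0:
            self.high = split
        else:
            self.low = split + 1
        ctx.update(bit)
        while True:                               # interval renormalisation
            if self.high < HALF:
                self._emit(0)
            elif self.low >= HALF:
                self._emit(1); self.low -= HALF; self.high -= HALF
            elif self.low >= QUARTER and self.high < 3 * QUARTER:
                self.pending += 1; self.low -= QUARTER; self.high -= QUARTER
            else:
                break
            self.low, self.high = self.low << 1, (self.high << 1) | 1
    def finish(self):
        self.pending += 1
        self._emit(0 if self.low < QUARTER else 1)
        return self.out

class Decoder:
    def __init__(self, bits):
        self.bits = list(bits) + [0] * PRECISION  # zero padding for the tail
        self.pos = PRECISION
        self.code = int("".join(map(str, self.bits[:PRECISION])), 2)
        self.low, self.high = 0, MASK
    def decode(self, ctx):
        split = self.low + ctx.scaled_p0(self.high - self.low + 1) - 1
        bit = 0 if self.code <= split else 1
        if bit == 0:
            self.high = split
        else:
            self.low = split + 1
        ctx.update(bit)                           # mirror the encoder's model update
        while True:
            if self.high < HALF:
                pass
            elif self.low >= HALF:
                self.low -= HALF; self.high -= HALF; self.code -= HALF
            elif self.low >= QUARTER and self.high < 3 * QUARTER:
                self.low -= QUARTER; self.high -= QUARTER; self.code -= QUARTER
            else:
                break
            self.low, self.high = self.low << 1, (self.high << 1) | 1
            self.code = (self.code << 1) | self.bits[self.pos]
            self.pos += 1
        return bit

if __name__ == "__main__":
    import random
    random.seed(1)
    bins = [1 if random.random() < 0.1 else 0 for _ in range(2000)]  # skewed source
    enc, ctx = Encoder(), Context()
    for b in bins:
        enc.encode(b, ctx)
    stream = enc.finish()
    dec, ctx2 = Decoder(stream), Context()        # decoder rebuilds the same model
    assert [dec.decode(ctx2) for _ in bins] == bins
    print(len(bins), "bins ->", len(stream), "coded bits")
```

Run on a skewed binary source, the adaptive context quickly learns the bias, and the number of coded bits approaches the entropy of the input rather than one bit per bin; this is the same effect that context modeling exploits in CABAC.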
Context-adaptive binary arithmetic coding
[ "Technology" ]
1,647
[ "Multimedia", "MPEG" ]
11,664,498
https://en.wikipedia.org/wiki/List%20of%20sequenced%20bacterial%20genomes
This list of sequenced eubacterial genomes contains most of the eubacteria known to have publicly available complete genome sequences. Most of these sequences have been placed in the International Nucleotide Sequence Database Collaboration, a public database which can be searched on the web. A few of the listed genomes may not be in the INSDC database, but in other public databases. Genomes listed as "Unpublished" are in a database, but not in the peer-reviewed scientific literature. For the genomes of archaea see list of sequenced archaeal genomes. Abditibacteriota Actinomycetota Aquificota Armatimonadota Bacteroidota/Chlorobiota group Caldisericota Chlamydiota/Verrucomicrobiota group Chloroflexota Chrysiogenota Cyanobacteria Deferribacterota Deinococcota Dictyoglomota Elusimicrobiota Fibrobacterota/Acidobacteriota group Bacillota Fusobacteriota Gemmatimonadota Nitrospirota Planctomycetota Pseudomonadota Alphaproteobacteria Betaproteobacteria Gammaproteobacteria Zetaproteobacteria Myxococcota–Campylobacterota Spirochaetota Synergistota Mycoplasmatota Thermodesulfobacteriota Thermotogota See also Genome project Human microbiome project List of sequenced eukaryotic genomes List of sequenced archaeal genomes List of sequenced plastomes References In silico analysis of complete bacterial genomes: PCR, AFLP–PCR and endonuclease restriction Combining diverse evidence for gene recognition in completely sequenced bacterial genomes Intragenomic heterogeneity between multiple 16S ribosomal RNA operons in sequenced bacterial genomes External links BacMap — an up-to-date electronic atlas of annotated bacterial genomes SUPERFAMILY comparative genomics database Includes genomes of completely sequenced prokaryotes, and sophisticated datamining plus visualisation tools for analysis Bacterial genomes Bacterial genomes Lists of bacteria Pathogen genomics
List of sequenced bacterial genomes
[ "Engineering", "Biology" ]
478
[ "Lists of sequenced genomes", "Lists of bacteria", "Genetic engineering", "Molecular genetics", "DNA sequencing", "Bacteria", "Genome projects", "Pathogen genomics" ]
11,664,690
https://en.wikipedia.org/wiki/Magnetofection
Magnetofection is a transfection method that uses magnetic fields to concentrate particles containing vectors to target cells in the body. Magnetofection has been adapted to a variety of vectors, including nucleic acids, non-viral transfection systems, and viruses. This method offers advantages such as high transfection efficiency and biocompatibility, which are balanced against its limitations. Mechanism Principle The term magnetofection, currently trademarked by the company OZ Biosciences, combines the words magnetic and transfection. Magnetofection uses nucleic acids associated with magnetic nanoparticles. These molecular complexes are then concentrated and transported into cells using an applied magnetic field. Synthesis The magnetic nanoparticles are typically made from iron oxide, which is fully biodegradable, using methods such as coprecipitation or microemulsion. The nanoparticles are then combined with gene vectors (DNA, siRNA, ODN, virus, etc.). One method involves linking viral particles to magnetic particles using an avidin-biotin interaction. Viruses can also bind to the nanoparticles via hydrophobic interaction. Another synthesis method involves coating magnetic nanoparticles with cationic lipids or polymers via salt-induced aggregation. For example, nanoparticles may be conjugated with polyethylenimine (PEI), a positively charged polymer commonly used as a transfection agent. The PEI solution must have a high pH during synthesis to encourage high gene expression. The positively charged nanoparticles can then associate with negatively charged nucleic acids via electrostatic interaction. Cellular uptake Magnetic particles loaded with vectors are concentrated on the target cells by the influence of an external magnetic field. The cells then take up genetic material naturally via endocytosis and pinocytosis. Consequently, membrane architecture and structure stay intact, in contrast to other physical transfection methods such as electroporation or gene guns that damage the cell membrane. The nucleic acids are then released into the cytoplasm by different mechanisms depending upon the formulation used: the proton sponge effect, in which cationic polymers coated on the nanoparticles promote osmotic swelling of the endosome, disruption of the endosome membrane and intracellular release of the DNA; the destabilization of the endosome by cationic lipids coated on the particles, which release the nucleic acid into the cell by flip-flop of negatively charged cell lipids and charge neutralization; and the viral infection mechanism. Magnetofection works with cells that are non-dividing or slowly dividing, meaning that the genetic material can reach the cell nucleus without cell division. Applications Magnetofection has been tested on a broad range of cell lines, hard-to-transfect cells and primary cells. Several optimized and efficient magnetic nanoparticle formulations have been specifically developed for several types of applications such as DNA, siRNA, and primary neuron transfection as well as viral applications. Magnetofection research is currently in the preclinical stage. This technique has primarily been tested in vivo using plasmid DNA in mouse, rat, and rabbit models for applications in the hippocampus, subcutaneous tumors, lungs, spinal cord, and muscle. Some applications include: Delivery of the GFP gene into primary neural stem cells, which are typically difficult to transfect, with 18% efficacy with a static magnetic field and 32% efficacy with an oscillating field. 
Delivery of oligodeoxynucleotides (ODN) into human umbilical vein endothelial cells with 84% efficiency. Delivery of siRNA to HeLa cells to knock down a luciferase reporter gene. Delivery of adenoviral vectors to primary human peripheral blood lymphocytes. Advantages Magnetofection attempts to unite the advantages of biochemical (cationic lipids or polymers) and physical (electroporation, gene gun) transfection methods. It allows for local delivery with high transfection efficiency, faster incubation time, and biocompatibility. Transfection efficiency Coupling magnetic nanoparticles to gene vectors results in a hundreds-fold increase in the uptake of these vectors on a time scale of minutes, thus leading to high transfection efficiency. Gene vector and magnetic nanoparticle complexes are transfected into cells after 10–15 minutes, which is faster than the 2–4 hours that other transfection methods require. After 24, 48 or 72 hours, most of the particles are localized in the cytoplasm, in vacuoles (membrane-surrounded structures inside cells) and occasionally in the cell nucleus. Biocompatibility Magnetic nanoparticles do not aggregate easily once the magnet is removed, and therefore are unlikely to block capillaries or cause thrombosis. In addition, iron oxide is biodegradable, and the iron can be reused in hemoglobin or iron metabolism pathways. Disadvantages Particle variability Magnetic nanoparticle synthesis can sometimes lead to a wide range of differently sized particles. The size of particles can influence their usefulness. Specifically, nanoparticles that are less than 10 nm or greater than 200 nm in size tend to be cleared from the body more quickly. Localization in vivo While magnets can be used to localize magnetic nanoparticles to desired cells, this mechanism may be difficult to maintain in practice. The nanoparticles can be concentrated in 2D space such as on a culture plate or at the surface of the body, but it can be more difficult to localize them in the 3D space of the body. Magnetofection does not work well for organs or blood vessels far from the surface of the body, since the magnetic field weakens as distance increases. In addition, the user must consider the frequency and timing of applying the magnetic field, as the particles will not necessarily stay in the desired location once the magnet is removed. Cytotoxicity While the iron oxide used to make the nanoparticles is biodegradable, the toxicity of magnetic nanoparticles is still under investigation. Some research has found no signs of damage to cells, while other studies claim that small (< 2 nm) nanoparticles can diffuse across cell membranes and disrupt organelles. In addition, very high concentrations of iron oxide can disrupt homeostasis and lead to iron overload, which can damage or alter DNA, affect cellular responses, and kill cells. Lysosomes can also digest the nanoparticles and release free iron, which can react with hydrogen peroxide to form free radicals, leading to cytotoxic, mutagenic, and carcinogenic effects. References Further reading See also Magnet-assisted transfection Molecular biology Molecular genetics Laboratory techniques Biomagnetics
Magnetofection
[ "Chemistry", "Biology" ]
1,390
[ "Biomagnetics", "Molecular genetics", "Molecular biology", "Biochemistry" ]
11,664,784
https://en.wikipedia.org/wiki/Fizeau%20experiment
The Fizeau experiment was carried out by Hippolyte Fizeau in 1851 to measure the relative speeds of light in moving water. Fizeau used a special interferometer arrangement to measure the effect of movement of a medium upon the speed of light. According to the theories prevailing at the time, light traveling through a moving medium would be dragged along by the medium, so that the measured speed of the light would be a simple sum of its speed through the medium plus the speed of the medium. Fizeau indeed detected a dragging effect, but the magnitude of the effect that he observed was far lower than expected. When he repeated the experiment with air in place of water he observed no effect. His results seemingly supported the partial aether-drag hypothesis of Augustin-Jean Fresnel, a situation that was disconcerting to most physicists. Over half a century passed before a satisfactory explanation of Fizeau's unexpected measurement was developed with the advent of Albert Einstein's theory of special relativity. Einstein later pointed out the importance of the experiment for special relativity, in which it corresponds to the relativistic velocity-addition formula when restricted to small velocities. Although it is referred to as the Fizeau experiment, Fizeau was an active experimenter who carried out a wide variety of different experiments involving measuring the speed of light in various situations. Background As scientists in the 1700s worked on a theory of light and of electromagnetism, the luminiferous aether, a medium that would support waves, was the focus of many experiments. Two critical issues were the relation of the aether to motion and its relation to matter. For example, astronomical aberration, the apparent motion of stars observed at different times of year, was proposed to be related to starlight propagating through an aether. In 1818 Fresnel proposed that the portion of the aether that moves with an object relates to the object's index of refraction of light, which was taken to be the ratio of the speed of light in interstellar space to the speed of light in the material. Having recently measured the speed of light in air and water, Fizeau set out to measure the speed of light in moving water. Experimental setup A highly simplified representation of Fizeau's 1851 experiment is presented in Fig. 2. Incoming light is split into two beams by a beam splitter (BS) and passed through two columns of water flowing in opposite directions. The two beams are then recombined to form an interference pattern that can be interpreted by an observer. The simplified arrangement illustrated in Fig. 2 would have required the use of monochromatic light, which would have enabled only dim fringes. Because of white light's short coherence length, use of white light would have required matching up the optical paths to an impractical degree of precision, and the apparatus would have been extremely sensitive to vibration, motion shifts, and temperature effects. Fizeau's actual apparatus, illustrated in Fig. 3 and Fig. 4, was set up as a common-path interferometer. This guaranteed that the opposite beams would pass through equivalent paths, so that fringes readily formed even when using the sun as a light source. A light ray emanating from the source is reflected by a beam splitter G and is collimated into a parallel beam by lens L. After passing the slits O1 and O2, two rays of light travel through the tubes A1 and A2, through which water is streaming back and forth as shown by the arrows. 
The rays reflect off a mirror m at the focus of a lens, so that one ray always propagates in the same direction as the water stream, and the other ray opposite to the direction of the water stream. After passing back and forth through the tubes, both rays unite at S, where they produce interference fringes that can be visualized through the illustrated eyepiece. The interference pattern can be analyzed to determine the speed of light traveling along each leg of the tube. Result Fizeau's experiment showed a faster speed of light in water moving in the same direction as the light and a slower speed when the water moved opposite to the light. However, the difference in the speed of light was only a fraction of the water speed. Interpreted in terms of the aether theory, the water seemed to drag the aether and thus the light propagation, but only partially. Impact At the time of Fizeau's experiment, two different models of how the aether related to moving bodies were discussed: Fresnel's partial drag hypothesis and George Stokes' complete aether drag hypothesis. Augustin-Jean Fresnel (1818) had proposed his model to explain an 1810 experiment by Arago. In 1845 Stokes showed that complete aether drag could also explain it. Since Fresnel had no model to explain partial drag, scientists favored Stokes' explanation. According to Stokes' hypothesis, the speed of light should be increased or decreased when "dragged" along by the water through the aether frame, dependent upon the direction. The overall speed of a beam of light should be a simple additive sum of its speed through the water plus the speed of the water. That is, if n is the index of refraction of water, so that c/n is the speed of light in stationary water, then for water with velocity v the predicted speed of light w in one arm would be w = c/n + v, and the predicted speed in the other arm would be w = c/n - v. Hence light traveling against the flow of water should be slower than light traveling with the flow of water. The interference pattern between the two beams when the light is recombined at the observer depends upon the transit times over the two paths. However, Fizeau found that w = c/n + v(1 - 1/n²). In other words, light appeared to be dragged by the water, but the magnitude of the dragging was much lower than expected. The Fizeau experiment forced physicists to accept the empirical validity of Fresnel's model: a medium moving through the stationary aether drags light propagating through it with only a fraction of the medium's speed, with a dragging coefficient f related to the index of refraction, f = 1 - 1/n². Although Fresnel's hypothesis was empirically successful in explaining Fizeau's results, many experts in the field, including Fizeau himself, found Fresnel's hypothesis of partial aether-dragging unsatisfactory. Fresnel had found an empirical formula that worked, but no mechanical model of the aether was used to derive it. Confirmation Wilhelm Veltmann's colors of light In 1870 Wilhelm Veltmann demonstrated that Fresnel's formula worked for different frequencies (colors) of light. According to Fresnel's model this would imply different amounts of aether drag for different colors of light, and the velocity of white light, a mixture of colors, would then be unexplained. Hoek experiment An indirect confirmation of Fresnel's dragging coefficient was provided by Martin Hoek (1868). His apparatus was similar to Fizeau's, though in his version only one arm contained an area filled with resting water, while the other arm was in the air. 
As seen by an observer resting in the aether, Earth, and hence the water, is in motion. Hoek therefore calculated the travel times of two light rays traveling in opposite directions around the circuit (neglecting the transverse direction, see image). The travel times are not the same, which should be indicated by an interference shift. However, if Fresnel's dragging coefficient is applied to the water in the aether frame, the travel time difference (to first order in v/c) vanishes. Upon turning the apparatus table 180 degrees, altering the direction of a hypothetical aether wind, Hoek obtained a null result, confirming Fresnel's dragging coefficient. In the particular version of the experiment shown here, Hoek used a prism P to disperse light from a slit into a spectrum which passed through a collimator C before entering the apparatus. With the apparatus oriented parallel to the hypothetical aether wind, Hoek expected the light in one circuit to be retarded 7/600 mm with respect to the other. Where this retardation represented an integral number of wavelengths, he expected to see constructive interference; where this retardation represented a half-integral number of wavelengths, he expected to see destructive interference. In the absence of dragging, his expectation was for the observed spectrum to be continuous with the apparatus oriented transversely to the aether wind, and to be banded with the apparatus oriented parallel to the aether wind. His actual experimental results were completely negative. Mascart's birefringence experiment Éleuthère Mascart (1872) demonstrated that polarized light traveling through a birefringent medium propagates with velocities in accordance with Fresnel's empirical formula. However, this result, interpreted in terms of Fresnel's physical model, would require a different amount of aether drag in different directions in the medium. Michelson and Morley confirmation Albert A. Michelson and Edward W. Morley (1886) repeated Fizeau's experiment with improved accuracy, addressing several concerns with Fizeau's original experiment: (1) Deformation of the optical components in Fizeau's apparatus could cause artifactual fringe displacement; (2) observations were rushed, since the pressurized flow of water lasted only a short time; (3) the laminar flow profile of water flowing through Fizeau's small-diameter tubes meant that only their central portions were available, resulting in faint fringes; (4) there were uncertainties in Fizeau's determination of flow rate across the diameter of the tubes. Michelson redesigned Fizeau's apparatus with larger-diameter tubes and a large reservoir providing three minutes of steady water flow. His common-path interferometer design provided automatic compensation of path length, so that white light fringes were visible at once as soon as the optical elements were aligned. Topologically, the light path was that of a Sagnac interferometer with an even number of reflections in each light path. This offered extremely stable fringes that were, to first order, completely insensitive to any movement of its optical components. The stability was such that it was possible for him to insert a glass plate at h or even to hold a lighted match in the light path without displacing the center of the fringe system. Using this apparatus, Michelson and Morley were able to completely confirm Fizeau's results not just in water, but also in air. 
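To give a sense of the size of the partial-drag effect that these experiments confirmed, the following short Python calculation (an illustration with assumed round-number inputs, not the parameters of Fizeau's or Michelson's actual apparatus) compares the simple additive prediction, the Fresnel partial-drag prediction, and the exact relativistic velocity addition that is discussed under "Modern interpretation" below.

```python
# Illustrative comparison of three predictions for light travelling with a water
# flow: simple addition ("full drag"), Fresnel's partial drag, and the exact
# relativistic velocity addition. The water speed is an assumed round number,
# not the value used in Fizeau's or Michelson's actual apparatus.

c = 299_792_458.0          # speed of light in vacuum, m/s
n = 1.333                  # refractive index of water (approximate)
v = 7.0                    # assumed water flow speed, m/s

u = c / n                                   # speed of light in still water
full_drag = u + v                           # simple additive sum
fresnel   = u + v * (1.0 - 1.0 / n**2)      # Fresnel dragging coefficient
exact     = (u + v) / (1.0 + u * v / c**2)  # relativistic velocity addition

print("speed in still water   :", u)
print("excess, simple addition:", full_drag - u)   # equals v
print("excess, Fresnel drag   :", fresnel - u)     # about 0.44 * v for water
print("excess, relativistic   :", exact - u)       # agrees with Fresnel to first order
```

For water the dragging coefficient is about 0.44, so only about 3 m/s of a 7 m/s flow is "added" to the speed of light in the water, and the Fresnel and relativistic values differ only at second order in v/c.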
Zeeman and Lorentz's improved formula In 1895, Hendrik Lorentz predicted the existence of an extra term due to dispersion, so that the dragging coefficient becomes f = 1 - 1/n² - (λ/n)(dn/dλ). Since the medium is flowing towards or away from the observer, the light traveling through the medium is Doppler-shifted, and the refractive index used in the formula has to be that appropriate to the Doppler-shifted wavelength. Zeeman verified the existence of Lorentz' dispersion term in 1915. Using a scaled-up version of Michelson's apparatus connected directly to Amsterdam's main water conduit, Zeeman was able to perform extended measurements using monochromatic light ranging from violet (4358 Å) through red (6870 Å) to confirm Lorentz's modified coefficient. Later confirmations In 1910, Franz Harress used a rotating device and overall confirmed Fresnel's dragging coefficient. However, he additionally found a "systematic bias" in the data, which later turned out to be the Sagnac effect. Since then, many experiments have been conducted measuring such dragging coefficients in a diversity of materials of differing refractive index, often in combination with the Sagnac effect, for instance in experiments using ring lasers together with rotating disks, or in neutron interferometric experiments. A transverse dragging effect has also been observed, i.e. when the medium is moving at right angles to the direction of the incident light. Lorentz's interpretation In 1892, Hendrik Lorentz proposed a modification of Fresnel's model, in which the aether is completely stationary. He succeeded in deriving Fresnel's dragging coefficient as the result of an interaction between the moving water and an undragged aether. He also discovered that the transition from one reference frame to another could be simplified by using an auxiliary time variable which he called local time, t' = t - vx/c². In 1895, Lorentz more generally explained Fresnel's coefficient based on the concept of local time. However, Lorentz's theory had the same fundamental problem as Fresnel's: a stationary aether contradicted the Michelson–Morley experiment. So in 1892 Lorentz proposed that moving bodies contract in the direction of motion (the FitzGerald-Lorentz contraction hypothesis, since George FitzGerald had already arrived at this conclusion in 1889). The equations that he used to describe these effects were further developed by him until 1904. These are now called the Lorentz transformations in his honor, and are identical in form to the equations that Einstein was later to derive from first principles. Unlike Einstein's equations, however, Lorentz's transformations were strictly ad hoc, their only justification being that they seemed to work. Einstein's use of Fizeau's experiment Einstein showed how Lorentz's equations could be derived as the logical outcome of a set of two simple starting postulates. In addition, Einstein recognized that the stationary aether concept has no place in special relativity, and that the Lorentz transformation concerns the nature of space and time. Together with the moving magnet and conductor problem, the negative aether drift experiments, and the aberration of light, the Fizeau experiment was one of the key experimental results that shaped Einstein's thinking about relativity. Robert S. Shankland reported some conversations with Einstein, in which Einstein emphasized the importance of the Fizeau experiment. Modern interpretation Max von Laue (1907) demonstrated that the Fresnel drag coefficient can be explained as a natural consequence of the relativistic formula for addition of velocities. 
The speed of light in immobile water is c/n. From the velocity composition law it follows that the speed of light observed in the laboratory, where water is flowing with speed v (in the same direction as the light), is V = (c/n + v) / (1 + v/(nc)). Thus the difference in speed is (assuming v is small compared to c, dropping higher-order terms) V - c/n ≈ v(1 - 1/n²). This is accurate when v/c ≪ 1, and agrees with the formula based upon Fizeau's measurements, which satisfied the condition v ≪ c. Alternatively, the Fizeau result can be derived by applying Maxwell's equations to a moving medium. See also Tests of special relativity Aether drag hypothesis History of special relativity References Secondary sources Primary sources Physics experiments 1851 in science
Fizeau experiment
[ "Physics" ]
3,031
[ "Experimental physics", "Physics experiments" ]
11,665,110
https://en.wikipedia.org/wiki/Skylark%20launch%20tower
A Skylark tower was a tower used for the launch of earlier versions of Skylark rockets. As Skylark rockets had no guidance system and accelerated slowly, they required a safe launch tower with a height of at least 24 metres, with its own guidance system. Later versions of the Skylark rocket were equipped with a more powerful engine and therefore did not need such a large guidance tower for launch. Woomera In 1956, a 30 metre tall swivelling launch tower was set up on launch site 2, at Woomera, South Australia at 30.942947° S 136.520678° E. The tower was built of old Bailey bridge segments, weighing 35 tons together. It has since been demolished. Salto di Quirra At Salto di Quirra, Sardinia in 1965, a 30 metre tall Skylark tower was erected at 39°36'3"N 9°26'47"E. The tower ceased to be used in 1972, at which point launches moved to Esrange. The tower remains today. Esrange At Esrange, Sweden in 1972, a 30 metre high Skylark tower was built at 67°53'35"N 21°6'25"E. The tower consists of a pyramid-like building with a launch tower on its top; this protects the rocket from the cold before launch, which is necessary as Esrange is within the Arctic Circle. At launch, exhaust doors were opened to enable the exhaust to leave the structure. As Skylark rockets are no longer produced, the Esrange Skylark launch tower was modified in 2005 for launching Brazilian VSB-30 rockets. The tower is now used for launches of rockets manufactured in Brazil. References Description in DORADO, José M. Spain and the European Space Effort. Studies in Modern Science and Technology from the International Academy of the History of Science, Volume 5. Beauchesne. Paris, 2008, pp 75–119 External links European Space Agency: In Brief Skyscraper Page – features a diagram of the Woomera Skylark Tower Skyscraper Page – features a diagram of the Esrange Skylark Tower University of Leicester: The Skylark Sounding Rocket Rocket launch sites
Skylark launch tower
[ "Astronomy" ]
444
[ "Rocketry stubs", "Astronomy stubs" ]
11,665,200
https://en.wikipedia.org/wiki/SQL/PSM
SQL/PSM (SQL/Persistent Stored Modules) is an ISO standard mainly defining an extension of SQL with a procedural language for use in stored procedures. Initially published in 1996 as an extension of SQL-92 (ISO/IEC 9075-4:1996, a version sometimes called PSM-96 or even SQL-92/PSM), SQL/PSM was later incorporated into the multi-part SQL:1999 standard, and has been part 4 of that standard since then, most recently in SQL:2023. The SQL:1999 part 4 covered less than the original PSM-96 because the SQL statements for defining, managing, and invoking routines were actually incorporated into part 2 SQL/Foundation, leaving only the procedural language itself as SQL/PSM. The SQL/PSM facilities are still optional as far as the SQL standard is concerned; most of them are grouped in Features P001-P008. SQL/PSM standardizes syntax and semantics for control flow, exception handling (called "condition handling" in SQL/PSM), local variables, assignment of expressions to variables and parameters, and (procedural) use of cursors. It also defines an information schema (metadata) for stored procedures. SQL/PSM is one language in which methods for the SQL:1999 structured types can be defined. The other is Java, via SQL/JRT. SQL/PSM is derived, seemingly directly, from Oracle's PL/SQL. Oracle developed PL/SQL and released it in 1991, basing the language on the US Department of Defense's Ada programming language. However, Oracle has maintained a distance from the standard in its documentation. IBM's SQL PL (used in DB2) and Mimer SQL's PSM were the first two products officially implementing SQL/PSM. It is commonly thought that these two languages, and perhaps also MySQL/MariaDB's procedural language, are closest to the SQL/PSM standard. However, a PostgreSQL addon implements SQL/PSM (alongside its other procedural languages like the PL/SQL-derived plpgsql), although it is not part of the core product. RDF functionality in OpenLink Virtuoso was developed entirely through SQL/PSM, combined with custom datatypes (e.g., ANY for handling URI and Literal relation objects), sophisticated indexing, and flexible physical storage choices (column-wise or row-wise). See also The following implementations adopt the standard, but they are not 100% compatible to SQL/PSM: Open source: HSQLDB stored procedures and functions MySQL stored procedures MariaDB stored procedures OpenLink Virtuoso SQL Procedures (VSP) PostgreSQL PL/pgSQL Proprietary: Oracle PL/SQL Microsoft and Sybase Transact-SQL Invantive Procedural SQL Mimer SQL stored procedures References Further reading Jim Melton, Understanding SQL's Stored Procedures: A Complete Guide to SQL/PSM, Morgan Kaufmann Publishers, 1998, Data management SQL Data-centric programming languages Programming languages created in 1996
SQL/PSM
[ "Technology" ]
651
[ "Data management", "Data" ]
11,665,275
https://en.wikipedia.org/wiki/Power%20good%20signal
The Power Good signal (power-good) is a signal provided by a computer power supply to indicate to the motherboard that all of the voltages are within specification and that the system may proceed to boot and operate. ATX Power Good The ATX specification defines the Power-Good signal as a +5-volt (V) signal generated in the power supply when it has passed its internal self-tests and the outputs have stabilized. This normally takes between 0.1 and 0.5 seconds after the power supply is switched on. The signal is then sent to the motherboard, where it is received by the processor timer chip that controls the reset line to the processor. The ATX specification requires that the power-good signal ("PWR_OK") go high no sooner than 100 ms after the power rails have stabilized, remain high for at least 16 ms after loss of AC power, and fall (to less than 0.4 V) at least 1 ms before the power rails fall out of specification (to 95% of their nominal value). Cheaper and/or lower-quality power supplies do not follow the ATX specification of a separate monitoring circuit; they instead wire the power-good output to one of the +5 V lines. This means the processor will never reset given bad power unless the +5 V line drops low enough to turn off the trigger, which could be too low for proper operation. Power Good values The power-good value is based on the delay, in milliseconds, that a power supply takes to become fully ready. Power good values are often considered abnormal if detected lower than 100 ms or higher than 500 ms. References External links (Wayback Machine | 31.01.2019) Power Good article on pcguide (Wayback Machine | 22.11.2009) ATX12V power supply design guide 2.01 ATX12V power supply design guide 2.01 Desktop Platform Form Factors Power Supply Computer jargon Power supplies
Power good signal
[ "Technology" ]
379
[ "Natural language and computing", "Computer jargon", "Computing terminology" ]
11,665,297
https://en.wikipedia.org/wiki/Slip%20%28telecommunication%29
In telecommunications, a slip is a positional displacement in a sequence of transmitted symbols that causes the loss or insertion of one or more symbols. Slips are usually caused by inadequate synchronization of the two clocks controlling the transmission or by poor reception of the signal. References Federal Standard 1037C Synchronization Telecommunication theory
Slip (telecommunication)
[ "Engineering" ]
67
[ "Telecommunications engineering", "Synchronization" ]
11,665,456
https://en.wikipedia.org/wiki/Slip%20%28vehicle%20dynamics%29
In (automotive) vehicle dynamics, slip is the relative motion between a tire and the road surface it is moving on. This slip can be generated either by the tire's rotational speed being greater or less than the free-rolling speed (usually described as percent slip), or by the tire's plane of rotation being at an angle to its direction of motion (referred to as slip angle). In rail vehicle dynamics, this overall slip of the wheel relative to the rail is called creepage. It is distinguished from the local sliding velocity of surface particles of wheel and rail, which is called micro-slip. Longitudinal slip The longitudinal slip is generally given as a percentage of the difference between the surface speed of the wheel and the speed of the axle relative to the road surface: slip = 100% × (Ω r - v) / v, where Ω is the rotational speed of the wheel, r is the wheel radius at the point of contact and v is the vehicle speed in the plane of the tire. A positive slip indicates that the wheels are spinning; negative slip indicates that they are skidding. Locked brakes, Ω = 0, mean that slip = -100%: the tire is sliding without rotating. Rotation with no forward velocity, Ω ≠ 0 and v = 0, means that the slip ratio becomes infinite. Lateral slip The lateral slip of a tire is the angle between the direction it is moving and the direction it is pointing. This can occur, for instance, in cornering, and is enabled by deformation in the tire carcass and tread. Despite the name, no actual sliding is necessary for small slip angles. Sliding may occur, starting at the rear of the contact patch, as slip angle increases. The slip angle α can be defined as the arctangent of the ratio of the lateral velocity to the longitudinal velocity of the wheel: α = arctan(v_y / |v_x|). References See also Contact patch Frictional contact mechanics Aristotle's wheel paradox Explanation with animation of the elastic slip website tec-science.com Tires Motorcycle dynamics
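As a quick illustration of the two definitions above, the following Python snippet evaluates the longitudinal slip ratio and the slip angle for a few assumed values; the wheel radius, speeds and angular velocities are made up for the example and do not come from any source.

```python
# Worked example of the longitudinal slip ratio and the lateral slip angle
# defined above. All numbers are illustrative assumptions.
import math

def longitudinal_slip_percent(omega, r, v):
    """Percent slip: positive for a spinning (driven) wheel, negative for a skidding one."""
    return 100.0 * (omega * r - v) / v

def slip_angle_deg(v_lateral, v_longitudinal):
    """Angle between where the wheel points and where it actually travels."""
    return math.degrees(math.atan2(v_lateral, abs(v_longitudinal)))

r, v = 0.30, 20.0                                  # wheel radius (m), vehicle speed (m/s)
print(longitudinal_slip_percent(70.0, r, v))       # wheel over-spinning:   +5.0 %
print(longitudinal_slip_percent(60.0, r, v))       # wheel under-rotating: -10.0 %
print(longitudinal_slip_percent(0.0, r, v))        # locked wheel:        -100.0 %
print(slip_angle_deg(1.0, 20.0))                   # about 2.86 degrees of slip angle
```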
Slip (vehicle dynamics)
[ "Physics" ]
351
[ "Classical mechanics stubs", "Classical mechanics" ]
11,665,692
https://en.wikipedia.org/wiki/Selenonic%20acid
A selenonic acid is an organoselenium compound containing the –SeO3H functional group. The formula of selenonic acids is RSeO3H (also written R–SeO2–OH), where R is an organyl group. Selenonic acids are the selenium analogs of sulfonic acids. Examples of the acid are rare. Benzeneselenonic acid, PhSeO3H (where Ph stands for phenyl), is a white solid. It can be prepared by the oxidation of benzeneselenol. See also Selenenic acid Seleninic acid References Functional groups
Selenonic acid
[ "Chemistry" ]
107
[ "Functional groups", "Organic chemistry stubs" ]
11,666,975
https://en.wikipedia.org/wiki/Fermentation%20starter
A fermentation starter (called simply starter within the corresponding context, sometimes called a mother) is a preparation to assist the beginning of the fermentation process in preparation of various foods and alcoholic drinks. Food groups where they are used include breads, especially sourdough bread, and cheese. A starter culture is a microbiological culture which actually performs fermentation. These starters usually consist of a cultivation medium, such as grains, seeds, or nutrient liquids that have been well colonized by the microorganisms used for the fermentation. These starters are formed using a specific cultivation medium and a specific mix of fungal and bacterial strains. Typical microorganisms used in starters include various bacteria and fungi (yeasts and molds): Rhizopus, Aspergillus, Mucor, Amylomyces, Endomycopsis, Saccharomyces, Hansenula anomala, Lactobacillus, Acetobacter, etc. Various national cultures have various active ingredients in starters, and often involve mixed microflora. Industrial starters include various enzymes, in addition to microflora. National names In descriptions of national cuisines, fermentation starters may be referred to by their national names: Qū (simplified: 曲; traditional: 麴, also romanized as chu) (China) Jiuqu (): the starter used for making Chinese alcoholic beverages Laomian ( ): Chinese sourdough starter commonly used in Northern Chinese cuisine, the sourness of the starter is commonly quenched with sodium carbonate prior to use. Mae dombae or mae sra () (Cambodia) Meju () (Korea) Nuruk () (Korea) Koji (麹) (Japan) Ragi tapai (Indonesia and Malaysia) Bakhar, ranu, marchaar (murcha), Virjan (India) Bubod, tapay, budbud (Philippines) Loogpaeng, loog-pang, or look-pang () (Thailand) Levain (France) Bread zakvaska (закваска, sourdough) (Russia, Ukraine) or zakwas (Poland) Opara (опара), a starter based on yeast (Russia) Juuretis (Estonia) See also Bread starter Leaven Malting Symbiotic culture of bacteria and yeast References Brewing Fermentation in food processing
Fermentation starter
[ "Chemistry" ]
502
[ "Fermentation in food processing", "Fermentation" ]
11,667,414
https://en.wikipedia.org/wiki/Dzus%20fastener
The Dzus fastener, also known as a turnlock fastener or quick-action panel fastener, is a type of proprietary quarter-turn spiral cam lock fastener often used to secure skin panels on aircraft and other high-performance vehicles. It is named after its inventor, William Dzus. The Dzus brand is owned by Southco, which also produces the fasteners. History The fastener was invented and patented by William Dzus, an American engineer of Ukrainian descent, in the early 1930s. Operation Functionality To fasten the cowling (designated as part 10 in the patent) to the fuselage (11), the button's shank (13) is inserted into a hole (25) on the fuselage. A screwdriver is then used to turn the button (12) via a slot (21) in its head (14). As the button rotates, the spiral slots (16) on the shank act as cams, pulling a spring (22) into position. The projections (17) on the slots resist reverse rotation, preventing the fastener from loosening due to vibration. Optionally, felt or rubber strips (26) can be placed between the cowling and the fuselage to minimize noise. Unfastening To unfasten the cowling (10) from the fuselage (11), turn the button (12) one-quarter of a turn. This will disengage the button (12) from the spring (22). The holes (18) are large enough to allow the spring (22) to clear the projection (17) either while engaging the button (12) or disengaging it. The end of the shank (13) that has the slots (16) must be well-rounded so the spring (22) can easily enter its slots (16). Components The removable part of the Dzus fastener consists of a button (12) with a head (14) that includes a slot (21) for turning. A groove (19) on the button ensures it remains attached to the cowling (10) when unfastened. The stationary part includes the spring (22), which is riveted (24) to the fuselage. The spring has arched coils (23) between the rivets, providing the necessary tension for secure fastening. The shank (13) of the button contains spiral bayonet slots (16) that engage the spring. These slots include holes (18) that hold the spring in place once fastened, with projections (17) preventing accidental unfastening. The button's head (14) is pressed against the cowling, keeping it firmly in place. Improvements Over time, several improvements have been made to the Dzus fastener design. Some versions include a housing or bucket around the female part to reduce water ingress. Others have been optimized for ease of use, such as incorporating self-centering screwdrivers. Cost-saving measures, like securing the spring directly to the female hole without rivets, have also been introduced. Additionally, the button is often die-cast in modern versions to reduce manufacturing costs compared to earlier machined versions. Uses Dzus fasteners are also used to secure plates, doors, and panels that require frequent removal for inspection and servicing. These fasteners are notable in that they are of an "over-centre" design, requiring positive sustained torque to unfasten. Thus, any minor disturbance to the fastener (e.g., vibration) will tend to correct itself rather than proceed to further loosening as it would in threaded fasteners. Turnlock fasteners are available in several different styles and are usually referred to by the manufacturer's trade name. Some of the most common are DZUS, Camloc, and Airloc. References External links DZUS fastener Data Sheet - DZUS Standard Line Quarter-Turn Fasteners Tests on Dzus Self-Locking Fasteners – Aero Digest Fasteners
Dzus fastener
[ "Engineering" ]
841
[ "Construction", "Fasteners" ]
11,667,541
https://en.wikipedia.org/wiki/The%20End%20of%20Time%20%28book%29
The End of Time: The Next Revolution in Our Understanding of the Universe, also sold with the alternate subtitle The Next Revolution in Physics, is a 1999 popular science book in which the author Julian Barbour argues that time exists merely as an illusion. Autobiography The book begins by describing how Barbour's view of time evolved. After taking physics in graduate school, Barbour went to Cologne for Ph.D. work on Einstein's theory of gravity. However he became preoccupied with the idea proposed by Ernst Mach that time is nothing but change. A remark by Paul Dirac prompted him to reconsider some mainstream physical assumptions. He worked as a translator of Russian scientific articles and remained outside of academic institutions which provided him time to pursue his research as he desired. For some twenty years Barbour sought to reformulate physics in the spirit of Mach but found that his results have been already discovered in a different form called ADM formalism. He nearly gave up research, became involved in politics (p. 238) and began writing books on the history of physics. His interest however was rekindled after talking with Lee Smolin and reflecting on quantum mechanics. Barbour came to the conclusion that "If the Machian approach to classical dynamics is correct, quantum cosmology will have no dynamics. It will be timeless. It must also be frameless" (p. 232). He develops this view in the book. He acknowledges also that John Bell presented in 1980 a "quantum mechanics for cosmologists" which comes in close agreement with his conclusions, except on the point about the reality of time (p. 301). Possibility Barbour recounts that he read a newspaper article about Dirac's work in which he was quoted as saying: "This result has led me to doubt how fundamental the four-dimensional requirement in physics is". The nature of time as a fourth dimension or something else became the topic of research. Cognisant of the counter-intuitive nature of his fundamental claim, Barbour eases the reader into the topic by first endeavouring to persuade the reader that our experiences are, at the very least, consistent with a timeless universe, leaving aside the question as to why one would hold such a view. Barbour points out that some sciences have long done away with the "I" as a persisting identity. To take atomic theory seriously is to deny that the cat that jumps is the cat that lands, to use an illustration of Barbour's. The seething nebula of molecules of which we, cats, and all matter are made is ceaselessly rearranging at incomprehensibly fast speeds. The microcosm metamorphoses constantly, therefore one must deny there is any sense to say a cat or a person persists through time. Early on, Barbour addresses the charge that writing with tensed verbs disproves his proposal. The next revolution in physics will undermine speaking in terms of time, he says, but there is no alternative. If a universe is composed of timeless instants in the sense of configurations of matter that do not endure, one could nonetheless have the impression that time flows, Barbour asserts. The stream of consciousness and the sensation of the present, lasting about a second, is all in our heads, literally. In our brains is information about the recent past, but not as a result of a causal chain leading back to earlier instants. Rather, it is a property of thinking things, perhaps a necessary one to become thinking in the first place, that this information is present. In Barbour's words, brains are "time-capsules". 
In order to explain away the widely shared stance about past events, Barbour analyses in detail how (historical) 'records' are created. His prime example are traces in a cloud chamber to which he devotes the penultimate chapter of the book. Except for the inexistence of time, he admits that John Bell had already solved most difficulties. He investigates configuration spaces and best-matching mathematics, fleshing out how fundamental physics might deal with different instants in a timeless scheme. He calls his universe without time and only relative positions "Platonia" after Plato's world of eternal forms. Plausibility Why, then, is the instant in configuration space, not matter in space-time, the true object and frame of the universe? He marshals as evidence a non-standard analysis of relativity, many-worlds theory and the ADM formalism. Since, he believes, we should be open to physics without time, we must evaluate anew physical laws, such as the Wheeler–DeWitt equation, that take on radical but powerful and fruitful forms when time is left out. Barbour writes that our notion of time, and our insistence on it in physical theory, has held science back, and that a scientific revolution awaits. Barbour suspects that the wave function is somehow constrained by the "terrain" of Platonia. Barbour ends with a short meditation on some of the consequences of "the end of time". If there is no arrow of time, there is no becoming, but only being. "Creation" becomes something that is equally inherent in every instant. Criticism and reviews Julian Barbour's research has been published in academic journals and monographs, whereas The End of Time was aimed at a more general and philosophically minded public. A number of professional philosophers have responded to the book. Developing ideas from his book, in 2009 Barbour wrote an essay On the Nature of Time which was awarded first prize in the contest organized by FQXi. Editions The End of Time: The Next Revolution in Physics, Oxford University Press, 1999, ———, OUP USA, 2000, The End of Time: The next revolution in our understanding of the universe, Weidenfeld & Nicolson, 1999, ———, Phoenix paperback, 2000, Reviews Simon W. Saunders, "Clock Watcher", The New York Times, March 26, 2000 References 1999 non-fiction books Popular physics books Oxford University Press books Weidenfeld & Nicolson books Works about time
The End of Time (book)
[ "Physics" ]
1,245
[ "Spacetime", "Physical quantities", "Time", "Works about time" ]
11,668,491
https://en.wikipedia.org/wiki/Groundwater%20model
Groundwater models are computer models of groundwater flow systems, and are used by hydrologists and hydrogeologists. Groundwater models are used to simulate and predict aquifer conditions. Characteristics An unambiguous definition of "groundwater model" is difficult to give, but there are many common characteristics. A groundwater model may be a scale model or an electric model of a groundwater situation or aquifer. Groundwater models are used to represent the natural groundwater flow in the environment. Some groundwater models include (chemical) quality aspects of the groundwater. Such groundwater models try to predict the fate and movement of the chemical in natural, urban or hypothetical scenarios. Groundwater models may be used to predict the effects of hydrological changes (like groundwater pumping or irrigation developments) on the behavior of the aquifer and are often named groundwater simulation models. Groundwater models are used in various water management plans for urban areas. As the computations in mathematical groundwater models are based on groundwater flow equations, which are differential equations that can often be solved only by approximate methods using a numerical analysis, these models are also called mathematical, numerical, or computational groundwater models. The mathematical or the numerical models are usually based on the real physics the groundwater flow follows. These mathematical equations are solved using numerical codes such as MODFLOW, ParFlow, HydroGeoSphere, OpenGeoSys etc. Various types of numerical solutions like the finite difference method and the finite element method are discussed in the article on "Hydrogeology". Inputs For the calculations one needs inputs like: hydrological inputs, operational inputs, external conditions (initial and boundary conditions), and (hydraulic) parameters. The model may have chemical components like water salinity, soil salinity and other quality indicators of water and soil, for which inputs may also be needed. Hydrological inputs The primary coupling between groundwater and hydrological inputs is the unsaturated zone or vadose zone. The soil acts to partition hydrological inputs such as rainfall or snowmelt into surface runoff, soil moisture, evapotranspiration and groundwater recharge. Flows through the unsaturated zone that couple surface water to soil moisture and groundwater can be upward or downward, depending upon the gradient of hydraulic head in the soil, and can be modeled using the numerical solution of the Richards equation (a partial differential equation) or the Finite Water-Content method (an ordinary differential equation formulation), as validated for modeling groundwater and vadose zone interactions. Operational inputs The operational inputs concern human interferences with the water management, like irrigation, drainage, pumping from wells, watertable control, and the operation of retention or infiltration basins, which are often of a hydrological nature. These inputs may also vary in time and space. Many groundwater models are made for the purpose of assessing the effects of hydraulic engineering measures. Boundary and initial conditions Boundary conditions can be related to levels of the water table, artesian pressures, and hydraulic head along the boundaries of the model on the one hand (the head conditions), or to groundwater inflows and outflows along the boundaries of the model on the other hand (the flow conditions). This may also include quality aspects of the water like salinity. 
The initial conditions refer to initial values of elements that may increase or decrease in the course of time inside the model domain, and they cover largely the same phenomena as the boundary conditions do. The initial and boundary conditions may vary from place to place. The boundary conditions may be kept either constant or be made variable in time. Parameters The parameters usually concern the geometry of and distances in the domain to be modelled and those physical properties of the aquifer that are more or less constant with time but that may be variable in space. Important parameters are the topography, thicknesses of soil / rock layers and their horizontal/vertical hydraulic conductivity (permeability for water), aquifer transmissivity and resistance, aquifer porosity and storage coefficient, as well as the capillarity of the unsaturated zone. For more details see the article on hydrogeology. Some parameters may be influenced by changes in the groundwater situation, like the thickness of a soil layer, which may decrease when the water table drops and/or the hydraulic pressure is reduced. This phenomenon is called subsidence. The thickness, in this case, is variable in time and not a parameter proper. Applicability The applicability of a groundwater model to a real situation depends on the accuracy of the input data and the parameters. Determination of these requires considerable study, like collection of hydrological data (rainfall, evapotranspiration, irrigation, drainage) and determination of the parameters mentioned before, including pumping tests. As many parameters are quite variable in space, expert judgment is needed to arrive at representative values. The models can also be used for if-then analysis: if the value of a parameter is A, then what is the result, and if the value of the parameter is B instead, what is the influence? This analysis may be sufficient to obtain a rough impression of the groundwater behavior, but it can also serve to do a sensitivity analysis to answer the question: which factors have a great influence and which have less influence. With such information one may direct the efforts of investigation more to the influential factors. When sufficient data have been assembled, it is possible to determine some of the missing information by calibration. This implies that one assumes a range of values for the unknown or doubtful value of a certain parameter and runs the model repeatedly while comparing the results with known corresponding data. For example, if salinity figures of the groundwater are available and the value of hydraulic conductivity is uncertain, one assumes a range of conductivities and then selects as "true" the value of conductivity that yields salinity results close to the observed values, meaning that the groundwater flow as governed by the hydraulic conductivity is in agreement with the salinity conditions. This procedure is similar to the measurement of the flow in a river or canal by letting very saline water of a known salt concentration drip into the channel and measuring the resulting salt concentration downstream. Dimensions Groundwater models can be one-dimensional, two-dimensional, three-dimensional and semi-three-dimensional. Two- and three-dimensional models can take into account the anisotropy of the aquifer with respect to the hydraulic conductivity, i.e. this property may vary in different directions. 
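Before the different dimensionalities are described below, a minimal sketch may help to show what solving such a model numerically amounts to. The following Python fragment is a generic illustration with assumed parameter values (it is not the scheme of MODFLOW or any other particular package): it solves steady-state, one-dimensional flow in a confined aquifer of constant transmissivity between two fixed-head boundary conditions, with uniform recharge, using the finite difference method mentioned above.

```python
# Minimal sketch of a discretized groundwater model: steady-state, one-dimensional
# flow in a confined aquifer of constant transmissivity T, with uniform recharge R
# and two fixed-head boundary conditions, solved by the finite difference method.
# This is a generic illustration with assumed values, not the scheme of MODFLOW
# or any other particular code.
import numpy as np

L, nx = 1000.0, 51                 # domain length (m) and number of nodes
dx = L / (nx - 1)
T = 100.0                          # transmissivity, m2/day (assumed)
R = 0.001                          # recharge, m/day (assumed)
h_left, h_right = 20.0, 15.0       # fixed-head boundary conditions, m

# Discretized form of T * d2h/dx2 = -R  ->  tridiagonal linear system A h = b
A = np.zeros((nx, nx))
b = np.zeros(nx)
A[0, 0] = A[-1, -1] = 1.0          # Dirichlet (head) boundary conditions
b[0], b[-1] = h_left, h_right
for i in range(1, nx - 1):
    A[i, i - 1] = A[i, i + 1] = T / dx**2
    A[i, i] = -2.0 * T / dx**2
    b[i] = -R

h = np.linalg.solve(A, b)          # hydraulic head at every node, m
q = -T * np.diff(h) / dx           # horizontal discharge per unit width, m2/day
print("head profile (every 10th node):", np.round(h[::10], 2))
print("discharge at the two boundaries:", round(q[0], 3), round(q[-1], 3))
```

In a real two- or three-dimensional code the same idea produces one equation per cell of the grid, giving a large sparse system that is solved iteratively; the cell-by-cell parameters and boundary conditions play the roles described in the preceding sections.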
One-, two- and three-dimensional One-dimensional models can be used for the vertical flow in a system of parallel horizontal layers. Two-dimensional models apply to a vertical plane while it is assumed that the groundwater conditions repeat themselves in other parallel vertical planes (Fig. 4). Spacing equations of subsurface drains and the groundwater energy balance applied to drainage equations are examples of two-dimensional groundwater models. Three-dimensional models like Modflow require discretization of the entire flow domain. To that end the flow region must be subdivided into smaller elements (or cells), in both horizontal and vertical sense. Within each cell the parameters are maintained constant, but they may vary between the cells (Fig. 5). Using numerical solutions of groundwater flow equations, the flow of groundwater may be found as horizontal, vertical and, more often, as intermediate. Semi three-dimensional In semi 3-dimensional models the horizontal flow is described by 2-dimensional flow equations (i. e. in horizontal x and y direction). Vertical flows (in z-direction) are described (a) with a 1-dimensional flow equation, or (b) derived from a water balance of horizontal flows converting the excess of horizontally incoming over the horizontally outgoing groundwater into vertical flow under the assumption that water is incompressible. There are two classes of semi 3-dimensional models: Continuous models or radial models consisting of 2 dimensional submodels in vertical radial planes intersecting each other in one single axis. The flow pattern is repeated in each vertical plane fanning out from the central axis. Discretized models or prismatic models consisting of submodels formed by vertical blocks or prisms for the horizontal flow combined with one or more methods of superposition of the vertical flow. Continuous radial model An example of a non-discretized radial model is the description of groundwater flow moving radially towards a deep well in a network of wells from which water is abstracted. The radial flow passes through a vertical, cylindrical, cross-section representing the hydraulic equipotential of which the surface diminishes in the direction of the axis of intersection of the radial planes where the well is located. Prismatically discretized model Prismatically discretized models like SahysMod have a grid over the land surface only. The 2-dimensional grid network consists of triangles, squares, rectangles or polygons. Hence, the flow domain is subdivided into vertical blocks or prisms. The prisms can be discretized into horizontal layers with different characteristics that may also vary between the prisms. The groundwater flow between neighboring prisms is calculated using 2-dimensional horizontal groundwater flow equations. Vertical flows are found by applying one-dimensional flow equations in a vertical sense, or they can be derived from the water balance: excess of horizontal inflow over horizontal outflow (or vice versa) is translated into vertical flow, as demonstrated in the article Hydrology (agriculture). In semi 3-dimensional models, intermediate flow between horizontal and vertical is not modelled like in truly 3-dimensional models. Yet, like the truly 3-dimensional models, such models do permit the introduction of horizontal and vertical subsurface drainage systems. 
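The water-balance bookkeeping used in prismatically discretized models can be shown in a few lines. The sketch below is purely illustrative, with invented horizontal inflows, outflows and prism properties; it converts the surplus of horizontal flows for a single prism into an equivalent vertical flow per unit area and the corresponding change of the water table, assuming incompressible water as described above.

    # Illustrative water balance for one prism of a semi three-dimensional model.
    # The surplus of horizontal inflow over horizontal outflow is converted into
    # an equivalent vertical flow per unit area; all numbers are invented.
    prism_area = 250.0 * 250.0     # horizontal area of the prism (m2)
    drainable_porosity = 0.08      # effective (drainable) porosity (-)

    horizontal_inflow = 120.0      # m3/day entering from neighbouring prisms
    horizontal_outflow = 95.0      # m3/day leaving to neighbouring prisms
    net_recharge = 5.0             # m3/day from percolation minus capillary rise

    surplus = horizontal_inflow - horizontal_outflow + net_recharge   # m3/day
    vertical_flux = surplus / prism_area          # m/day of equivalent vertical flow
    water_table_change = vertical_flux / drainable_porosity  # m/day rise if stored

    print(f"vertical flux: {vertical_flux:.6f} m/day")
    print(f"water-table change: {water_table_change:.4f} m/day")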
Semiconfined aquifers with a slowly permeable layer overlying the aquifer (the aquitard) can be included in the model by simulating vertical flow through it under the influence of an overpressure in the aquifer proper relative to the level of the watertable inside or above the aquitard. Groundwater modeling software and references Analytic Element Method FEFLOW PORFLOW SVFlux FEHM HydroGeoSphere Integrated Water Flow Model MicroFEM MODFLOW GMS Visual MODFLOW Processing Modflow OpenGeoSys SahysMod, a spatial agro-hydro-salinity-aquifer model US Geological Survey Water Resources Ground Water Software MARTHE from the French Geological Survey (BRGM) ZOOMQ3D Free groundwater modelling course for starters See also Aquifer Groundwater Groundwater flow equation Groundwater energy balance Hydraulic conductivity Hydrogeology Salinity model Watertable control Groundwater drainage by wells Footnotes Scientific simulation software Hydrogeology Hydrology models
Groundwater model
[ "Biology", "Environmental_science" ]
2,150
[ "Hydrology", "Biological models", "Hydrology models", "Environmental modelling", "Hydrogeology" ]
11,668,925
https://en.wikipedia.org/wiki/Eric%20van%20Douwen
Eric Karel van Douwen (April 25, 1946 in Voorburg, South Holland, Netherlands – July 28, 1987 in Athens, Ohio, United States) was a Dutch mathematician specializing in set-theoretic topology. He received his Ph.D. in 1975 from Vrije Universiteit under the supervision of Maarten Maurice and Johannes Aarts, both of whom were in turn students of Johannes de Groot. He began his academic career studying physics, but became dissatisfied partway through the program. His wife helped inspire his choice to switch to mathematics by asking, "Why not mathematics? It's what you work on all the time anyway". He produced the content of his dissertation unsupervised, and seeking better credentials, he transferred to Vrije to defend, a maneuver permitted by the Dutch university rules. References External links Eric van Douwen's papers Includes a short bio. From Scott Williams's pages at SUNY Buffalo 1946 births 1987 deaths 20th-century Dutch mathematicians Topologists People from Voorburg Vrije Universiteit Amsterdam alumni
Eric van Douwen
[ "Mathematics" ]
222
[ "Topologists", "Topology" ]
11,669,402
https://en.wikipedia.org/wiki/Fusarium%20solani
Fusarium solani is a species complex of at least 26 closely related filamentous fungi in the division Ascomycota, family Nectriaceae. It is the anamorph of Nectria haematococca. It is a common soil-inhabiting mold. Fusarium solani is implicated in plant diseases as well as in serious human diseases such as fungal keratitis. History and taxonomy The genus Fusarium was described in 1809 by Link. In the 1930s, Wollenweber and Reinking organized the genus Fusarium into sections, including Martiella and Ventricosum, which were collapsed together by Snyder and Hansen in the 1940s to form a single species, Fusarium solani; one of nine Fusarium species they recognized based on morphological features. The current concept of F. solani is as a species complex consisting of multiple, closely related and morphologically poorly distinguishable, "cryptic" species with characteristic genetic differences. There is a proposed concept for the entire genus, widely subscribed to by specialists, that would retain this complex within it. However, a counterproposal with less support would radically reorganize the genus, raising this complex to the rank of a separate genus, Neocosmospora. The fungus is allied with the sexual species, Nectria haematococca, in the family Nectriaceae (phylum Ascomycota). Growth and morphology Like other species in its genus, Fusarium solani produces colonies that are white and cottony. However, instead of developing a pink or violet centre like most Fusarium species, F. solani becomes blue-green or bluish brown. On the underside, they may be pale, tea-with-milk-brown, or red-brown. However, some clinical isolates have been blue-green or ink-blue on the underside. F. solani colonies are low-floccose, loose, slimy, and sporadic. When grown on potato dextrose agar (PDA), this fungus grows rapidly, but not as rapidly as Fusarium oxysporum. On PDA, F. solani colonies reach a diameter of 64–70 mm in 7 days. F. solani has aerial hyphae that give rise to conidiophores laterally. The conidiophores branch into thin, elongated monophialides that produce conidia. Phialides that produce macroconidia are shorter than those that produce microconidia. The macroconidia produced by F. solani are slightly curved, hyaline, and broad, often aggregating in fascicles. Typically the macroconidia of this species have 3 septa but may have as many as 4–5. Microconidia have thickened basal cells and tapered, rounded apical cells. However, some F. solani isolates have pointed, rather than rounded, macroconidia. Microconidia are oval or cylindrical, hyaline, and smooth. Some microconidia may be curved. Microconidia typically lack septa, but occasionally they may have up to two. Fusarium solani also forms chlamydospores, most commonly under suboptimal growth conditions. These may be produced in pairs or individually. They are abundant, have rough walls, and are 6-11 μm. F. solani chlamydospores are also brown and round. Ecology F. solani is found in soil worldwide. However, a given species within the complex may not be as widespread and may not have the same ecology as others in the complex. In general, as a soil fungus, F. solani is associated with the roots of plants and may be found as deep in the ground as 80 cm. It is frequently isolated in tropical, subtropical, and temperate locations, and less frequently isolated from alpine habitats. The pH of soil does not have a significant effect on F. solani; however, soil fumigation causes an increase in its occurrence. F. solani is typically sensitive to soil fungicides. F.
solani has been found in ponds, rivers, sewage facilities, and water pipes. It has also been found in larvae and adults of the picnic beetle, and it is a symbiont of the ambrosia beetle. Life cycle F. solani can be found in soils worldwide, where its chlamydospores overwinter on plant tissue/seed or as mycelium in the soil. The pathogen enters its host through developing roots, where infection becomes established. After infection, F. solani produces asexual macro- and microconidia, which are dispersed by wind and rain. The pathogen can persist in the soil for a decade, and if left unchecked can cause complete crop loss. Physiology and biochemistry F. solani has 5-13 chromosomes, with a genome size of about 40 Mb. The GC-content of its DNA is 50%. Mycelium of F. solani is rich in the amino acid alanine, as well as δ-aminobutyric acid and a range of fatty acids including palmitic, oleic, and linolenic acids. Fusarium solani requires potassium for growth, and develops a feathery pattern when potassium levels are below 3 mM. In culture the following monosaccharides are utilized (from most to least preferred): mannose, rhamnose and sorbose. This species can decompose cellulose at an optimal pH of 6.5 and temperature of 30 °C. It can also metabolise steroids and lignin, and reduce Fe3+ to Fe2+. Fusarium solani produces mycotoxins such as fusaric acid and naphthoquinones. Other toxins have also been isolated from F. solani, including: Fusarubin Javanicin Marticin Isomarticin - causes chlorosis in citrus Solaniol Neosolaniol T-2 toxin HT-2 toxin Diacetoxyscirpenol Pathology Humans F. solani is largely resistant to typical antifungal agents. The most effective antifungals in treating F. solani infections are amphotericin B and natamycin; however, these agents have only modest success in the treatment of serious systemic infection. As of 2006, there has been increasing evidence that F. solani can act as a causal agent of mycoses in humans. F. solani has been implicated in the following diseases: disseminated disease, osteomyelitis, skin infection, fungemia, and endophthalmitis. Half of human disease involving Fusarium is caused by F. solani, and it is involved in most cases of systemic fusariosis and corneal infections. In immunocompromised patients, F. solani is one of the most common agents in disseminated and cutaneous infections. In the southern USA, fungal keratitis has been most commonly caused by F. solani, as well as F. oxysporum. Cases occur most frequently during harvest season as a result of corneal trauma from dust or plant material. Fungal spores come into contact with the damaged cornea and grow. Without treatment, the hyphae can grow into the cornea and into the anterior chamber of the eye. F. solani is also a major cause of fungal keratitis in HIV-positive patients in Africa. As of 2011, F. solani was implicated in cases of fungal keratitis involving the Bausch and Lomb ReNu contact lens solution. Some strains of F. solani can produce a biofilm on soft contact lenses. However, when lenses are cleaned correctly with solution, these biofilms are prevented. Prevention also includes leaving lenses in polyhexanide biguanide solution overnight to inhibit F. solani. Other risk factors of contact lens-related Fusarium keratitis include use of daily-wear lenses beyond the recommended timeline and overnight wear.
An investigation into a meningitis outbreak in the Mexican city of Durango, with 79 cases since October 2022 and 35 deaths (34 of them women who had undergone cesarean section), revealed contamination of four batches of bupivacaine, used by an anesthesiologist, with Fusarium solani. US news outlets reported, however, that the anesthesiologist had used multi-dose vials of morphine across more than one patient during anesthesia at the four private hospitals involved. As of May 26, 2023, the WHO had been asked to declare a public health emergency. As of June 1, 2023, a multistate outbreak of meningitis due to F. solani was ongoing among patients who underwent epidural anesthesia at two clinics in the Mexican city of Matamoros, Tamaulipas, with a total of 212 residents in 25 US states identified as being at risk, two of whom had died. Other animals F. solani is implicated in cutaneous infections of young turtles as well as infections of turtle egg shells. It has also caused infections in Australian crocodile farms, sea lions and grey seals. F. solani is a facultative pathogen of the castor bean tick. It is also lethal to southern pine beetles. Plants F. solani rots the roots of its host plant. It also causes soft rot of plant tissues by penetrating plant cell walls and destroying the torus. It is implicated, along with Pythium myriotylum, in pod rot of groundnuts. F. solani can cause damping-off, corn rot, and root rot, as well as sudden death syndrome of soybeans (SDS). It is a generalist fungal species and has been known to infect peas, beans, potatoes, and many types of cucurbits. Symptoms include general plant decline, wilting, and large necrotic spots on tap roots. Recently the pathogen has also done serious damage to olive trees throughout the Mediterranean. Virulence of this agent in plants is controlled by the cutinase genes cut1 and cut2. These genes are upregulated by exposure to the plant's cutin monomers. In addition to sudden death syndrome in soybeans, F. solani is known to cause disease in other economically important crops such as avocado, citrus, orchids, passion fruit, peas, peppers, potato, and squash. Management Agriculture The ubiquitous nature of F. solani gives rise to a plethora of independently developed management practices. One registered control method is the use of the bacterial complex Burkholderia cepacia. This bacterial complex has been shown to produce several types of antibiotics (depending on the strain), and can act as a substitute for chemical pesticides. Precautionary methods include planting during warm/dry weather, three or more years of crop rotation with non-host species, and avoiding dense seed planting. Humans In the 2023 Matamoros outbreak of F. solani meningitis, the CDC recommended liposomal amphotericin B and voriconazole; however, disease progressed on this regimen, and patients were trialed on fosmanogepix through a compassionate use authorization. Biotechnology F. solani has been investigated as a biological control for certain plants including leafy spurge, morning glory, striga, gourd, and water hyacinth. References solani Fungi described in 1881 Fungal plant pathogens and diseases Fungus species Animal fungal diseases
Fusarium solani
[ "Biology" ]
2,405
[ "Fungi", "Fungus species" ]
11,669,530
https://en.wikipedia.org/wiki/Nose
A nose is a sensory organ and respiratory structure in vertebrates. It consists of a nasal cavity inside the head, and an external nose on the face. The external nose houses the nostrils, or nares, a pair of tubes providing airflow through the nose for respiration. Where the nostrils pass through the nasal cavity they widen, are known as nasal fossae, and contain turbinates and olfactory mucosa. The nasal cavity also connects to the paranasal sinuses (dead-end air cavities for pressure buffering and humidification). From the nasal cavity, the nostrils continue into the pharynx, a switch track valve connecting the respiratory and digestive systems. In humans, the nose is located centrally on the face and serves as an alternative respiratory passage, especially for infants during suckling. A protruding nose that is completely separate from the mouth is a characteristic found only in therian mammals. It has been theorized that this unique mammalian nose evolved from the anterior part of the upper jaw of their reptile-like ancestors (synapsids). Air treatment Acting as the first interface between the external environment and an animal's delicate internal lungs, a nose conditions incoming air through thermal regulation and filtration during respiration, as well as enabling the sensory perception of smell. Hairs inside the nostrils filter incoming air, as a first line of defense against dust particles, smoke, and other potential obstructions that would otherwise inhibit respiration, and as a kind of filter against airborne illness. In addition to acting as a filter, mucus produced within the nose supplements the body's effort to maintain temperature, as well as contributing moisture to integral components of the respiratory system. Capillary structures of the nose warm and humidify air entering the body; later, this role in retaining moisture enables conditions for alveoli to properly exchange O2 for CO2 (i.e., respiration) within the lungs. During exhalation, the capillaries then aid the recovery of some moisture, again mostly as a function of thermal regulation. Sense of direction The wet nose of dogs is useful for the perception of direction. Sensitive cold receptors in the skin detect where the nose is cooled the most, and this indicates the direction from which a particular smell the animal has just picked up is coming. Structure in air-breathing forms In amphibians and lungfish, the nostrils open into small sacs that, in turn, open into the forward roof of the mouth through the choanae. These sacs contain a small amount of olfactory epithelium, which, in the case of caecilians, also lines a number of neighbouring tentacles. Despite the general similarity in structure to those of amphibians, the nostrils of lungfish are not used in respiration, since these animals breathe through their mouths. Amphibians also have a vomeronasal organ, lined by olfactory epithelium, but, unlike those of amniotes, this is generally a simple sac that, except in salamanders, has little connection with the rest of the nasal system. In reptiles, the nasal chamber is generally larger, with the choanae located much further back in the roof of the mouth. In crocodilians, the chamber is exceptionally long, helping the animal to breathe while partially submerged. The reptilian nasal chamber is divided into three parts: an anterior vestibule, the main olfactory chamber, and a posterior nasopharynx.
The olfactory chamber is lined by olfactory epithelium on its upper surface and possesses a number of turbinates to increase the sensory area. The vomeronasal organ is well-developed in lizards and snakes, in which it no longer connects with the nasal cavity, opening directly into the roof of the mouth. It is smaller in turtles, in which it retains its original nasal connection, and is absent in adult crocodilians. Birds have a similar nose to reptiles, with the nostrils located at the upper rear part of the beak. Since they generally have a poor sense of smell, the olfactory chamber is small, although it does contain three turbinates, which sometimes have a complex structure similar to that of mammals. In many birds, including doves and fowls, the nostrils are covered by a horny protective shield. The vomeronasal organ of birds is either under-developed or altogether absent, depending on the species. The nasal cavities in mammals are both fused into one. Among most species, they are exceptionally large, typically occupying up to half the length of the skull. In some groups, however, including primates, bats, and cetaceans, the nose has been secondarily reduced, and these animals consequently have a relatively poor sense of smell. The nasal cavity of mammals has been enlarged, in part, by the development of a palate cutting off the entire upper surface of the original oral cavity, which consequently becomes part of the nose, leaving the palate as the new roof of the mouth. The enlarged nasal cavity contains complex turbinates forming coiled scroll-like shapes that help to warm the air before it reaches the lungs. The cavity also extends into neighbouring skull bones, forming additional air cavities known as paranasal sinuses. In cetaceans, the nose has been reduced to one or two blowholes, which are the nostrils that have migrated to the top of the head. This adaptation gave cetaceans a more streamlined body shape and the ability to breathe while mostly submerged. Conversely, the elephant's nose has elaborated into a long, muscular, manipulative organ called the trunk. The vomeronasal organ of mammals is generally similar to that of reptiles. In most species, it is located in the floor of the nasal cavity, and opens into the mouth via two nasopalatine ducts running through the palate, but it opens directly into the nose in many rodents. It is, however, lost in bats, and in many primates, including humans. In fish Fish have a relatively good sense of smell. Unlike that of tetrapods, the nose has no connection with the mouth, nor any role in respiration. Instead, it generally consists of a pair of small pouches located behind the nostrils at the front or sides of the head. In many cases, each of the nostrils is divided into two by a fold of skin, allowing water to flow into the nose through one side and out through the other. The pouches are lined by olfactory epithelium, and commonly include a series of internal folds to increase the surface area, often forming an elaborate "olfactory rosette". In some teleosts, the pouches branch off into additional sinus-like cavities, while in coelacanths, they form a series of tubes. In the earliest vertebrates, there was only one nostril and olfactory pouch, and the nasal passage was connected to the hypophysis. The same anatomy is observed in the most primitive living vertebrates, the lampreys and hagfish. 
In gnathostome ancestors, the olfactory apparatus gradually became paired (presumably to allow the direction of smells to be sensed), and the freeing of the midline from the nasal passage allowed the evolution of jaws. See also Nasal bridge Obligate nasal breathing Rhinarium, the wet, naked surface around the nostrils in most mammals, absent in haplorrhine primates such as humans References External links Human head and neck Respiratory system Olfactory system Facial features
Nose
[ "Biology" ]
1,557
[ "Organ systems", "Respiratory system" ]
11,669,883
https://en.wikipedia.org/wiki/Groundwater%20flow
In hydrogeology, groundwater flow is defined as the "part of streamflow that has infiltrated the ground, entered the phreatic zone, and has been (or is at a particular time) discharged into a stream channel or springs; and seepage water." It is governed by the groundwater flow equation. Groundwater is water that is found underground in cracks and spaces in the soil, sand and rocks. The zone where water has filled these spaces is the phreatic zone (also called the saturated zone). Groundwater is stored in, and moves slowly (compared to surface runoff in temperate conditions and watercourses) through, layers or zones of soil, sand and rocks known as aquifers. The rate of groundwater flow depends on the permeability (the size of the spaces in the soil or rocks and how well the spaces are connected) and the hydraulic head (water pressure). In polar regions groundwater flow may be obstructed by permafrost. See also Subsurface flow Groundwater energy balance Baseflow Ecohydrology Groundwater Hydrogeology Catchment hydrology References Hydrology Limnology Aquifers Water streams
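The dependence of the flow rate on permeability and hydraulic head noted in the article above is commonly expressed with Darcy's law. The short Python sketch below is illustrative only; the conductivity, heads, distance and aquifer cross-section are invented values, not data for any real aquifer.

    # Illustrative Darcy's-law estimate of groundwater flow; all values invented.
    hydraulic_conductivity = 5.0    # K, m/day (reflects the permeability of the material)
    head_upstream = 52.0            # hydraulic head at the upstream well (m)
    head_downstream = 50.0          # hydraulic head at the downstream well (m)
    distance = 400.0                # distance between the wells (m)
    cross_section = 1000.0 * 20.0   # aquifer width times saturated thickness (m2)

    hydraulic_gradient = (head_upstream - head_downstream) / distance   # dimensionless
    specific_discharge = hydraulic_conductivity * hydraulic_gradient    # Darcy flux q, m/day
    flow_rate = specific_discharge * cross_section                      # Q, m3/day

    print(f"specific discharge: {specific_discharge:.4f} m/day")
    print(f"flow through the cross-section: {flow_rate:.0f} m3/day")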
Groundwater flow
[ "Chemistry", "Engineering", "Environmental_science" ]
227
[ "Hydrology", "Aquifers", "Environmental engineering" ]
11,670,070
https://en.wikipedia.org/wiki/693%20%28number%29
693 (six hundred [and] ninety-three) is the natural number following 692 and preceding 694. In mathematics 693 has twelve divisors: 1, 3, 7, 9, 11, 21, 33, 63, 77, 99, 231, and 693. Thus, 693 is tied with 315 for the highest number of divisors for any odd natural number below 900. The smallest positive odd integer with more divisors is 945, which has 16 divisors. 945 is also the smallest odd abundant number, having an abundancy index of 1920/945 ≈ 2.03175. 693 appears as the first three digits after the decimal point in the decimal form for the natural logarithm of 2. To 10 digits, this number is 0.6931471805. As a result, if an event has a constant probability of 0.1% of occurring, 693 is the smallest number of trials that must be performed for there to be at least a 50% chance that the event occurs at least once. More generally, for any probability p, the probability that the event occurs at least once in a sample of n items, assuming the items are independent, is given by the following formula: 1 − (1 − p)^n For p = 10^−3 = 0.001, plugging in n = 692 gives, to four decimal places, 0.4996, while n = 693 yields 0.5001. 693 is the lowest common multiple of 7, 9, and 11. Multiplying 693 by 5 gives 3465, the smallest positive integer divisible by 3, 5, 7, 9, and 11. 693 is a palindrome in bases 32, 62, 76, 98, 230, and 692. It is also a palindrome in binary: 1010110101. The reciprocal of 693 has a repeating decimal with a period of six digits: 1/693 = 0.001443001443..., with the block 001443 repeating. 693 is a triangular matchstick number. References Integers
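The divisor, least-common-multiple and probability statements in the article above can be checked directly; the Python snippet below is purely illustrative and simply recomputes those values.

    # Quick numerical check of some statements about 693; illustrative only.
    import math

    n = 693
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    print(len(divisors), divisors)              # 12 divisors: 1, 3, 7, ..., 693

    print(math.lcm(7, 9, 11))                   # 693, the lowest common multiple (Python 3.9+)

    p = 0.001                                   # 0.1% chance per independent trial
    for trials in (692, 693):
        at_least_once = 1 - (1 - p) ** trials   # probability of at least one occurrence
        print(trials, round(at_least_once, 4))  # 0.4996 and 0.5001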
693 (number)
[ "Mathematics" ]
427
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
11,670,238
https://en.wikipedia.org/wiki/744%20%28number%29
744 (seven hundred [and] forty four) is the natural number following 743 and preceding 745. In mathematics 744 is a semiperfect number. It is also an abundant number. The -invariant, an important function in the study of modular forms and Monstrous moonshine, can be written as a Fourier series in which the constant term is 744: where . One consequence of this is that 744 appears in expressions for Ramanujan's constant and other almost integers. See also Moonshine theory References Integers Moonshine theory
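The claims in the article above that 744 is abundant and semiperfect can be verified with a brute-force check; the Python snippet below is illustrative only.

    # Check that 744 is abundant (its proper divisors sum to more than 744) and
    # semiperfect (some subset of its proper divisors sums to exactly 744).
    from itertools import combinations

    n = 744
    proper = [d for d in range(1, n) if n % d == 0]
    print(sum(proper), sum(proper) > n)          # 1176 > 744, so 744 is abundant

    semiperfect = any(sum(c) == n
                      for r in range(1, len(proper) + 1)
                      for c in combinations(proper, r))
    print(semiperfect)                           # True: e.g. 372 + 186 + 93 + 62 + 31 = 744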
744 (number)
[ "Mathematics" ]
112
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
11,670,842
https://en.wikipedia.org/wiki/The%20Daily%20WTF
The Daily WTF (also called Worse Than Failure from February to December 2007) is a humorous blog dedicated to "Curious Perversions in Information Technology". The blog, run by Alex Papadimoulis, "offers living examples of code that invites the exclamation ‘WTF!?'" (What The Fuck!?) and "recounts tales of disastrous development, from project management gone spectacularly bad to inexplicable coding choices." In addition to horror stories, The Daily WTF "serve[s] as [a] repositor[y] of knowledge and discussion forums for inquisitive web designers and developers" and has introduced several anti-patterns, including Softcoding, the Inner-Platform Effect, and IHBLRIA (Invented Here But Let's Reinvent It Anyway). The site also has an associated "Edition Française", a French-language edition headed up by Jocelyn Demoy, launched in March 2008, as well as a Polish edition. History The website was started on 17 May 2004, when Papadimoulis posted an entry entitled "Your Daily Cup of WTF" on his blog as a means of simply complaining about the quality of development at his then-current employer. On his third such post, a reader of his blog suggested that he start a new website dedicated exclusively to such humorous "bad code" postings. A few days later, he registered the TheDailyWTF.com domain name and began posting stories from readers of the site. The content of the site kept evolving, and the body of articles was split into several columns. On 2 November 2006 Papadimoulis started running code samples as articles entitled "Code Snippets of the Day", "CodeSOD" for short. Originally edited by Tim Gallagher, the column was taken over by Derrick Pallas (now the sole editor of CodeSOD) as well as Devin Moore and Mike Nuss on 2 January 2007. On 12 February 2007 Jake Vinson started a new column, "Error'd", based on the old monthly series "Pop-Up Potpourri". The site was renamed to "Worse Than Failure" on 24 February 2007 because "'Daily' and 'What The F*' didn’t quite describe it anymore". Papadimoulis also did not enjoy explaining the meaning of "WTF" to people unfamiliar with the phrase, as it contains profanity. This was not without controversy, and some readers threatened to stop reading the site because of this. The change was reverted on December 12, 2007, after a short and tongue-in-cheek stint as "The Daily Worse Than Failure". Olympiad of Misguided Geeks Olympiad of Misguided Geeks at Worse Than Failure (abbr. OMGWTF) was a programming contest to "solve an incredibly simple problem using the most obscenely convoluted way imaginable". It was started by Alex Papadimoulis because he wanted "to try out something new on [the] site." Contestants for the OMGWTF contest were encouraged to focus on writing "clever code" (code which is unconventional and solves a problem that may or may not be solvable with conventional means) as opposed to "ugly code" (single letter variable names, no subroutines, and so on). The goal of the first (and so far, only) contest was to "implement the logic for a four-function calculator." It ran from 24 April 2007 to 14 May 2007 and received over 350 submissions which were then judged by popular technology bloggers Raymond Chen, Jeremy Zawodny and Joel Spolsky. The winning entry was Stephen Oberholtzer's "Buggy 4-Function Calculator", which, according to judge Joel Spolsky "best exemplifies what real-world code looks like ... [it's] not just bad code, [it's] believable bad code."
In addition to "a High-Resolution JPEG of an Official Olympiad of Misguided Geeks at Worse Than Failure First Prize Trophy," the winner received a 15-inch MacBook Pro. Notable guest appearances In addition to the mostly anonymous stories, several prominent figures have written stories they’ve encountered in their professional experience such as Blake Ross who wrote of the failure of Netscape 7. See also List of satirical magazines List of satirical news websites List of satirical television news programs Inedo References External links The Daily WTF American satirical websites Computing websites Programming contests Computer humour Software bugs Internet properties established in 2004
The Daily WTF
[ "Technology" ]
947
[ "Computing websites" ]
11,671,030
https://en.wikipedia.org/wiki/Walking%20Stewart
John "Walking" Stewart (19 February 1747 – 20 February 1822) was an English philosopher and traveller. Stewart developed a unique system of materialistic pantheism. Travels Known as "Walking" Stewart to his contemporaries for having travelled on foot from Madras, India (where he had worked as a clerk for the East India Company) back to Europe between 1765 and the mid-1790s, Stewart is thought to have walked alone across Persia, Abyssinia, Arabia, and Africa before wandering into every European country as far east as Russia. Over the next three decades Stewart wrote prolifically, publishing nearly thirty philosophical works, including The Opus Maximum (London, 1803) and the long verse-poem The Revelation of Nature (New York, 1795). In 1796, George Washington's portrait-painter, James Sharples, executed a pastel likeness of Stewart for a series of portraits which included such sitters as William Godwin, Joseph Priestley, and Humphry Davy, suggesting the intellectual esteem in which Stewart was once held. After his travels in East India, Stewart became a vegetarian. He was also a teetotaler. Philosophy During his journeys, he developed a unique system of materialist philosophy which combines elements of Spinozistic pantheism with yogic notions of a single indissoluble consciousness. Stewart began to promote his ideas publicly in 1790 with the publication in two volumes of his works Travels over the most interesting parts of the Globe and The Apocalypse of Nature (London, 1790). Historian David Fairer has written that "Stewart expounds what might be described as a panbiomorphic universe (it deserves an entirely new term just for itself), in which human identity is no different in category from a wave, flame, or wind, having an entirely modal existence.". According to Henry Stephens Salt, writing for the Temple Bar in 1893, Stewart repeatedly insisted upon "The immortality of matter and the sympathy that exists between all forms of nature". Stewart declared that if he were about to die, these should be his last words: "The only measure to save mankind and all sensitive life is to educate the judgment of man and not the memory, that he may be able through reflection to calculate the golden mean of good and evil". Retirement After retiring from travelling, Stewart eventually settled in London where he held philosophical soirées and earned a reputation as one of the city's celebrated eccentrics. He was often seen in public wearing a threadbare Armenian military uniform. John Timbs described Stewart as one of London's famous eccentrics. Death On 20 February 1822, the morning after his seventy-fifth birthday, 'Walking' Stewart's body was found in a rented room in Northumberland Place, near present-day Trafalgar Square, London. An empty bottle of laudanum lay beside him. Literary influence After Walking Stewart's travels came to an end around the turn of the nineteenth century, he became close friends with the English essayist and fellow-Londoner Thomas De Quincey, with the radical pamphleteer Thomas Paine, and with the Platonist Thomas Taylor (1758-1835). In 1792, while residing in Paris in the weeks following the September Massacres, he made the acquaintance of the young Romantic poet William Wordsworth, who later concurred with De Quincey in describing Stewart as the most eloquent man on the subject of Nature that either had ever met. Recent scholarship by Kelly Grovier has suggested that Stewart's persona and philosophical writings had a major influence on Wordsworth's poetry. 
References Further reading The life and adventures of the celebrated Walking Stewart: including his travels in the East Indies, Turkey, Germany, & America. By a relative, London, E. Wheatley, 1822. Bertrand Harris Bronson, "Walking Stewart", Essays & Studies, xiv (University of California Press, 1943), pp. 123–55. Gregory Claeys. "'The Only Man of Nature That Ever Appeared in the World'": 'Walking' John Stewart and the Trajectories of Social Radicalism, 1790-1822", Journal of British Studies, 53 (2014), 1–24. Thomas De Quincey, The Works of Thomas De Quincey, ed. Grevel Lindop (London: Pickering & Chatto, 2000-), vol. xi, p. 247. Kelly Grovier, 'Dream Walker: A Wordsworth Mystery Solved', Times Literary Supplement, 16 February 2007 Kelly Grovier, '"Shades of the Prison House": "Walking" Stewart and the making of Wordsworth's "two consciousnesses", Studies in Romanticism, Fall 2005 (Boston University), pp. 341–66. Barry Symonds, 'Stewart, John (1747–1822)’, Oxford Dictionary of National Biography, Oxford University Press, 2004 John Taylor, "Walking Stewart", Record of My Life, pp. 163–68 External links John Stewart's "Sensate Matter" in the Early Republic The Most Unlikely Man to Influence A Generation of Writers: Walking Stewart 1747 births 1822 deaths 18th-century English writers 18th-century English philosophers 19th-century English writers 19th-century English philosophers Drug-related deaths in London English philosophers Materialists Pantheists
Walking Stewart
[ "Physics" ]
1,081
[ "Materialism", "Matter", "Materialists" ]
11,671,112
https://en.wikipedia.org/wiki/Zuclopenthixol
Zuclopenthixol (brand names Cisordinol, Clopixol and others), also known as zuclopentixol, is a medication used to treat schizophrenia and other psychoses. It is classed, pharmacologically, as a typical antipsychotic. Chemically it is a thioxanthene. It is the cis-isomer of clopenthixol (Sordinol, Ciatyl). Clopenthixol was introduced in 1961, while zuclopenthixol was introduced in 1978. Zuclopenthixol is a D1 and D2 antagonist, α1-adrenergic and 5-HT2 antagonist. While it is approved for use in Australia, Canada, Ireland, India, New Zealand, Singapore, South Africa and the UK, it is not approved for use in the United States. Medical uses Available forms Zuclopenthixol is available in three major preparations: As zuclopenthixol decanoate (Clopixol Depot, Cisordinol Depot), it is a long-acting intramuscular injection. Its main use is as a long-acting injection given every two or three weeks to people with schizophrenia who have a poor compliance with medication and suffer frequent relapses of illness. There is some evidence it may be more helpful in managing aggressive behaviour. As zuclopenthixol acetate (Clopixol-Acuphase, Cisordinol-Acutard), it is a shorter-acting intramuscular injection used in the acute sedation of psychotic inpatients. The effect peaks at 48–72 hours providing 2–3 days of sedation. As zuclopenthixol dihydrochloride (Clopixol, Cisordinol), it is a tablet used in the treatment of schizophrenia in those who are compliant with oral medication. It is also used in the treatment of acute bipolar mania. Dosing As a long-acting injection, zuclopenthixol decanoate comes in a 200 mg and 500 mg ampoule. Doses can vary from 50 mg weekly to the maximum licensed dose of 600 mg weekly. In general, the lowest effective dose to prevent relapse is preferred. The interval may be shorter as a patient starts on the medication before extending to 3 weekly intervals subsequently. The dose should be reviewed and reduced if side effects occur, though in the short-term an anticholinergic medication benztropine may be helpful for tremor and stiffness, while diazepam may be helpful for akathisia. 100 mg of zuclopenthixol decanoate is roughly equivalent to 20 mg of flupentixol decanoate or 12.5 mg of fluphenazine decanoate. In oral form zuclopenthixol is available in 2, 10, 25 and 40 mg tablets, with a dose range of 20–60 mg daily. Side effects Chronic administration of zuclopenthixol (30 mg/kg/day for two years) in rats resulted in small, but significant, increases in the incidence of thyroid parafollicular carcinomas and, in females, of mammary adenocarcinomas and of pancreatic islet cell adenomas and carcinomas. An increase in the incidence of mammary adenocarcinomas is a common finding for D2 antagonists which increase prolactin secretion when administered to rats. An increase in the incidence of pancreatic islet cell tumours has been observed for some other D2 antagonists. The physiological differences between rats and humans with regard to prolactin make the clinical significance of these findings unclear. Withdrawal syndrome: Abrupt cessation of therapy may cause acute withdrawal symptoms (eg, nausea, vomiting, or insomnia). Symptoms usually begin in 1 to 4 days of withdrawal and subside within 1 to 2 weeks. Other permanent side effects are similar to many other typical antipsychotics, namely extrapyramidal symptoms as a result of dopamine blockade in subcortical areas of the brain. 
This may result in symptoms similar to those seen in Parkinson's disease and include a restlessness and inability to sit still known as akathisia, a slow tremor and stiffness of the limbs. Zuclopenthixol is thought to be more sedating than the related flupentixol, though possibly less likely to induce extrapyramidal symptoms than other typical depots. As with other dopamine antagonists, zuclopenthixol may sometimes elevate prolactin levels; this may occasionally result in amenorrhoea or galactorrhoea in severe cases. Neuroleptic malignant syndrome is a rare but potentially fatal side effect. Any unexpected deterioration in mental state with confusion and muscle stiffness should be seen by a physician. Zuclopenthixol decanoate induces a transient dose-dependent sedation. However, if the patient is switched to maintenance treatment with zuclopenthixol decanoate from oral zuclopenthixol or from i.m. zuclopenthixol acetate the sedation will be no problem. Tolerance to the unspecific sedative effect develops rapidly. Very common Adverse Effects (≥10% incidence) Hypersalivation Somnolence Akathisia Hyperkinesia Hypokinesia Common (1–10%) Tachycardia Heart palpitations Vertigo Accommodation disorder Abnormal vision Salivary hypersecretion Constipation Vomiting Dyspepsia Diarrhoea Asthenia Fatigue Malaise Pain (at the injection site) Increased appetite Weight gain Myalgia Tremor Dystonia Hypertonia Dizziness Headache Paraesthesia Disturbance in attention Amnesia Abnormal gait Insomnia Depression Anxiety Abnormal dreams Agitation Decreased libido Nasal congestion Dyspnoea Hyperhidrosis Pruritus Uncommon (0.1–1%) Hyperacusis Tinnitus Mydriasis Abdominal pain Nausea Flatulence Thirst Injection site reaction Hypothermia Pyrexia Abnormal liver function tests Decreased appetite Weight loss Muscle rigidity Trismus Torticollis Tardive dyskinesia Hyperreflexia Dyskinesia Parkinsonism Syncope Ataxia Speech disorder Hypotonia Convulsion Migraine Apathy Nightmares Libido increased Confused state Ejaculation failure Erectile dysfunction Female orgasmic disorder Vulvovaginal Dryness Rash Photosensitivity Pigmentation disorder Seborrhoea Dermatitis Purpura Hypotension Hot flush Rare (0.01–0.1%) Thrombocytopenia Neutropenia Leukopenia Agranulocytosis QT prolongation Hyperprolactinaemia Hypersensitivity Anaphylactic reaction Hyperglycaemia Glucose tolerance impaired Hyperlipidaemia Gynaecomastia Galactorrhoea Amenorrhoea Priapism Withdrawal symptoms Very rare (<0.01%) Cholestatic hepatitis Jaundice Neuroleptic malignant syndrome Venous thromboembolism Pharmacology Pharmacodynamics Zuclopenthixol antagonises both dopamine D1 and D2 receptors, α1-adrenoceptors and 5-HT2 receptors with a high affinity, but has no affinity for muscarinic acetylcholine receptors. It weakly antagonises the histamine (H1) receptor but has no α2-adrenoceptor blocking activity . Evidence from in vitro work and clinical sources (i.e. therapeutic drug monitoring databases) suggests that both CYP2D6 and CYP3A4 play important roles in zuclopenthixol metabolism. Pharmacokinetics History Zuclopenthixol was introduced by Lundbeck in 1978. References Typical antipsychotics Alcohols Chloroarenes CYP2D6 inhibitors Piperazines Thioxanthene antipsychotics Enantiopure drugs
Zuclopenthixol
[ "Chemistry" ]
1,698
[ "Stereochemistry", "Enantiopure drugs" ]
4,045,546
https://en.wikipedia.org/wiki/Triphone
In linguistics, a triphone is a sequence of three consecutive phonemes. Triphones are useful in models of natural language processing where they are used to establish the various contexts in which a phoneme can occur in a particular natural language. See also Diphone References Natural language processing Phonology
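As an illustration of how triphones are extracted in practice, for example when building context-dependent acoustic models, the following Python sketch slides a three-phoneme window over a phoneme sequence. The transcription used is a simplified, made-up example rather than output from any particular lexicon.

    # Illustrative extraction of triphones from a phoneme sequence.
    def triphones(phonemes):
        """Return every sequence of three consecutive phonemes."""
        return [tuple(phonemes[i:i + 3]) for i in range(len(phonemes) - 2)]

    word = ["s", "eh", "v", "ah", "n"]   # rough transcription of "seven"
    print(triphones(word))
    # [('s', 'eh', 'v'), ('eh', 'v', 'ah'), ('v', 'ah', 'n')]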
Triphone
[ "Technology" ]
61
[ "Natural language processing", "Natural language and computing" ]
4,046,265
https://en.wikipedia.org/wiki/Smart%20host
A smart host or smarthost is an email server via which third parties can send emails and have them forwarded on to the email recipients' email servers. Smarthosts were originally open mail relays, but most providers now require authentication from the sender, to verify that the sender is authorised – for example, an ISP might run a smarthost for their paying customers only. Use in spam control efforts In an effort to reduce email spam originating from their customers' IP addresses, some internet service providers (ISPs) will not allow their customers to communicate directly with recipient mailservers via the default SMTP port number 25. Instead, they will often set up a smarthost to which their customers can direct all their outward mail – or customers could alternatively use one of the commercial smarthost services. Sometimes, even if an outward port 25 is not blocked, an individual or organisation's normal external IP address has difficulty getting SMTP mail accepted. This could be because that IP was assigned in the past to someone who sent spam from it, or because it appears to be a dynamic address such as those typically used for home connections. Whatever the reason for the "poor reputation" or "blacklisting", they can choose to redirect all their email out to an external smarthost for delivery. Reducing complexity When a host runs its own local mail server, a smart host is often used to transmit all mail to other systems through a central mail server. This is used to ease the management of a single mail server with aliases, security, and Internet access rather than maintaining numerous local mail servers. See also Mail submission agent References Email Internet terminology
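The arrangement described in the article above, submitting outgoing mail to a smarthost with authentication rather than delivering directly over port 25, can be sketched in a few lines of Python using the standard library. The host name, addresses and credentials below are placeholders, not real services.

    # Illustrative only: submitting a message through a smarthost with
    # authentication, instead of delivering directly to the recipient's server.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "user@example.org"
    msg["To"] = "recipient@example.net"
    msg["Subject"] = "Test via smarthost"
    msg.set_content("Sent through the provider's relay rather than directly.")

    # Port 587 (submission) with STARTTLS and a login is a common arrangement
    # when the provider requires the sender to authenticate.
    with smtplib.SMTP("smarthost.example.org", 587) as smtp:
        smtp.starttls()
        smtp.login("user@example.org", "app-password")   # placeholder credentials
        smtp.send_message(msg)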
Smart host
[ "Technology" ]
344
[ "Computing terminology", "Internet terminology" ]
4,046,303
https://en.wikipedia.org/wiki/Binch%C5%8Dtan
Binchō-tan, also called white charcoal or binchō-zumi, is a type of high-quality charcoal traditionally used in Japanese cooking. Its use dates back to the Edo period, when, during the Genroku era, a craftsman named Bichū-ya Chōzaemon began to produce it in Tanabe, Wakayama. The typical raw material used to make binchō-tan in Japan is oak, specifically ubame oak (Quercus phillyraeoides), now the official tree of Wakayama Prefecture. Wakayama continues to be a major producer of high-quality charcoal, with the town of Minabe, Wakayama, producing more binchō-tan than any other town in Japan. Binchō-tan produced in Wakayama is referred to as Kishū binchō-tan, Kishū being the old name of Wakayama. White charcoal is made by pyrolysing wood in a kiln at a relatively low temperature for about 120 hours, then sharply raising the temperature towards the end of the process. Once carbonised, the material is taken out and covered to cure in a damp mixture of earth, sand, and ash. Binchō-tan is a type of hardwood charcoal which takes the natural shape of the wood that was used to make it. It is also harder than black charcoal, ringing with a metallic sound when struck. Due to its physical structure, binchō-tan takes on a whiter or even metallic appearance. Apart from being used for cooking, it has other benefits, such as absorption of odors. References External links 紀州備長炭 —Making of Kishū Binchōtan by Wakayama Pref. 炭琴 —Tankin ("charcoal-xylophone") "Charcoal Adds to the Good Life" – an article from 2001 touting the benefits of black and white charcoal, the latter including binchōtan Allotropes of carbon Charcoal Edo period Fuels Japanese cuisine terms
Binchōtan
[ "Chemistry" ]
387
[ "Allotropes of carbon", "Allotropes", "Fuels", "Chemical energy sources" ]
4,046,430
https://en.wikipedia.org/wiki/Minidish
The Minidish is the tradename used for the small-sized satellite dish used by Freesat and Sky. The term has entered the vocabulary in the UK and Ireland as a generic term for a satellite dish, particularly small ones. The Minidish is an oval, mesh satellite dish capable of reflecting signals broadcast in the upper X band and the Ku band. Two sizes exist: "Zone 1" dishes are issued in southern and northern England and parts of Scotland; they were 43 cm vertically prior to 2009, while the newer mark 4 dishes are approximately 50 cm. "Zone 2" dishes, which are 57 cm vertically, are issued elsewhere (Wales, Northern Ireland, the Republic of Ireland, Scotland and northern England). The Minidish uses a non-standard connector for the LNB, consisting of a small peg (rather than the standard 40 mm collar) on dishes prior to the mark 4 design introduced in 2009. This enforces the use of Sky-approved equipment, but also ensures that a suitable LNB is used. Due to the shape of the dish, an LNB with an oval feedhorn is required to get full signal. References Satellite television Radio electronics Sky Group Brands that became generic
Minidish
[ "Engineering" ]
239
[ "Radio electronics" ]
4,046,461
https://en.wikipedia.org/wiki/Records%20manager
A records manager is the professional responsible for records management in an organization. This role has evolved over time and takes many forms, with many related areas of knowledge required for professional competency. Records managers are found in all types of organizations, including business, government, and nonprofit sectors. Generally, dedicated (i.e., full-time) records managers are found in larger organizations. History Records management evolved from the development of archives in the United States government following World War II. With the explosion of paper records during that war, better systems of management were needed to retain and make the records available for current use. Records managers became specialists that bridged the gap between file clerks and archivists. The profession expanded into the corporate world in the 1950s. Competencies The records manager generally provides expertise in records management, constituting knowledge areas of: Records creation and use Active and inactive records systems Records appraisal, retention and disposition Vital records identification and protection Records and information management technology The Records Manager may also have subject matter expertise in: Law Privacy and data protection Information technology and electronic storage systems General business principles Specialization Records managers are present in virtually every type of organization. The role can range from one of a file clerk to the chief information officer of an organization. Records managers may focus on operational responsibilities, design strategies and policies for maintaining and utilizing information, or combine elements of those jobs. The health care industry has a very specialized view of records management. Health information management involves not only maintaining patient files, but also coding the files to reflect the diagnoses of the conditions suffered by patients. The American Health Information Management Association (AHIMA) is the professional organization in this space. Records managers in the pharmaceutical industry are responsible for maintaining laboratory research, clinical trials data, and manufacturing information. Records managers in law firms often have responsibility for managing conflicts, as well as managing client matter files. In the United States, records managers in nuclear power plants specialize in compliance with the Nuclear Regulatory Commission rules regarding the handling of nuclear materials. NIRMA is their local professional organization. Education and certification Records managers may have degrees in a wide variety of subjects in all disciplines, and few universities offer formal records management education. Graduate-level programs are often specialties within Library Science and Archival Science programs. Graduate-level Public History programs generally offer coursework in archives and records management. A recent addition to records management education in the United States is the MARA – the Master of Archives and Records Administration degree program — offered by the San Jose State University School of Information. Professional and trade organizations offer continuing education conferences, seminars, and workshops. Governmental archives and records management departments such as the National Archives and Records Administration offer educational programs of interest to government records managers. 
A professional certification, the Certified Records Manager credential is offered by the Institute of Certified Records Managers. Other organizations may offer certificates reflecting completion of a course of studies, attendance at a seminar, or passing a subject matter test. See also Records management Records management taxonomy Institute of Certified Records Managers References Information management Records management
Records manager
[ "Technology" ]
611
[ "Information systems", "Information management" ]
4,046,826
https://en.wikipedia.org/wiki/Indolamines
Indolamines are a family of neurotransmitters that share a common molecular structure. Indolamines are a classification of monoamine neurotransmitter, along with catecholamines and ethylamine derivatives. A common example of an indolamine is the tryptophan derivative serotonin, a neurotransmitter involved in mood and sleep. Another example of an indolamine is melatonin. In biochemistry, indolamines are substituted indole compounds that contain an amino group. Examples of indolamines include the lysergamides. Synthesis Indolamines are biologically synthesized from the essential amino acid tryptophan. Tryptophan is synthesized into serotonin through the addition of a hydroxyl group by the enzyme tryptophan hydroxylase and the subsequent removal of the carboxyl group by the enzyme 5-HTP decarboxylase. See also Indole Tryptamine References Neurotransmitters Indoles Amines
Indolamines
[ "Chemistry" ]
213
[ "Neurotransmitters", "Functional groups", "Amines", "Neurochemistry", "Bases (chemistry)" ]
4,046,891
https://en.wikipedia.org/wiki/Truncated%205-cell
In geometry, a truncated 5-cell is a uniform 4-polytope (4-dimensional uniform polytope) formed as the truncation of the regular 5-cell. There are two degrees of truncations, including a bitruncation. Truncated 5-cell The truncated 5-cell, truncated pentachoron or truncated 4-simplex is bounded by 10 cells: 5 tetrahedra, and 5 truncated tetrahedra. Each vertex is surrounded by 3 truncated tetrahedra and one tetrahedron; the vertex figure is an elongated tetrahedron. Construction The truncated 5-cell may be constructed from the 5-cell by truncating its vertices at 1/3 of its edge length. This transforms the 5 tetrahedral cells into truncated tetrahedra, and introduces 5 new tetrahedral cells positioned near the original vertices. Structure The truncated tetrahedra are joined to each other at their hexagonal faces, and to the tetrahedra at their triangular faces. Seen in a configuration matrix, all incidence counts between elements are shown. The diagonal f-vector numbers are derived through the Wythoff construction, dividing the full group order of a subgroup order by removing one mirror at a time. Projections The truncated tetrahedron-first Schlegel diagram projection of the truncated 5-cell into 3-dimensional space has the following structure: The projection envelope is a truncated tetrahedron. One of the truncated tetrahedral cells project onto the entire envelope. One of the tetrahedral cells project onto a tetrahedron lying at the center of the envelope. Four flattened tetrahedra are joined to the triangular faces of the envelope, and connected to the central tetrahedron via 4 radial edges. These are the images of the remaining 4 tetrahedral cells. Between the central tetrahedron and the 4 hexagonal faces of the envelope are 4 irregular truncated tetrahedral volumes, which are the images of the 4 remaining truncated tetrahedral cells. This layout of cells in projection is analogous to the layout of faces in the face-first projection of the truncated tetrahedron into 2-dimensional space. The truncated 5-cell is the 4-dimensional analogue of the truncated tetrahedron. Images Alternate names Truncated pentatope Truncated 4-simplex Truncated pentachoron (Acronym: tip) (Jonathan Bowers) Coordinates The Cartesian coordinates for the vertices of an origin-centered truncated 5-cell having edge length 2 are: More simply, the vertices of the truncated 5-cell can be constructed on a hyperplane in 5-space as permutations of (0,0,0,1,2) or of (0,1,2,2,2). These coordinates come from positive orthant facets of the truncated pentacross and bitruncated penteract respectively. Related polytopes The convex hull of the truncated 5-cell and its dual (assuming that they are congruent) is a nonuniform polychoron composed of 60 cells: 10 tetrahedra, 20 octahedra (as triangular antiprisms), 30 tetrahedra (as tetragonal disphenoids), and 40 vertices. Its vertex figure is a hexakis triangular cupola. Vertex figure Bitruncated 5-cell The bitruncated 5-cell (also called a bitruncated pentachoron, decachoron and 10-cell) is a 4-dimensional polytope, or 4-polytope, composed of 10 cells in the shape of truncated tetrahedra. Topologically, under its highest symmetry, [[3,3,3]], there is only one geometrical form, containing 10 uniform truncated tetrahedra. The hexagons are always regular because of the polychoron's inversion symmetry, of which the regular hexagon is the only such case among ditrigons (an isogonal hexagon with 3-fold symmetry). E. L. Elte identified it in 1912 as a semiregular polytope. 
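The permutation construction given under Coordinates above lends itself to a quick numerical check. The Python sketch below is illustrative and not part of the original article: it enumerates the distinct permutations of (0,0,0,1,2), confirming the 20 vertices of the truncated 5-cell, and counts the vertex pairs at the minimum mutual distance, recovering its 40 edges.

    # Numerical check of the permutation coordinates of the truncated 5-cell:
    # the distinct permutations of (0,0,0,1,2) give its 20 vertices, and the
    # pairs at minimal mutual distance give its 40 edges.
    from itertools import permutations, combinations

    vertices = sorted(set(permutations((0, 0, 0, 1, 2))))
    print(len(vertices))                          # 20 vertices

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    d2 = [dist2(a, b) for a, b in combinations(vertices, 2)]
    edge_length2 = min(d2)                        # squared edge length (2 in this construction)
    print(edge_length2, d2.count(edge_length2))   # 2 and 40 edges

The same approach applied to the permutations of (0,0,1,2,2) would recover the 30 vertices of the bitruncated 5-cell discussed next.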
Related polytopes
The convex hull of the truncated 5-cell and its dual (assuming that they are congruent) is a nonuniform polychoron composed of 60 cells: 10 tetrahedra, 20 octahedra (as triangular antiprisms), 30 tetrahedra (as tetragonal disphenoids), and 40 vertices. Its vertex figure is a hexakis triangular cupola.

Vertex figure

Bitruncated 5-cell
The bitruncated 5-cell (also called a bitruncated pentachoron, decachoron and 10-cell) is a 4-dimensional polytope, or 4-polytope, composed of 10 cells in the shape of truncated tetrahedra. Topologically, under its highest symmetry, [[3,3,3]], there is only one geometrical form, containing 10 uniform truncated tetrahedra. The hexagons are always regular because of the polychoron's inversion symmetry: among ditrigons (isogonal hexagons with 3-fold symmetry), the regular hexagon is the only one with that symmetry. E. L. Elte identified it in 1912 as a semiregular polytope.

Each hexagonal face of the truncated tetrahedra is joined in complementary orientation to the neighboring truncated tetrahedron. Each edge is shared by two hexagons and one triangle. Each vertex is surrounded by 4 truncated tetrahedral cells in a tetragonal disphenoid vertex figure.

The bitruncated 5-cell is the intersection of two pentachora in dual configuration. As such, it is also the intersection of a penteract with the hyperplane that bisects the penteract's long diagonal orthogonally. In this sense it is a 4-dimensional analog of the regular octahedron (intersection of regular tetrahedra in dual configuration / tesseract bisection on the long diagonal) and the regular hexagon (equilateral triangles / cube). The 5-dimensional analog is the birectified 5-simplex, and the n-dimensional analog is the polytope whose Coxeter–Dynkin diagram is linear with rings on the middle one or two nodes.

The bitruncated 5-cell is one of the two non-regular convex uniform 4-polytopes which are cell-transitive. The other is the bitruncated 24-cell, which is composed of 48 truncated cubes.

Symmetry
This 4-polytope has a higher extended pentachoric symmetry (2×A4, [[3,3,3]]), doubled to order 240, because the element corresponding to any element of the underlying 5-cell can be exchanged with one of those corresponding to an element of its dual.

Alternative names
Bitruncated 5-cell (Norman W. Johnson)
10-cell as a cell-transitive 4-polytope
Bitruncated pentachoron
Bitruncated pentatope
Bitruncated 4-simplex
Decachoron (Acronym: deca) (Jonathan Bowers)

Images

Coordinates
The Cartesian coordinates of an origin-centered bitruncated 5-cell having edge length 2 can be written explicitly; a simpler description again uses five dimensions: the vertices of the bitruncated 5-cell can be constructed on a hyperplane in 5-space as permutations of (0,0,1,2,2). These represent positive orthant facets of the bitruncated pentacross. Another 5-space construction, centered on the origin, consists of all 30 permutations of (-1,-1,0,1,1); this centered construction is checked numerically in the short sketch at the end of this article.

Related polytopes
The bitruncated 5-cell can be seen as the intersection of two regular 5-cells in dual positions.

Configuration
In a configuration matrix, all incidence counts between elements are shown. The diagonal f-vector numbers are derived through the Wythoff construction, dividing the full group order by the order of the subgroup obtained by removing one mirror at a time.

Related regular skew polyhedron
The regular skew polyhedron, {6,4|3}, exists in 4-space with 4 hexagonal faces around each vertex, in a zig-zagging nonplanar vertex figure. These hexagonal faces can be seen on the bitruncated 5-cell, using all 60 edges and 30 vertices. The 20 triangular faces of the bitruncated 5-cell can be seen as removed. The dual regular skew polyhedron, {4,6|3}, is similarly related to the square faces of the runcinated 5-cell.

Disphenoidal 30-cell
The disphenoidal 30-cell is the dual of the bitruncated 5-cell. It is a 4-dimensional polytope (or polychoron) derived from the 5-cell. It is the convex hull of two 5-cells in opposite orientations. Being the dual of a uniform polychoron, it is cell-transitive, consisting of 30 congruent tetragonal disphenoids. In addition, it is vertex-transitive under the group Aut(A4).

Related polytopes
These polytopes are from a set of 9 uniform 4-polytopes constructed from the [3,3,3] Coxeter group.

References
H.S.M. Coxeter:
H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995:
(Paper 22) H.S.M. Coxeter, Regular and Semi-Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10]
(Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591]
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999, p. 88 (Chapter 5: Regular Skew Polyhedra in three and four dimensions and their topological analogues, Proceedings of the London Mathematics Society, Ser. 2, Vol 43, 1937.)
Coxeter, H. S. M. Regular Skew Polyhedra in Three and Four Dimensions. Proc. London Math. Soc. 43, 33-62, 1937.
Norman Johnson Uniform Polytopes, Manuscript (1991)
N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. (1966)
x3x3o3o - tip, o3x3x3o - deca

Specific Uniform 4-polytopes
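As a complement to the earlier sketch, the following lines (again an added illustration, not part of the original article) verify the origin-centered construction of the bitruncated 5-cell referred to in its Coordinates section: the 30 permutations of (-1,-1,0,1,1) lie on a hyperplane through the origin, are closed under central inversion (consistent with the inversion symmetry mentioned above), and coincide with the (0,0,1,2,2) vertices shifted by their centroid.

from itertools import permutations

centered = set(permutations((-1, -1, 0, 1, 1)))
shifted = {tuple(c - 1 for c in v) for v in set(permutations((0, 0, 1, 2, 2)))}

print(len(centered))                                            # 30 vertices
print(all(sum(v) == 0 for v in centered))                       # True: all lie on x1+...+x5 = 0
print(all(tuple(-c for c in v) in centered for v in centered))  # True: closed under central inversion
print(centered == shifted)                                      # True: equals (0,0,1,2,2) minus its centroid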
Truncated 5-cell
[ "Physics" ]
2,062
[ "Uniform 4-polytopes", "Uniform polytopes", "Symmetry" ]
4,046,924
https://en.wikipedia.org/wiki/Flock%20of%20Dodos
Flock of Dodos: The Evolution-Intelligent Design Circus is a documentary film by American marine biologist and filmmaker Randy Olson. It highlights the debate between proponents of the concept of intelligent design and the scientific evidence and consensus that support evolution, as well as the potential consequences of rejecting science. The documentary was first screened publicly on February 2, 2006, in Kansas, where much of the public controversy over intelligent design began and where the documentary's discussion starts. Other public screenings followed at universities, including Harvard and Stony Brook University, marking the celebration of Charles Darwin's birthday.

Synopsis
Flock of Dodos examines the disagreements that proponents of intelligent design have with the scientific consensus position of evolution. Olson also expressed concern about the potential to distrust and reject science in general.

The evolutionarily famous dodo (Raphus cucullatus) is a now-extinct bird that lived on the island of Mauritius. Due to its lack of fear of humans and inability to fly, the dodo was easy prey, and thus became known for its apparent stupidity. The film attempts to determine who the real "dodos" are in a constantly evolving world: the scientists who are failing to effectively promote evolution as a scientifically accepted fact, the intelligent design advocates, or the American public who are fooled by the "salesmanship" of evolution critics.

The film gives equal air time to both sides of the argument, including intelligent design proponent Michael Behe and several of his colleagues. While Randy Olson ultimately sides with the scientists who accept evolution, the scientists are criticized for their elitism and inability to present science effectively to the general public, which ultimately contributes to the spread of misconceptions.

The film begins by going over the history of intelligent design thought from Plato and Paley to the present-day incarnation promoted by the Discovery Institute. Olson mixes in humorous cartoons of squawking dodos with commentary from his mother and interviews with proponents on both sides of the intelligent design/evolution debate. On the intelligent design side, Olson interviews Behe, John Calvert (a founder of the Intelligent Design Network) and a member of the Kansas school board. Olson also unsuccessfully tries to interview Kansas Board of Education member Connie Morris (associated with the Kansas evolution hearings) and members of the Discovery Institute.

Release
The documentary premiered at the Tribeca Film Festival in New York in April 2006, and has since played at film festivals all over the U.S. and abroad. The documentary was shown in museums and universities as part of a "Dodos Darwin Day" event (celebrating Charles Darwin's birthday) on or around February 12, 2007. Flock of Dodos: The Evolution-Intelligent Design Circus is currently (as of January 2008) in rotation on Showtime in the US and available on DVD. The documentary was praised by the journal Nature and a variety of other publications.

In 2007, Olson released a collection of "pulled punches", unreleased material that he had chosen to leave out because it reflected poorly on intelligent design supporters.

Discovery Institute response
Olson invited the Discovery Institute, a hub of the intelligent design movement, to appear in the film.
Instead, the institute responded by creating a website, Hoax of Dodos, characterizing the documentary as "revisionist history" and a "hoax" filled with inaccuracies and misrepresentations. Biologist PZ Myers rejected what he called the institute's "bogus complaint that Olson was lying in the movie" about Ernst Haeckel's drawings of embryos. Myers explained that the drawings have not been used in recent biology textbooks "other than a mention that once upon a time Haeckel came up with this idea of ontogeny recapitulating phylogeny." Myers and other critics of intelligent design have shown that each of these textbooks treats Haeckel's theory of ontogeny recapitulating phylogeny as an example of an outdated exaggeration. Myers notes in his rebuttal of the criticism from design proponents: "I would add that progress in evolutionary biology has led to better explanations of the phenomenon that vertebrate embryos go through a period of similarity: it lies in conserved genetic circuitry that lays down the body plan."

In early 2007, in response to Olson's claim that "the Discovery Institute is truly the big fish in this picture, with an annual budget of around $5 million," the Institute responded that its budget is only $4.2 million, and that it spends close to $1 million per year funding intelligent design.

References

External links
Flock of Dodos
Science Friday Commentary (Feb. 23, 07)
Profile of "Flock of Dodos" director Randy Olson by Eric Sorensen in (2007) Forward thinkers: People to watch in 2007. Conservation, 8(1).
PZ Myers responding to the criticisms about Haeckel's drawings

2006 films 2006 documentary films Works about creationism Intelligent design American documentary films Documentary films about education in the United States Documentary films about science 2000s English-language films 2000s American films English-language documentary films
Flock of Dodos
[ "Engineering" ]
1,028
[ "Intelligent design", "Design" ]
4,047,242
https://en.wikipedia.org/wiki/Indexing%20Service
Indexing Service (originally called Index Server) was a Windows service that maintained an index of most of the files on a computer to improve searching performance on PCs and corporate computer networks. It updated its indexes without user intervention. In Windows Vista it was replaced by the newer Windows Search indexer. The IFilter plugins that extend indexing to additional file formats and protocols are compatible with both the legacy Indexing Service and the newer Windows Search indexer.

History
Indexing Service was a desktop search service included with the Windows NT 4.0 Option Pack as well as Windows 2000 and later. The first incarnation of the indexing service shipped in August 1996 as a content search system for Microsoft's web server software, Internet Information Services. Its origins, however, date further back to Microsoft's Cairo operating system project, where the component served as the Content Indexer for the Object File System. Cairo was eventually shelved, but the content indexing capabilities went on to be included as a standard component of later Windows desktop and server operating systems, starting with Windows 2000, which includes Indexing Service 3.0.

In Windows Vista, the content indexer was replaced with the Windows Search indexer, which was enabled by default. Indexing Service is still included with Windows Server 2008 but is not installed or running by default. Indexing Service was deprecated in Windows 7 and Windows Server 2008 R2, and it has been removed from Windows 8.

Search interfaces
Comprehensive searching is available only after the initial building of the index, which can take hours or even days, depending on the size of the specified directories, the speed of the hard drive, user activity, indexer settings and other factors. Searching with Indexing Service also works on UNC paths and mapped network drives if the sharing server indexes the appropriate directory and is aware that it is shared.

Once the Indexing Service has been turned on and has built its index, it can be searched in three ways. The search option available from the Start menu on the Windows taskbar will use the Indexing Service if it is enabled and will even accept complex queries. Queries can also be performed using the Indexing Service query form in the Computer Management snap-in of the Microsoft Management Console, or using third-party applications such as 'Aim at File' or 'Grokker Desktop'.

References

Windows communication and services Desktop search engines Information retrieval systems Windows components
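Beyond these interfaces, the catalog could also be queried programmatically through the service's MSIDXS OLE DB provider using an SQL-like dialect. The sketch below is an illustrative example of that approach, not something documented in this article: it assumes the pywin32 package, a Windows version that still ships the legacy service with its default "System" catalog, and that the property names and CONTAINS syntax shown match the service's query language; any of these details may differ in practice.

# Rough sketch: full-text query against the legacy Indexing Service via ADO.
# Assumptions (may vary per machine): the MSIDXS OLE DB provider is present,
# the default catalog is named "System", and FileName/Size/Path are valid
# property names in the service's SQL dialect.
import win32com.client

conn = win32com.client.Dispatch("ADODB.Connection")
conn.Open("Provider=MSIDXS;Data Source=System;")

sql = "SELECT FileName, Size, Path FROM SCOPE() WHERE CONTAINS(Contents, '\"budget\"')"
rs = win32com.client.Dispatch("ADODB.Recordset")
rs.Open(sql, conn)

while not rs.EOF:
    print(rs.Fields("Path").Value, rs.Fields("Size").Value)
    rs.MoveNext()

rs.Close()
conn.Close()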
Indexing Service
[ "Technology" ]
492
[ "Information technology", "Information retrieval systems" ]