| id (int64) | url (string) | text (string) | source (string) | categories (sequence) | token_count (int64) | subcategories (sequence) |
|---|---|---|---|---|---|---|
15,825,071
|
https://en.wikipedia.org/wiki/DIO2
|
Type II iodothyronine deiodinase (iodothyronine 5'-deiodinase, iodothyronine 5'-monodeiodinase) is an enzyme that in humans is encoded by the DIO2 gene.
Function
The protein encoded by this gene belongs to the iodothyronine deiodinase family. It activates thyroid hormone by converting the prohormone thyroxine (T4), via outer ring deiodination (ORD), to bioactive 3,3',5-triiodothyronine (T3). It is highly expressed in the thyroid and may contribute significantly to the relative increase in thyroidal T3 production in patients with Graves' disease and thyroid adenomas. This protein contains selenocysteine (Sec) residues encoded by the UGA codon, which normally signals translation termination. The 3' UTRs of Sec-containing genes have a common stem-loop structure, the Sec insertion sequence (SECIS), which is necessary for the recognition of UGA as a Sec codon rather than as a stop signal. Alternative splicing results in multiple transcript variants encoding different isoforms.
Interactions
DIO2 has been shown to interact with USP33.
See also
Deiodinase
References
Further reading
Selenoproteins
|
DIO2
|
[
"Chemistry"
] | 280
|
[
"Biochemistry stubs",
"Protein stubs"
] |
15,826,481
|
https://en.wikipedia.org/wiki/Unsaid
|
The term "unsaid" refers to what is not explicitly stated, what is hidden and/or implied in the speech of an individual or a group of people.
The unsaid may be the product of intimidation; of a mulling over of thought; or of bafflement in the face of the inexpressible.
Linguistics
Sociolinguistics points out that in normal communication what is left unsaid is as important as what is actually said—that we expect our auditors regularly to fill in the social context/norms of our conversations as we proceed.
Basil Bernstein saw one difference between the restricted code and the elaborated code of speech as being that more is left implicit in the former than in the latter.
Ethnology
In ethnology, ethnomethodology established a strong link between the unsaid and the axiomatic.
Harold Garfinkel, following Durkheim, stressed that in any given situation, even a legally binding contract, the terms of agreement rest upon the 90% of unspoken assumptions that underlie the visible (spoken) tip of the interactive iceberg.
Edward T. Hall argued that much cross-cultural miscommunication stemmed from neglect of the silent, unspoken, but differing cultural patterns that each participant unconsciously took for granted.
Psychoanalysis
Luce Irigaray has emphasised the importance of listening to the unsaid dimension of discourse in psychoanalytic practice—something which may shed light on the unconscious phantasies of the person being analysed.
Other psychotherapies have also emphasised the importance of the non-verbal component of the patient's communication, sometimes privileging this over the verbal content. Behind all such thinking stands Freud's dictum: "no mortal can keep a secret. If his lips are silent, he chatters with his fingertips...at every pore".
Cultural examples
Sherlock Holmes is said to have owed his success to his attention to the unsaid in his client's communications.
In Small World, the heroine cheekily excuses her lack of note-taking to a Sorbonne professor by saying: "it is not what you say that impresses me most, it is what you are silent about: ideas, morality, love, death, things...Vos silences profonds".
See also
References
Further reading
External links
Human communication
Nonverbal communication
Sociolinguistics
Ethnology
Psychotherapy
|
Unsaid
|
[
"Biology"
] | 493
|
[
"Human communication",
"Behavior",
"Human behavior"
] |
15,826,631
|
https://en.wikipedia.org/wiki/Black%20Warrior%20Basin
|
The Black Warrior Basin is a geologic sedimentary basin of western Alabama and northern Mississippi in the United States. It is named for the Black Warrior River and is developed for coal and coalbed methane production, as well as for conventional oil and natural gas production. Coalbed methane in the Black Warrior Basin has been developed and produced longer than at any other location in the United States. The coalbed methane is produced from the Pennsylvanian Pottsville Coal Interval.
The Black Warrior Basin was a foreland basin formed during the Ouachita orogeny in the Pennsylvanian and Permian periods. The basin also received sediments from the Appalachian orogeny during the Pennsylvanian. The western margin of the basin lies beneath the sediments of the Mississippi embayment, where it is contiguous with the Arkoma Basin of northern Arkansas and northeastern Oklahoma. The region existed as a quiescent continental shelf environment through the early Paleozoic, from the Cambrian through the Mississippian, with the deposition of shelf sandstones, shale, limestone, dolomite and chert.
References
Further reading
Hatch J.R. and M.J. Pawlewicz. (2007). Geologic assessment of undiscovered oil and gas resources of the Black Warrior Basin Province, Alabama and Mississippi [Digital Data Series 069-I]. Reston, VA: U.S. Department of the Interior, U.S. Geological Survey.
External links
Geological Survey of Alabama; Alabama State Oil and Gas Board
Pashin, J.C. (2005). Pottsville Stratigraphy and the Union Chapel Lagerstatte. (PDF) Pennsylvanian Footprints in the Black Warrior Basin of Alabama, Alabama Paleontological Society Monograph no.1. Buta, R. J., Rindsberg, A. K., and Kopaska-Merkel, D. C., eds.
Internet Map Application for the Black Warrior Basin Province, USGS Energy Resources Program, Map Service for the Black Warrior Basin Province, 2002 National Assessment of Oil and Gas
Sedimentary basins of North America
Coal mining regions in the United States
Coal mining in Appalachia
Geology of Alabama
Geology of Mississippi
Geologic provinces of the United States
Methane
Mining in Alabama
Mining in Mississippi
|
Black Warrior Basin
|
[
"Chemistry"
] | 445
|
[
"Greenhouse gases",
"Methane"
] |
15,826,762
|
https://en.wikipedia.org/wiki/Cahaba%20Basin
|
The Cahaba Basin is a geologic area of central Alabama developed for coal and coalbed methane (CBM) production. Centered in eastern Bibb and southwestern Shelby Counties, the basin is significantly smaller in area and production than the larger Black Warrior Basin in Tuscaloosa and western Jefferson Counties to the northwest. The coalbed methane is produced from the Gurnee Field of the Pottsville Coal Interval. Coalbed gas production has been continuous since at least 1990 and annual gas production has increased from 344,875 Mcf in 1990 to 3,154,554 Mcf through October 2007.
Geology
The Cahaba Basin is located across an anticline from the neighboring Black Warrior Basin. Within the Cahaba Basin, the Pennsylvanian age coal beds have an average bed thickness of . The developed formations are known as the Gurnee Field of the Pottsville Formation.
Development
The coal resources of the Cahaba Basin have been developed for over a century and contributed to the Birmingham area's rise as an iron and steel production center. Numerous small coal mines continue to operate in the basin. Several CBM developers operate within the Cahaba Basin with GeoMet, Inc. and CDX Gas being two of the largest. The field has been developed for CBM since the 1980s. GeoMet, Inc. and CDX both operate pipelines which join the SONAT Bessemer Calera Pipeline and Enbridge Pipeline respectively. GeoMet, Inc. operates a discharge water pipeline to the Black Warrior River.
References
External links
Geological Survey of Alabama; Alabama State Oil and Gas Board
Coalbed Methane Association of Alabama; non-profit trade association
CDX Gas – a significant Cahaba Basin CBM developer
GeoMet, Inc. - a significant Cahaba Basin CBM developer
Geography of Bibb County, Alabama
Geography of Shelby County, Alabama
Methane
Coal mining regions in the United States
Mining in Alabama
|
Cahaba Basin
|
[
"Chemistry"
] | 388
|
[
"Greenhouse gases",
"Methane"
] |
15,828,681
|
https://en.wikipedia.org/wiki/Dimroth%20rearrangement
|
The Dimroth rearrangement is a rearrangement reaction taking place with certain 1,2,3-triazoles where endocyclic and exocyclic nitrogen atoms switch place. This organic reaction was discovered in 1909 by Otto Dimroth.
With R a phenyl group, the reaction takes place in boiling pyridine over 24 hours.
This type of triazole has an amino group in the 5 position. After ring-opening to a diazo intermediate, C-C bond rotation is possible with 1,3-migration of a proton.
Certain 1-alkyl-2-iminopyrimidines also display this type of rearrangement.
The first step is an addition reaction of water, followed by ring-opening of the hemiaminal to the amino aldehyde, followed by ring closure.
A known example of the Dimroth rearrangement in drug synthesis occurs in the synthesis of bemitradine [88133-11-3].
References
Rearrangement reactions
Name reactions
|
Dimroth rearrangement
|
[
"Chemistry"
] | 214
|
[
"Name reactions",
"Rearrangement reactions",
"Organic reactions"
] |
15,828,771
|
https://en.wikipedia.org/wiki/Stable%20theory
|
In the mathematical field of model theory, a theory is called stable if it satisfies certain combinatorial restrictions on its complexity. Stable theories are rooted in the proof of Morley's categoricity theorem and were extensively studied as part of Saharon Shelah's classification theory, which showed a dichotomy that either the models of a theory admit a nice classification or the models are too numerous to have any hope of a reasonable classification. A first step of this program was showing that if a theory is not stable then its models are too numerous to classify.
Stable theories were the predominant subject of pure model theory from the 1970s through the 1990s, so their study shaped modern model theory and there is a rich framework and set of tools to analyze them. A major direction in model theory is "neostability theory," which tries to generalize the concepts of stability theory to broader contexts, such as simple and NIP theories.
Motivation and history
A common goal in model theory is to study a first-order theory by analyzing the complexity of the Boolean algebras of (parameter) definable sets in its models. One can equivalently analyze the complexity of the Stone duals of these Boolean algebras, which are type spaces. Stability restricts the complexity of these type spaces by restricting their cardinalities. Since types represent the possible behaviors of elements in a theory's models, restricting the number of types restricts the complexity of these models.
Stability theory has its roots in Michael Morley's 1965 proof of Łoś's conjecture on categorical theories. In this proof, the key notion was that of a totally transcendental theory, defined by restricting the topological complexity of the type spaces. However, Morley showed that (for countable theories) this topological restriction is equivalent to a cardinality restriction, a strong form of stability now called ω-stability, and he made significant use of this equivalence. In the course of generalizing Morley's categoricity theorem to uncountable theories, Frederick Rowbottom generalized ω-stability by introducing κ-stable theories for a cardinal κ, and finally Shelah introduced stable theories.
Stability theory was much further developed in the course of Shelah's classification theory program. The main goal of this program was to show a dichotomy that either the models of a first-order theory can be nicely classified up to isomorphism using a tree of cardinal-invariants (generalizing, for example, the classification of vector spaces over a fixed field by their dimension), or are so complicated that no reasonable classification is possible. Among the concrete results from this classification theory were theorems on the possible spectrum functions of a theory, counting the number of models of cardinality κ as a function of κ. Shelah's approach was to identify a series of "dividing lines" for theories. A dividing line is a property of a theory such that both it and its negation have strong structural consequences; one should imply the models of the theory are chaotic, while the other should yield a positive structure theory. Stability was the first such dividing line in the classification theory program, and since its failure was shown to rule out any reasonable classification, all further work could assume the theory to be stable. Thus much of classification theory was concerned with analyzing stable theories and various subsets of stable theories given by further dividing lines, such as superstable theories.
One of the key features of stable theories developed by Shelah is that they admit a general notion of independence called non-forking independence, generalizing linear independence from vector spaces and algebraic independence from field theory. Although non-forking independence makes sense in arbitrary theories, and remains a key tool beyond stable theories, it has particularly good geometric and combinatorial properties in stable theories. As with linear independence, this allows the definition of independent sets and of local dimensions as the cardinalities of maximal instances of these independent sets, which are well-defined under additional hypotheses. These local dimensions then give rise to the cardinal-invariants classifying models up to isomorphism.
Definition and alternate characterizations
Let T be a complete first-order theory.
For a given infinite cardinal κ, T is κ-stable if for every set A of cardinality κ in a model of T, the set S(A) of complete types over A also has cardinality κ. This is the smallest the cardinality of S(A) can be, while it can be as large as 2^κ. For the case κ = ℵ₀, it is common to say T is ω-stable rather than ℵ₀-stable.
T is stable if it is κ-stable for some infinite cardinal κ.
Restrictions on the cardinals κ for which a theory can simultaneously be κ-stable are described by the stability spectrum, which singles out the even tamer subset of superstable theories.
A common alternate definition of stable theories is that they do not have the order property. A theory has the order property if there is a formula φ(x, y) and two infinite sequences of tuples (a_i), (b_j) in some model M such that φ defines an infinite half graph on these sequences, i.e. φ(a_i, b_j) is true in M if and only if i ≤ j. This is equivalent to there being a formula ψ(x, y) and an infinite sequence of tuples (a_i) in some model M such that ψ defines an infinite linear order on A = {a_i}, i.e. ψ(a_i, a_j) is true in M if and only if i < j.
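For reference, the two equivalent conditions can be written out in display form; the notation below (φ, ψ and the index sequences) is chosen here for illustration rather than taken from a particular source.

```latex
% T has the order property if there are a formula \varphi(\bar{x},\bar{y}) and tuples
% (\bar{a}_i)_{i<\omega}, (\bar{b}_j)_{j<\omega} in some model M of T such that
\[
  M \models \varphi(\bar{a}_i, \bar{b}_j) \quad\Longleftrightarrow\quad i \le j .
\]
% Equivalently, some formula \psi(\bar{x},\bar{y}) defines an infinite linear order
% on a sequence (\bar{a}_i)_{i<\omega}:
\[
  M \models \psi(\bar{a}_i, \bar{a}_j) \quad\Longleftrightarrow\quad i < j .
\]
```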
There are numerous further characterizations of stability. As with Morley's totally transcendental theories, the cardinality restrictions of stability are equivalent to bounding the topological complexity of type spaces in terms of Cantor-Bendixson rank. Another characterization is via the properties that non-forking independence has in stable theories, such as being symmetric. This characterizes stability in the sense that any theory with an abstract independence relation satisfying certain of these properties must be stable and the independence relation must be non-forking independence.
Any of these definitions, except via an abstract independence relation, can instead be used to define what it means for a single formula to be stable in a given theory T. Then T can be defined to be stable if every formula is stable in T. Localizing results to stable formulas allows these results to be applied to stable formulas in unstable theories, and this localization to single formulas is often useful even in the case of stable theories.
Examples and non-examples
For an unstable theory, consider the theory DLO of dense linear orders without endpoints. Then the atomic order relation has the order property. Alternatively, unrealized 1-types over a set A correspond to cuts (generalized Dedekind cuts, without the requirements that the two sets be non-empty and that the lower set have no greatest element) in the ordering of A, and for every infinite cardinal κ there exist dense orders of cardinality κ with more than κ-many cuts (for example, the countable order ℚ has 2^ℵ₀-many cuts), so DLO is not κ-stable for any κ.
Another unstable theory is the theory of the Rado graph, where the atomic edge relation has the order property.
For a stable theory, consider the theory ACF_p of algebraically closed fields of characteristic p, allowing p = 0. Then if K is a model of ACF_p, counting types over a set A ⊆ K is equivalent to counting types over the field k generated by A in K. There is a (continuous) bijection from the space of n-types over k to the space of prime ideals in the polynomial ring k[x₁, …, xₙ]. Since such ideals are finitely generated, there are only |k| + ℵ₀ many, so ACF_p is κ-stable for all infinite κ.
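The cardinality count behind this argument can be spelled out as follows (a sketch with standard notation; S_n(k) denotes the space of n-types over k):

```latex
% Each prime ideal of k[x_1,...,x_n] is finitely generated (Hilbert basis theorem),
% so there are no more prime ideals than finite subsets of the polynomial ring:
\[
  |S_n(k)| \;=\; |\operatorname{Spec} k[x_1,\dots,x_n]|
           \;\le\; |k[x_1,\dots,x_n]|
           \;=\; |k| + \aleph_0 .
\]
% Hence for any infinite cardinal \kappa and any parameter set A with |A| \le \kappa,
% there are at most \kappa many complete types over A, i.e. ACF_p is \kappa-stable.
```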
Some further examples of stable theories are listed below.
The theory of any module over a ring (in particular, any theory of vector spaces or abelian groups).
The theory of non-abelian free groups.
The theory of differentially closed fields of characteristic p. When p = 0, the theory is ω-stable.
The theory of any nowhere dense graph class. These include graph classes with bounded expansion, which in turn include planar graphs and any graph class of bounded degree.
Geometric stability theory
Geometric stability theory is concerned with the fine analysis of local geometries in models and how their properties influence global structure. This line of results was later key in various applications of stability theory, for example to Diophantine geometry. It is usually taken to start in the late 1970s with Boris Zilber's analysis of totally categorical theories, eventually showing that they are not finitely axiomatizable. Every model of a totally categorical theory is controlled by (i.e. is prime and minimal over) a strongly minimal set, which carries a matroid structure determined by (model-theoretic) algebraic closure that gives notions of independence and dimension. In this setting, geometric stability theory then asks the local question of what the possibilities are for the structure of the strongly minimal set, and the local-to-global question of how the strongly minimal set controls the whole model.
The second question is answered by Zilber's Ladder Theorem, showing every model of a totally categorical theory is built up by a finite sequence of something like "definable fiber bundles" over the strongly minimal set. For the first question, Zilber's Trichotomy Conjecture was that the geometry of a strongly minimal set must be either like that of a set with no structure, or the set must essentially carry the structure of a vector space, or the structure of an algebraically closed field, with the first two cases called locally modular. This conjecture illustrates two central themes. First, that (local) modularity serves to divide combinatorial or linear behavior from nonlinear, geometric complexity as in algebraic geometry. Second, that complicated combinatorial geometry necessarily comes from algebraic objects; this is akin to the classical problem of finding a coordinate ring for an abstract projective plane defined by incidences, and further examples are the group configuration theorems showing certain combinatorial dependencies among elements must arise from multiplication in a definable group. By developing analogues of parts of algebraic geometry in strongly minimal sets, such as intersection theory, Zilber proved a weak form of the Trichotomy Conjecture for uncountably categorical theories. Although Ehud Hrushovski developed the Hrushovski construction to disprove the full conjecture, it was later proved with additional hypotheses in the setting of "Zariski geometries".
Notions from Shelah's classification program, such as regular types, forking, and orthogonality, allowed these ideas to be carried to greater generality, especially in superstable theories. Here, sets defined by regular types play the role of strongly minimal sets, with their local geometry determined by forking dependence rather than algebraic dependence. In place of the single strongly minimal set controlling models of a totally categorical theory, there may be many such local geometries defined by regular types, and orthogonality describes when these types have no interaction.
Applications
While stable theories are fundamental in model theory, this section lists applications of stable theories to other areas of mathematics. This list does not aim for completeness, but rather a sense of breadth.
Since the theory of differentially closed fields of characteristic 0 is ω-stable, there are many applications of stability theory in differential algebra. For example, the existence and uniqueness of the differential closure of such a field (an analogue of the algebraic closure) were proved by Lenore Blum and Shelah respectively, using general results on prime models in ω-stable theories.
In Diophantine geometry, Ehud Hrushovski used geometric stability theory to prove the Mordell-Lang conjecture for function fields in all characteristics, which generalizes Faltings's theorem about counting rational points on curves and the Manin-Mumford conjecture about counting torsion points on curves. The key point in the proof was using Zilber's Trichotomy in differential fields to show certain arithmetically defined groups are locally modular.
In online machine learning, the Littlestone dimension of a concept class is a complexity measure characterizing learnability, analogous to the VC-dimension in PAC learning. Bounding the Littlestone dimension of a concept class is equivalent to a combinatorial characterization of stability involving binary trees. This equivalence has been used, for example, to prove that online learnability of a concept class is equivalent to differentially private PAC learnability.
In functional analysis, Jean-Louis Krivine and Bernard Maurey defined a notion of stability for Banach spaces, equivalent to stating that no quantifier-free formula has the order property (in continuous logic, rather than first-order logic). They then showed that every stable Banach space admits an almost-isometric embedding of ℓ^p for some 1 ≤ p < ∞. This is part of a broader interplay between functional analysis and stability in continuous logic; for example, early results of Alexander Grothendieck in functional analysis can be interpreted as equivalent to fundamental results of stability theory.
A countable (possibly finite) structure is ultrahomogeneous if every finite partial automorphism extends to an automorphism of the full structure. Gregory Cherlin and Alistair Lachlan provided a general classification theory for stable ultrahomogeneous structures, including all finite ones. In particular, their results show that for any fixed finite relational language, the finite homogeneous structures fall into finitely many infinite families with members parametrized by numerical invariants and finitely many sporadic examples. Furthermore, every sporadic example becomes part of an infinite family in some richer language, and new sporadic examples always appear in suitably richer languages.
In arithmetic combinatorics, Hrushovski proved results on the structure of approximate subgroups, for example implying a strengthened version of Gromov's theorem on groups of polynomial growth. Although this did not directly use stable theories, the key insight was that fundamental results from stable group theory could be generalized and applied in this setting. This directly led to the Breuillard-Green-Tao theorem classifying approximate subgroups.
Generalizations
For about twenty years after its introduction, stability was the main subject of pure model theory. A central direction of modern pure model theory, sometimes called "neostability" or "classification theory," consists of generalizing the concepts and techniques developed for stable theories to broader classes of theories, and this has fed into many of the more recent applications of model theory.
Two notable examples of such broader classes are simple and NIP theories. These are orthogonal generalizations of stable theories, since a theory is both simple and NIP if and only if it is stable. Roughly, NIP theories keep the good combinatorial behavior from stable theories, while simple theories keep the good geometric behavior of non-forking independence. In particular, simple theories can be characterized by non-forking independence being symmetric, while NIP can be characterized by bounding the number of types realized over either finite or infinite sets.
Another direction of generalization is to recapitulate classification theory beyond the setting of complete first-order theories, such as in abstract elementary classes.
See also
Stability spectrum
Spectrum of a theory
Morley's categoricity theorem
NIP theories
Notes
References
External links
A map of the model-theoretic classification of theories, highlighting stability
Two book reviews discussing stability and classification theory for non-model theorists: Fundamentals of Stability Theory and Classification Theory
An overview of (geometric) stability theory for non-model theorists
Model theory
|
Stable theory
|
[
"Mathematics"
] | 3,075
|
[
"Mathematical logic",
"Model theory"
] |
15,831,300
|
https://en.wikipedia.org/wiki/Tellegen%27s%20theorem
|
Tellegen's theorem is one of the most powerful theorems in network theory. Most of the energy distribution theorems and extremum principles in network theory can be derived from it. It was published in 1952 by Bernard Tellegen. Fundamentally, Tellegen's theorem gives a simple relation between magnitudes that satisfy Kirchhoff's laws of electrical circuit theory.
The Tellegen theorem is applicable to a multitude of network systems. The basic assumptions for the systems are the conservation of flow of extensive quantities (Kirchhoff's current law, KCL) and the uniqueness of the potentials at the network nodes (Kirchhoff's voltage law, KVL). The Tellegen theorem provides a useful tool to analyze complex network systems including electrical circuits, biological and metabolic networks, pipeline transport networks, and chemical process networks.
The theorem
Consider an arbitrary lumped network that has b branches and n nodes. In an electrical network, the branches are two-terminal components and the nodes are points of interconnection. Suppose that to each branch we assign arbitrarily a branch potential difference W_k and a branch current F_k for k = 1, 2, …, b, and suppose that they are measured with respect to arbitrarily picked associated reference directions. If the branch potential differences W_1, W_2, …, W_b satisfy all the constraints imposed by KVL and if the branch currents F_1, F_2, …, F_b satisfy all the constraints imposed by KCL, then

∑_{k=1}^{b} W_k F_k = 0.
Tellegen's theorem is extremely general; it is valid for any lumped network that contains any elements, linear or nonlinear, passive or active, time-varying or time-invariant. The generality is extended when Λ₁ and Λ₂ are linear operations on the set of potential differences and on the set of branch currents (respectively), since linear operations don't affect KVL and KCL. For instance, the linear operation may be the average or the Laplace transform. More generally, operators that preserve KVL are called Kirchhoff voltage operators, operators that preserve KCL are called Kirchhoff current operators, and operators that preserve both are simply called Kirchhoff operators. These operators need not necessarily be linear for Tellegen's theorem to hold.
The set of currents can also be sampled at a different time from the set of potential differences since KVL and KCL are true at all instants of time. Another extension is when the set of potential differences is from one network and the set of currents is from an entirely different network, so long as the two networks have the same topology (same incidence matrix) Tellegen's theorem remains true. This extension of Tellegen's Theorem leads to many theorems relating to two-port networks.
Definitions
We need to introduce a few necessary network definitions to provide a compact proof.
Incidence matrix:
The n×b node-to-branch incidence matrix A_a has matrix elements a_{ij} defined by: a_{ij} = 1 if the flow in branch j leaves node i, a_{ij} = −1 if the flow in branch j enters node i, and a_{ij} = 0 if branch j is not incident with node i.
A reference or datum node is introduced to represent the environment and connected to all dynamic nodes and terminals. The matrix A, obtained from A_a by eliminating the row that contains the elements of the reference node, is called the reduced incidence matrix.
The conservation laws (KCL) in vector-matrix form: A F = 0, where F = (F_1, …, F_b)ᵀ is the vector of branch flows (currents).
The uniqueness condition for the potentials (KVL) in vector-matrix form: W = Aᵀ w,
where w is the vector of absolute potentials at the nodes, taken relative to the reference node.
Proof
Using KVL, W = Aᵀ w, so
Wᵀ F = (Aᵀ w)ᵀ F = wᵀ (A F) = 0,
because A F = 0 by KCL. So:
∑_{k=1}^{b} W_k F_k = Wᵀ F = 0.
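This identity can be checked numerically for arbitrary values satisfying KVL and KCL. The sketch below (the incidence matrix and all names are illustrative choices, not taken from the article) builds branch potential differences from arbitrary node potentials and branch currents from the null space of the reduced incidence matrix; because only the topology enters, the same check works when W comes from one network and F from a different network sharing the same incidence matrix.

```python
import numpy as np

# Reduced incidence matrix A (reference node's row removed) for a small example
# network with 3 non-reference nodes and 5 branches; entries are +1/-1/0 as in
# the definition above. The particular matrix is only an illustrative choice.
A = np.array([
    [ 1, -1,  0,  1,  0],
    [ 0,  1, -1,  0,  1],
    [ 0,  0,  1, -1, -1],
])

rng = np.random.default_rng(0)

# KVL: branch potential differences derived from arbitrary node potentials w.
w = rng.normal(size=A.shape[0])      # absolute node potentials
W = A.T @ w                          # branch potential differences, W = A^T w

# KCL: branch currents lying in the null space of A (A @ F = 0).
null_basis = np.linalg.svd(A)[2][np.linalg.matrix_rank(A):]  # rows spanning null(A)
F = null_basis.T @ rng.normal(size=null_basis.shape[0])      # any combination obeys KCL

# Tellegen's theorem: the sum of W_k * F_k over all branches vanishes.
print(np.allclose(W @ F, 0.0))       # expected output: True
```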
Applications
Network analogs have been constructed for a wide variety of physical systems, and have proven extremely useful in analyzing their dynamic behavior. The classical application area for network theory and Tellegen's theorem is electrical circuit theory. It is mainly used to design filters in signal processing applications.
A more recent application of Tellegen's theorem is in the area of chemical and biological processes. The assumptions for electrical circuits (Kirchhoff laws) are generalized for dynamic systems obeying the laws of irreversible thermodynamics. Topology and structure of reaction networks (reaction mechanisms, metabolic networks) can be analyzed using the Tellegen theorem.
Another application of Tellegen's theorem is to determine stability and optimality of complex process systems such as chemical plants or oil production systems. The Tellegen theorem can be formulated for process systems using process nodes, terminals, flow connections and allowing sinks and sources for production or destruction of extensive quantities.
A formulation of Tellegen's theorem for process systems relates the production terms, the terminal connections, and the dynamic storage terms for the extensive variables.
References
In-line references
General references
Basic Circuit Theory by C.A. Desoer and E.S. Kuh, McGraw-Hill, New York, 1969
"Tellegen's Theorem and Thermodynamic Inequalities", G.F. Oster and C.A. Desoer, J. Theor. Biol 32 (1971), 219–241
"Network Methods in Models of Production", Donald Watson, Networks, 10 (1980), 1–15
External links
Circuit example for Tellegen's theorem
G.F. Oster and C.A. Desoer, Tellegen's Theorem and Thermodynamic Inequalities
Network thermodynamics
Circuit theorems
Eponymous theorems of physics
|
Tellegen's theorem
|
[
"Physics"
] | 1,063
|
[
"Circuit theorems",
"Eponymous theorems of physics",
"Equations of physics",
"Physics theorems"
] |
15,832,717
|
https://en.wikipedia.org/wiki/Computational%20statistics
|
Computational statistics, or statistical computing, is the study of the intersection of statistics and computer science, and refers to the statistical methods that are enabled by using computational methods. It is the area of computational science (or scientific computing) specific to the mathematical science of statistics. This area is developing rapidly. The view that the broader concept of computing must be taught as part of general statistical education is gaining momentum.
As in traditional statistics, the goal is to transform raw data into knowledge, but the focus lies on computer-intensive statistical methods, such as cases with very large sample sizes and non-homogeneous data sets.
The terms 'computational statistics' and 'statistical computing' are often used interchangeably, although Carlo Lauro (a former president of the International Association for Statistical Computing) proposed making a distinction, defining 'statistical computing' as "the application of computer science to statistics", and 'computational statistics' as "aiming at the design of algorithm for implementing statistical methods on computers, including the ones unthinkable before the computer age (e.g. bootstrap, simulation), as well as to cope with analytically intractable problems" [sic].
The term 'Computational statistics' may also be used to refer to computationally intensive statistical methods including resampling methods, Markov chain Monte Carlo methods, local regression, kernel density estimation, artificial neural networks and generalized additive models.
History
Though computational statistics is widely used today, it actually has a relatively short history of acceptance in the statistics community. For the most part, the founders of the field of statistics relied on mathematics and asymptotic approximations in the development of computational statistical methodology.
In 1908, William Sealy Gosset performed his now well-known Monte Carlo simulation, which led to the discovery of the Student's t-distribution. With the help of computational methods, he also produced plots of the empirical distributions overlaid on the corresponding theoretical distributions. The computer has revolutionized simulation and has made the replication of Gosset's experiment little more than an exercise.
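Such a replication is indeed a short exercise today. The sketch below (the sample size, replication count, and the Kolmogorov-Smirnov comparison are choices made here for illustration) draws many small normal samples, computes the t-statistic for each, and compares the empirical distribution with the theoretical Student's t-distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1908)
n, reps = 4, 100_000                      # small-sample size and number of replications

# Draw many small samples from a normal population and form the t-statistic for each.
samples = rng.normal(loc=0.0, scale=1.0, size=(reps, n))
t_stats = samples.mean(axis=1) / (samples.std(axis=1, ddof=1) / np.sqrt(n))

# Compare the empirical distribution of the t-statistic with Student's t (n-1 df);
# the Kolmogorov-Smirnov statistic should be small.
print(stats.kstest(t_stats, stats.t(df=n - 1).cdf))
```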
Later, scientists put forward computational ways of generating pseudo-random deviates, developed methods to convert uniform deviates into other distributional forms using the inverse cumulative distribution function or acceptance-rejection methods, and developed state-space methodology for Markov chain Monte Carlo. One of the first efforts to generate random digits in a fully automated way was undertaken by the RAND Corporation in 1947. The tables produced were published as a book in 1955, and also as a series of punch cards.
By the mid-1950s, several articles and patents for devices had been proposed for random number generators. The development of these devices was motivated by the need to use random digits to perform simulations and other fundamental components of statistical analysis. One of the most well known of such devices is ERNIE, which produces random numbers that determine the winners of the Premium Bond, a lottery bond issued in the United Kingdom. In 1958, John Tukey's jackknife was developed as a method to reduce the bias of parameter estimates in samples under nonstandard conditions; it requires computers for practical implementation. By this point, computers had made many tedious statistical studies feasible.
Methods
Maximum likelihood estimation
Maximum likelihood estimation is used to estimate the parameters of an assumed probability distribution, given some observed data. It is achieved by maximizing a likelihood function so that the observed data is most probable under the assumed statistical model.
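As a concrete illustration, the following sketch fits a normal distribution's mean and standard deviation to synthetic data by numerically minimizing the negative log-likelihood (the model, the data, and the starting values are assumptions made for this example).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=500)    # synthetic observations

def neg_log_likelihood(params):
    mu, log_sigma = params                         # optimize log(sigma) to keep sigma > 0
    return -np.sum(norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)))

result = minimize(neg_log_likelihood, x0=[0.0, 0.0])
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(mu_hat, sigma_hat)   # close to the true values 2.0 and 1.5
```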
Monte Carlo method
Monte Carlo is a statistical method that relies on repeated random sampling to obtain numerical results. The concept is to use randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
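A classic toy example is estimating π by uniform random sampling; the sketch below (sample size chosen arbitrarily) counts the fraction of random points in the unit square that fall inside the quarter circle.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Sample points uniformly in the unit square and count how many fall inside the
# quarter circle of radius 1; the fraction approximates pi/4.
x, y = rng.random(n), rng.random(n)
pi_estimate = 4.0 * np.mean(x**2 + y**2 <= 1.0)
print(pi_estimate)   # roughly 3.14, with error shrinking like 1/sqrt(n)
```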
Markov chain Monte Carlo
The Markov chain Monte Carlo method creates samples from a continuous random variable, with probability density proportional to a known function. These samples can be used to evaluate an integral over that variable, such as its expected value or variance. The more steps are included, the more closely the distribution of the sample matches the actual desired distribution.
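A minimal random-walk Metropolis sampler illustrates the idea; the target density (a standard normal known only up to a constant), the step size, and the burn-in length are all illustrative choices made here.

```python
import numpy as np

def metropolis(log_target, x0, steps, step_size, rng):
    """Random-walk Metropolis: sample from a density proportional to exp(log_target)."""
    x, samples = x0, []
    log_p = log_target(x)
    for _ in range(steps):
        proposal = x + step_size * rng.normal()
        log_p_new = log_target(proposal)
        if np.log(rng.random()) < log_p_new - log_p:   # accept with prob min(1, ratio)
            x, log_p = proposal, log_p_new
        samples.append(x)
    return np.array(samples)

rng = np.random.default_rng(0)
# Unnormalized log-density of a standard normal as the illustrative target.
chain = metropolis(lambda x: -0.5 * x**2, x0=0.0, steps=50_000, step_size=1.0, rng=rng)
print(chain[1000:].mean(), chain[1000:].var())   # near 0 and 1 after burn-in
```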
Bootstrapping
The bootstrap is a resampling technique used to generate samples from an empirical probability distribution defined by an original sample of the population. It can be used to find a bootstrapped estimator of a population parameter. It can also be used to estimate the standard error of an estimator as well as to generate bootstrapped confidence intervals. The jackknife is a related technique.
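The sketch below bootstraps the standard error and a 95% percentile confidence interval for the sample mean (the data-generating distribution and the number of resamples are arbitrary choices for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=200)        # original sample
n_boot = 10_000

# Resample with replacement and recompute the statistic on each resample.
idx = rng.integers(0, data.size, size=(n_boot, data.size))
boot_means = data[idx].mean(axis=1)

std_error = boot_means.std(ddof=1)                          # bootstrap standard error
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])    # 95% percentile CI
print(std_error, (ci_low, ci_high))
```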
Applications
Computational biology
Computational linguistics
Computational physics
Computational mathematics
Computational materials science
Machine Learning
Computational statistics journals
Communications in Statistics - Simulation and Computation
Computational Statistics
Computational Statistics & Data Analysis
Journal of Computational and Graphical Statistics
Journal of Statistical Computation and Simulation
Journal of Statistical Software
The R Journal
The Stata Journal
Statistics and Computing
Wiley Interdisciplinary Reviews: Computational Statistics
Associations
International Association for Statistical Computing
See also
Algorithms for statistical classification
Data science
Statistical methods in artificial intelligence
Free statistical software
List of statistical algorithms
List of statistical packages
Machine learning
References
Further reading
Articles
Books
External links
Associations
International Association for Statistical Computing
Statistical Computing section of the American Statistical Association
Journals
Computational Statistics & Data Analysis
Journal of Computational & Graphical Statistics
Statistics and Computing
Numerical analysis
Computational fields of study
Mathematics of computing
|
Computational statistics
|
[
"Mathematics",
"Technology"
] | 1,073
|
[
"Computational fields of study",
"Computational mathematics",
"Mathematical relations",
"Computing and society",
"Numerical analysis",
"Computational statistics",
"Approximations"
] |
15,833,063
|
https://en.wikipedia.org/wiki/Scribd
|
Scribd Inc. operates three primary platforms: Scribd, Everand, and SlideShare. Scribd is a digital document library that hosts over 195 million documents. Everand is a digital content subscription service offering a wide selection of ebooks, audiobooks, magazines, podcasts, and sheet music. SlideShare is an online platform featuring over 15 million presentations from subject matter experts.
The company was founded in 2007 by Trip Adler, Jared Friedman, and Tikhon Bernstam, and headquartered in San Francisco, California. Tony Grimminck took over as CEO in 2024.
History
Founding (2007–2013)
Scribd began as a site to host and share documents. While at Harvard, Trip Adler was inspired to start Scribd after learning about the lengthy process required to publish academic papers. His father, a doctor at Stanford, was told it would take 18 months to have his medical research published. Adler wanted to create a simple way to publish and share written content online. He co-founded Scribd with Jared Friedman and attended the inaugural class of Y Combinator in the summer of 2006. There, Scribd received its initial $120,000 in seed funding and then launched in a San Francisco apartment in March 2007.
Scribd was called "the YouTube for documents", allowing anyone to self-publish on the site using its document reader. The document reader turns PDFs, Word documents, and PowerPoints into Web documents that can be shared on any website that allows embeds. In its first year, Scribd grew rapidly to 23.5 million visitors as of November 2008. It also ranked as one of the top 20 social media sites according to Comscore.
In June 2009, Scribd launched the Scribd Store, enabling writers to easily upload and sell digital copies of their work online. That same month, the site partnered with Simon & Schuster to sell e-books on Scribd. The deal made digital editions of 5,000 titles available for purchase on Scribd, including books from bestselling authors like Stephen King, Dan Brown, and Mary Higgins Clark.
In October 2009, Scribd launched its branded reader for media companies including The New York Times, Los Angeles Times, Chicago Tribune, The Huffington Post, TechCrunch, and MediaBistro. ProQuest began publishing dissertations and theses on Scribd in December 2009. In August 2010, many notable documents hosted on Scribd became viral phenomena, including the California Proposition 8 ruling, which received over 100,000 views in about 24 minutes, and HP's lawsuit against Mark Hurd's move to Oracle.
Subscription service (2013–2023)
In October 2013, Scribd officially launched its unlimited subscription service for e-books. This gave users unlimited access to Scribd's library of digital books for a flat monthly fee. The company also announced a partnership with HarperCollins which made the entire backlist of HarperCollins' catalog available on the subscription service.
According to Chantal Restivo-Alessi, chief digital officer at HarperCollins, this marked the first time that the publisher has released such a large portion of its catalog.
In March 2014, Scribd announced a deal with Lonely Planet, offering the travel publisher's entire library on its subscription service.
In May 2014, Scribd further increased its subscription offering with 10,000 titles from Simon & Schuster. These titles included works from authors such as Ray Bradbury, Doris Kearns Goodwin, Ernest Hemingway, Walter Isaacson, Stephen King, Chuck Klosterman, and David McCullough. Scribd has been criticized for advertising a free 14-day trial for which payment is required before readers can trial the products; readers discover this when they attempt to download material.
Scribd added audiobooks to its subscription service in November 2014 and comic books in February 2015.
In February 2016, it was announced that only titles from a rotating selection of the library would be available for unlimited reading, and subscribers would have credits to read three books and one audiobook per month from the entire library with unused credits rolling over to the next month.
The reporting system was discontinued on February 6, 2018, in favor of a system of "constantly rotating catalogs of ebooks and audiobooks" that provided "an unlimited number of books and audiobooks, alongside unlimited access to news, magazines, documents, and sheet music" for a monthly subscription fee of US$8.99. However, under this unlimited service, Scribd would occasionally "limit the titles that you’re able to access within a specific content library in a 30-day period."
In October 2018, Scribd announced a joint subscription to Scribd and The New York Times for $12.99 per month.
Audiobooks
In November 2014, Scribd added audiobooks to its subscription library. Wired noted that this was the first subscription service to offer unlimited access to audiobooks, and "it represents a much larger shift in the way digital content is consumed over the net." In April 2015, the company expanded its audiobook catalog in a deal with Penguin Random House. This added 9,000 audiobooks to its platform including titles from authors like Lena Dunham, John Grisham, Gillian Flynn, and George R.R. Martin.
Comics
In February 2015, Scribd introduced comics to its subscription service. The company added 10,000 comics and graphic novels from publishers including Marvel, Archie, Boom! Studios, Dynamite, IDW, and Valiant. These included series such as Guardians of the Galaxy, Daredevil, X-O Manowar, and The Avengers. However, in December 2016, comics were eliminated from the service due to low demand.
Unbundling (2023 - present)
In November 2023, Scribd unbundled from one single product into three distinct ones: Everand, Scribd, and SlideShare. Everand was launched as a new subscription-based service, focused solely on customers looking for entertainment in the form of books, magazines, podcasts, and more.
Timeline
In February 2010, Scribd unveiled its first mobile plans for e-readers and smartphones. In April 2010 Scribd launched a new feature called "Readcast", which allows automatic sharing of documents on Facebook and Twitter. Also in April 2010, Scribd announced its integration of Facebook social plug-ins at the Facebook f8 Developer Conference.
Scribd rolled out a redesign on September 13, 2010, to become, according to TechCrunch, "the social network for reading".
In October 2013, Scribd launched its e-book subscription service, allowing readers to pay a flat monthly fee in exchange for unlimited access to all of Scribd's book titles.
In August 2020, Scribd announced its acquisition of the LinkedIn-owned SlideShare for an undisclosed amount.
In November 2023, Scribd unbundled into three distinct products: Everand, Scribd, and Slideshare. Everand was launched as a new product, focusing solely on books, magazines, podcasts and more.
Financials
The company was initially funded with US$120,000 from Y Combinator in 2006, and received over US$3.7 million in June 2007 from Redpoint Ventures and The Kinsey Hills Group.
In December 2008, the company raised US$9 million in a second round of funding led by Charles River Ventures with re-investment from Redpoint Ventures and Kinsey Hills Group.
David O. Sacks, former PayPal COO and founder of Yammer and Geni, joined Scribd's board of directors in January 2010.
In January 2011, Scribd raised $13 million in a Series C round led by MLC Investments of Australia and SVB Capital.
In January 2015, the company raised US$22 million from Khosla Ventures with partner Keith Rabois joining the Scribd board of directors.
In 2019, Scribd raised $58 million in a financing round led by Spectrum Equity.
Technology
In July 2008, Scribd began using iPaper, a rich document format similar to PDF and built for the web, which allows users to embed documents into a web page. iPaper was built with Adobe Flash, allowing it to be viewed the same across different operating systems (Windows, Mac OS, and Linux) without conversion, as long as the reader has Flash installed (although Scribd has announced non-Flash support for the iPhone). All major document types can be formatted into iPaper including Word docs, PowerPoint presentations, PDFs, OpenDocument documents, OpenOffice.org XML documents, and PostScript files.
All iPaper documents are hosted on Scribd. Scribd allows published documents to either be private or open to the larger Scribd community. The iPaper document viewer is also embeddable in any website or blog, making it simple to embed documents in their original layout regardless of file format. Scribd iPaper required Flash cookies to be enabled, which is the default setting in Flash.
On May 5, 2010, Scribd announced that they would be converting the entire site to HTML5 at the Web 2.0 Conference in San Francisco. TechCrunch reported that Scribd is migrating away from Flash to HTML5. "Scribd co-founder and chief technology officer Jared Friedman tells me: 'We are scrapping three years of Flash development and betting the company on HTML5 because we believe HTML5 is a dramatically better reading experience than Flash. Now any document can become a Web page.'"
Scribd has its own API to integrate external/third-party applications, but is no longer offering new API accounts.
Since 2010, Scribd has been available on mobile phones and e-readers, in addition to personal computers. As of December 2013, Scribd became available on app stores and various mobile devices.
Reception
Accusations of defrauding and stealing from users
Scribd has been accused of "[having] built its business on stealing from former customers" after numerous complaints that it continued to charge, on a monthly basis, former subscribers who had cancelled their subscriptions long before the charges.
Accusations of copyright infringement
Scribd has been accused of copyright infringement. In 2007, one year after its inception, Scribd was served with 25 Digital Millennium Copyright Act (DMCA) takedown notices. In March 2009, The Guardian wrote, "Harry Potter author [J.K. Rowling] is among writers shocked to discover their books available as free downloads. Neil Blair, Rowling's lawyer, said the Harry Potter downloads were 'unauthorised and unlawful'...Rowling's novels aren't the only ones to be available from Scribd. A quick search throws up novels from Salman Rushdie, Ian McEwan, Jeffrey Archer, Ken Follett, Philippa Gregory, and J.R.R. Tolkien." In September 2009, American author Elaine Scott alleged that Scribd "shamelessly profits from the stolen copyrighted works of innumerable authors". Her attorneys sought class action status in their efforts to win damages from Scribd for allegedly "egregious copyright infringement" and accused it of calculated copyright infringement for profit. The suit was dropped in July 2010.
Controversies
In March 2009, the passwords of several Comcast customers were leaked on Scribd. The passwords were later removed when the news was published by The New York Times.
In July 2010, the script of The Social Network (2010) movie was uploaded and leaked on Scribd; it was promptly taken down per Sony's DMCA request.
Following a decision of the Istanbul 12th Criminal Court of Peace, dated March 8, 2013, access to Scribd is blocked for Internet users in Turkey.
In July 2014, Scribd was sued by Disability Rights Advocates (represented by Haben Girma), on behalf of the National Federation of the Blind and a blind Vermont resident, for allegedly failing to provide access to blind readers, in violation of the Americans with Disabilities Act. Scribd moved to dismiss, arguing that the ADA only applied to physical locations. In March 2015, the U.S. District Court of Vermont ruled that the ADA covered online businesses as well. A settlement agreement was reached, with Scribd agreeing to provide content accessible to blind readers by the end of 2017.
BookID
To counteract the uploading of unauthorized content, Scribd created BookID, an automated copyright protection system that helps authors and publishers identify unauthorized use of their works on Scribd. The technology analyzes documents for semantic data, metadata, images, and other elements, and creates an encoded "fingerprint" of the copyrighted work.
Supported file formats
Supported formats include:
Microsoft Excel (.xls, .xlsx)
Microsoft PowerPoint (.ppt, .pps, .pptx, .ppsx)
Microsoft Word (.doc, .docx)
OpenDocument (.odt, .odp, .ods, .odf, .odg)
OpenOffice.org XML (.sxw, .sxi, .sxc, .sxd)
Plain text (.txt)
Portable Document Format (.pdf)
PostScript (.ps)
Rich text format (.rtf)
Tagged image file format (.tif, .tiff)
See also
Slideshare
Everand
Amazon Lending Library and Kindle Unlimited
Document collaboration
Oyster (company)
Wayback Machine
WebCite
References
External links
2007 establishments in California
American companies established in 2007
Android (operating system) software
Companies based in San Francisco
Ebook suppliers
File sharing communities
Internet properties established in 2007
Online retailers of the United States
Privately held companies based in California
Retail companies established in 2007
Subscription services
Y Combinator companies
|
Scribd
|
[
"Technology"
] | 2,933
|
[
"File sharing communities",
"Computing websites"
] |
13,160,155
|
https://en.wikipedia.org/wiki/Energy%20Performance%20of%20Buildings%20Directive%202024
|
The Energy Performance of Buildings Directive (2024/1275, the "EPBD") is the European Union's main legislative instrument aiming to promote the improvement of the energy performance of buildings within the European Union. It was inspired by the Kyoto Protocol, which commits the EU and all its parties to binding emission reduction targets.
History
Directive 2002/91/EC
The first version of the EPBD, Directive 2002/91/EC, was approved on 16 December 2002 and entered into force on 4 January 2003. EU Member States (MS) had to comply with the Directive within three years of the inception date (by 4 January 2006), by bringing into force the necessary laws, regulations and administrative provisions. In the case of a lack of qualified and/or accredited experts, the Directive allowed for a further extension of the implementation period beyond 4 January 2006.
The Directive required that the MS strengthen their building regulations and introduce energy performance certification of buildings. More specifically, it required member states to comply with Article 7 (Energy Performance Certificates), Article 8 (Inspection of boilers) and Article 9 (Inspection of air conditioning systems).
Directive 2010/31/EU
Directive 2002/91/EC was later on replaced by the so-called "EPBD recast", which was approved on 19 May 2010 and entered into force on 18 June 2010.
This version of the EPBD (Directive 2010/31/EU) broadened its focus to Nearly Zero-Energy Buildings and cost-optimal levels of minimum energy performance requirements, as well as improved policies.
According to the recast:
for buildings offered for sale or rent, the energy performance certificates shall be stated in the advertisements
Member States shall lay down the necessary measures to establish inspection schemes for heating and air-conditioning systems or take measures with equivalent impact
all new buildings shall be nearly zero energy buildings by 31 December 2020; the same applies to all new public buildings after 31 December 2018.
Member States shall set minimum energy performance requirements for new buildings, for buildings subject to major renovation, as well as for the replacement or retrofit of building elements
Member States shall draw up lists of national financial measures and instruments to improve the energy efficiency of buildings.
Directive 2018/844/EU
On 30 November 2016, the European Commission published the "Clean Energy For All Europeans", a package of measures boosting the clean energy transition in line with its commitment to cut emissions by at least 40% by 2030, modernise the economy and create conditions for sustainable jobs and growth.
The proposal for a revised directive on the EPBD (COM/2016/0765) puts energy efficiency first and supports cost-effective building renovation. The proposal updated the EPBD through:
The incorporation of long-term building renovation strategies (Article 4 of the Energy Efficiency Directive), the support to mobilise finance and a clear vision for the decarbonisation of buildings by 2050
The encouragement of the use of information communication and smart technologies to ensure the efficient operation of buildings
Streamlined provisions in the case of delivery failure of the expected results
The introduction of building automation and control (BAC) systems as an alternative to physical inspections
The encouragement of the roll-out of the required infrastructure for e-mobility and the introduction of a "smartness indicator"
Strengthened links between public funding for building renovation and energy performance certificates, and
Incentives for tackling energy poverty through building renovation.
On 11 October 2017, the European Parliament's Committee on Industry, Research and Energy (ITRE) voted positively on a draft report led by Danish MEP Bendt Bendtsen. The Committee "approved rules to channel the focus towards energy-efficiency and cost-effectiveness of building renovations in the EU, updating the EPBD as part of the "Clean Energy for All Europeans" package".
Bendt Bendtsen, member of ITRE and rapporteur of the EPBD review dossier said: "It is vital that Member States show a clear commitment and take concrete actions in their long-term planning. This includes facilitating access to financial tools, showing investors that energy efficiency renovations are prioritised, and enabling public authorities to invest in well-performing buildings".
The proposal was finally approved by the Council and the European Parliament in May 2018.
2024 revisions
In 2021, the European Commission, under the leadership of Estonian Commissioner Kadri Simson proposed a new revision of the Directive, in the context of the "Fit for 55" legislative package. The proposal includes the following priorities:
Obligation for all member states to establish National building renovation plans
Establishment of minimum energy performance standards (MEPS), requiring the worst-performing (non-residential) buildings to reach at least class F by 2030 and class E by 2033.
Promotion of technical assistance, including one-stop-shops and renovation passports
Introduction of new financial mechanisms to incentivize banks and mortgage holders to promote energy efficient renovation (mortgage portfolio standard)
Following the start of the Russian invasion of Ukraine, the Commission issued additional proposals, such as the obligation to ensure new buildings are solar ready and to install solar energy installations on buildings.
The Commission's proposal is currently being discussed and negotiated in the Council and at the European Parliament. The chief negotiator for the file in the European Parliament is Green MEP Ciaran Cuffe.
In 2021, the European Commission proposed to review the directive, with a view to introducing more exigent energy efficiency minimum standards for new and existing buildings, improving the availability of energy performance certificates by means of public online databases, and introducing financial mechanisms to incentivize banks to provide loans for energy-efficient renovations. The informal agreement was endorsed by both Parliament and Council.
Contents
EPBD support initiatives
The European Commission has launched practical support initiatives with the objective to help and support EU countries with the implementation of the EPBD.
EPBD Concerted Action
The Concerted Action EPBD (CA EPBD), later supported under the European Union's Horizon 2020 research and innovation programme, was launched in 2005 to address the Energy Performance of Buildings Directive (EPBD), with the objective of promoting dialogue and the exchange of knowledge and best practices among all 28 Member States and Norway in order to reduce energy use in buildings.
The first CA EPBD was launched in 2005 and closed in June 2007, followed by a second phase and a third phase from 2011 to 2015. The current CA EPBD (CA EPBD IV), a joint initiative between the EU Member States and the European Commission, runs from October 2015 to March 2018 with the aim of supporting the transposition and implementation of the EPBD recast.
EPBD Buildings Platform
The EPBD Buildings Platform was launched by the European Commission in the framework of the Intelligent Energy – Europe, 2003–2006 Programme, as the central resource of information on the EPBD. The Platform comprises databases with publications, events, standards and software tools. Interested organisations or individuals could submit events and publications to the databases. A high number of information papers (fact sheets) were also produced, with the aim of informing a wide range of people about the status of work in a specific area. The platform also offered a helpdesk with lists of frequently asked questions and the possibility to ask individual questions.
This initiative was completed at the end of 2008, and a new one, 'BUILD UP' was launched in 2009.
BUILD UP
As a continuation of its support to the Member States in implementing the EPBD, the European Commission launched the BUILD UP initiative in 2009. The initiative has received funding under the framework of the Intelligent Energy Europe (IEE) Programme. The first BUILD UP (BUILD UP I) was launched in 2009 and closed in 2011; BUILD UP II followed in 2012 and ran until 2014. BUILD UP III ran from January 2015 until December 2017, and BUILD UP IV started in early 2018.
The BUILD UP web portal aims to increase awareness and foster the market transformation towards Nearly Zero-Energy Buildings, catalysing and releasing Europe's collective intelligence for an effective implementation of energy saving measures in buildings, by connecting building professionals, including competent authorities.
The portal includes databases of publications, news, events, software tools and blog posts. Since the start of BUILD UP II, the portal has introduced added-value content items, namely overview articles (which users can read or download on demand) and free webinars, providing an effective learning resource.
The platform also incorporates the "BUILD UP Skills" webpage, an initiative launched in 2011 under the IEE programme to assist with the training and further education of craftsmen, on-site workers and systems installers of the building sector. BUILD UP hosts all BUILD UP Skills related information (EU Exchange Meetings, Technical Working Groups (TWGs), National pages and country factsheets, news, events and previous newsletters) under its separate section "Skills".
Intelligent Energy Europe (IEE) Programme
The EU's Intelligent Energy Europe (IEE) Programme was launched in 2003; the first IEE Programme (IEE I) closed in 2006, and was followed by the second IEE Programme (IEE II) from 2007 to 2013. Most parts of the IEE programme were run by the Executive Agency for SMEs (EASME), formerly known as the Executive Agency for Competitiveness and Innovation (EACI), on behalf of the European Commission. The Programme "supported projects which sought to overcome non-technical barriers to the uptake, implementation and replication of innovative sustainable energy solutions". From 2007 to 2013, the IEE II Programme allocated €72m (16% of the entire IEE II funding) to 63 building-related projects (including CA EPBD II & III), revealing the strong support for enabling EPBD implementation. The range of topics was broad, covering the fields of deep renovation, Nearly Zero-Energy Buildings, Energy Performance Certificates, renewable energy and the exemplary role of public buildings. Since the Programme's completion, the EU's Horizon 2020 Framework Programme has been funding these types of activities.
See also
Energy performance certificate, which arose from the implementation of the Directive in the United Kingdom
EU law
UK enterprise law
References
External links
Concerted Action EPBD
BUILD UP portal
Building thermal regulations
Energy development
Energy economics
Energy policies and initiatives of the European Union
Energy performance of buildings
Low-energy building
2002 in law
2002 in the European Union
|
Energy Performance of Buildings Directive 2024
|
[
"Environmental_science"
] | 2,091
|
[
"Energy economics",
"Environmental social science"
] |
13,160,226
|
https://en.wikipedia.org/wiki/Breather%20surface
|
In differential geometry, a breather surface is a one-parameter family of mathematical surfaces which correspond to breather solutions of the sine-Gordon equation, a differential equation appearing in theoretical physics. The surfaces have the remarkable property that they have constant curvature −1 wherever the curvature is well-defined. This makes them examples of generalized pseudospheres.
Mathematical background
There is a correspondence between embedded surfaces of constant curvature -1, known as pseudospheres, and solutions to the sine-Gordon equation. This correspondence can be built starting with the simplest example of a pseudosphere, the tractroid. In a special set of coordinates, known as asymptotic coordinates, the Gauss–Codazzi equations, which are consistency equations dictating when a surface of prescribed first and second fundamental form can be embedded into three-dimensional space with the flat metric, reduce to the sine-Gordon equation.
In the correspondence, the tractroid corresponds to the static 1-soliton solution of the sine-Gordon equation. Due to the Lorentz invariance of sine-Gordon, a one-parameter family of Lorentz boosts can be applied to the static solution to obtain new solutions: on the pseudosphere side, these are known as Lie transformations, which deform the tractroid to the one-parameter family of surfaces known as Dini's surfaces.
The method of Bäcklund transformation allows the construction of a large number of distinct solutions to the sine-Gordon equation, the multi-soliton solutions. For example, the 2-soliton corresponds to the Kuen surface. However, while this generates an infinite family of solutions, the breather solutions are not among them.
Breather solutions are instead derived from the inverse scattering method for the sine-Gordon equation. They are localized in space but oscillate in time.
Each solution to the sine-Gordon equation gives a first and second fundamental form which satisfy the Gauss-Codazzi equations. The fundamental theorem of surface theory then guarantees that there is a parameterized surface which recovers the prescribed first and second fundamental forms. Locally the parameterization is well-behaved, but extended arbitrarily the resulting surface may have self-intersections and cusps. Indeed, a theorem of Hilbert says that any pseudosphere cannot be embedded regularly (roughly, meaning without cusps) into .
Parameterization
The parameterization with parameter is given by
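One commonly quoted form, written here as a sketch with the parameter denoted b (0 < b < 1) and with the shorthand w and Δ introduced for readability, is

\begin{aligned}
x(u,v) &= -u + \frac{2\left(1-b^{2}\right)\cosh(bu)\sinh(bu)}{b\,\Delta},\\
y(u,v) &= \frac{2w\cosh(bu)\left(-w\cos(v)\cos(wv)-\sin(v)\sin(wv)\right)}{b\,\Delta},\\
z(u,v) &= \frac{2w\cosh(bu)\left(-w\sin(v)\cos(wv)+\cos(v)\sin(wv)\right)}{b\,\Delta},
\end{aligned}

where w = \sqrt{1-b^{2}} and \Delta = \left(1-b^{2}\right)\cosh^{2}(bu) + b^{2}\sin^{2}(wv), and u, v are the surface coordinates.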
References
External links
Xah Lee Web - Surface Gallery
Breather surface in Virtual Math Museum
Surfaces
Mathematics articles needing expert attention
Differential equations
|
Breather surface
|
[
"Mathematics"
] | 523
|
[
"Mathematical objects",
"Differential equations",
"Equations"
] |
13,160,311
|
https://en.wikipedia.org/wiki/Airborne%20Real-time%20Cueing%20Hyperspectral%20Enhanced%20Reconnaissance
|
Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance, also known by the acronym ARCHER, is an aerial imaging system that produces ground images far more detailed than plain sight or ordinary aerial photography can.
It is the most sophisticated unclassified hyperspectral imaging system available, according to U.S. Government officials.
ARCHER can automatically scan detailed imaging for a given signature of the object being sought (such as a missing aircraft), for abnormalities in the surrounding area, or for changes from previous recorded spectral signatures.
It has direct applications for search and rescue, counterdrug, disaster relief and impact assessment, and homeland security, and has been deployed by the Civil Air Patrol (CAP) in the US on the Australian-built Gippsland GA8 Airvan fixed-wing aircraft. CAP, the civilian auxiliary of the United States Air Force, is a volunteer education and public-service non-profit organization that conducts aircraft search and rescue in the US.
Overview
ARCHER is a daytime non-invasive technology, which works by analyzing an object's reflected light. It cannot detect objects at night, underwater, under dense cover, underground, under snow or inside buildings. The system uses a special camera facing down through a quartz glass portal in the belly of the aircraft, which is typically flown at a standard mission altitude of 2,500 feet (about 760 m) above ground level and 100 knots (about 50 meters per second) ground speed.
The system software was developed by Space Computer Corporation of Los Angeles and the system hardware is supplied by NovaSol Corp. of Honolulu, Hawaii specifically for CAP. The ARCHER system is based on hyperspectral technology research and testing previously undertaken by the United States Naval Research Laboratory (NRL) and Air Force Research Laboratory (AFRL).
CAP developed ARCHER in cooperation with the NRL, AFRL and the United States Coast Guard Research & Development Center in the largest interagency project CAP has undertaken in its 74-year history.
Since 2003, almost US$5 million authorized under the 2002 Defense Appropriations Act has been spent on development and deployment. CAP reported completing the initial deployment of 16 aircraft throughout the U.S. and training over 100 operators, but had only used the system on a few search and rescue missions, and had not credited it with being the first to find any wreckage.
In searches in Georgia and Maryland during 2007, ARCHER located the aircraft wreckage, but both accidents had no survivors, according to Col. Drew Alexa, director of advanced technology, and the ARCHER program manager at CAP. An ARCHER equipped aircraft from the Utah Wing of the Civil Air Patrol was used in the search for adventurer Steve Fossett in September 2007. ARCHER did not locate Mr. Fossett, but was instrumental in uncovering eight previously uncharted crash sites in the high desert area of Nevada, some decades old.
Col. Alexa described the system to the press in 2007: "The human eye sees basically three bands of light. The ARCHER sensor sees 50. It can see things that are anomalous in the vegetation such as metal or something from an airplane wreckage." Major Cynthia Ryan of the Nevada Civil Air Patrol, while also describing the system to the press in 2007, stated, "ARCHER is essentially something used by the geosciences. It's pretty sophisticated stuff … beyond what the human eye can generally see." She elaborated further: "It might see boulders, it might see trees, it might see mountains, sagebrush, whatever, but it goes 'not that' or 'yes, that'. The amazing part of this is that it can see as little as 10 per cent of the target, and extrapolate from there."
In addition to the primary search and rescue mission, CAP has tested additional uses for ARCHER. For example, an ARCHER equipped CAP GA8 was used in a pilot project in Missouri in August 2005 to assess the suitability of the system for tracking hazardous material releases into the environment, and one was deployed to track oil spills in the aftermath of Hurricane Rita in Texas during September 2005.
Since then, the ARCHER system proved its usefulness in October 2006, when it found the wreckage of a flight originating in Missouri in Antlers, Oklahoma. The National Transportation Safety Board was extremely pleased with the data ARCHER provided, which was later used to locate aircraft debris spread over miles of rough, wooded terrain. In July 2007, the ARCHER system identified a flood-borne oil spill originating in a Kansas oil refinery that extended downstream and had invaded previously unsuspected reservoir areas. The client agencies (EPA, Coast Guard, and other federal and state agencies) found the data essential to quick remediation. In September 2008, a Civil Air Patrol GA-8 from the Texas Wing searched for a missing aircraft from Arkansas. It was found in Oklahoma, identified simultaneously by ground searchers and the overflying ARCHER system. Rather than a direct find, this was a validation of the system's accuracy and efficacy. In the subsequent recovery, it was found that ARCHER had plotted the debris area with great accuracy.
Technical description
The major ARCHER subsystem components include:
advanced hyperspectral imaging (HSI) system with a resolution of one square meter per pixel.
panchromatic high-resolution imaging (HRI) camera with a ground resolution of approximately 3 inches (7.6 cm) per pixel at the standard mission altitude.
global positioning system (GPS) integrated with an inertial navigation system (INS)
Hyperspectral imager
The passive hyperspectral imaging spectroscopy remote sensor observes a target in multi-spectral bands. The HSI camera separates the image spectra into 52 "bins" from 500 nanometers (nm) wavelength at the blue end of the visible spectrum to 1100 nm in the infrared, giving the camera a spectral resolution of 11.5 nm. Although ARCHER records data in all 52 bands, the computational algorithms only use the first 40 bands, from 500 nm to 960 nm because the bands above 960 nm are too noisy to be useful. For comparison, the normal human eye will respond to wavelengths from approximately 400 to 700 nm, and is trichromatic, meaning the eye's cone cells only sense light in three spectral bands.
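As a check of the band figures quoted above, the nominal bin width follows from dividing the 500–1100 nm range into 52 equal bins; a minimal Python sketch (the even spacing of the bins is an assumption made for illustration):

# ARCHER spectral binning: the 500-1100 nm range divided into 52 equal bins.
low_nm, high_nm, n_bins = 500.0, 1100.0, 52

band_width = (high_nm - low_nm) / n_bins    # about 11.5 nm, matching the stated spectral resolution
band_centres = [low_nm + (i + 0.5) * band_width for i in range(n_bins)]

print(f"band width: {band_width:.1f} nm")
print("upper edge of band 40:", round(low_nm + 40 * band_width, 1), "nm")   # about 960 nm, the last band used
print("first three band centres:", [round(c, 1) for c in band_centres[:3]])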
As the ARCHER aircraft flies over a search area, reflected sunlight is collected by the HSI camera lens. The collected light passes through a set of lenses that focus the light to form an image of the ground. The imaging system uses a pushbroom approach to image acquisition. With the pushbroom approach, the focusing slit reduces the image height to the equivalent of one vertical pixel, creating a horizontal line image.
The horizontal line image is then projected onto a diffraction grating, which is a very finely etched reflecting surface that disperses light into its spectra. The diffraction grating is specially constructed and positioned to create a two-dimensional (2D) spectrum image from the horizontal line image. The spectra are projected vertically, i.e., perpendicular to the line image, by the design and arrangement of the diffraction grating.
The 2D spectrum image projects onto a charge-coupled device (CCD) two-dimensional image sensor, which is aligned so that the horizontal pixels are parallel to the image's horizontal. As a result, the vertical pixels are coincident to the spectra produced from the diffraction grating. Each column of pixels receives the spectrum of one horizontal pixel from the original image. The arrangement of vertical pixel sensors in the CCD divides the spectrum into distinct and non-overlapping intervals. The CCD output consists of electrical signals for 52 spectral bands for each of 504 horizontal image pixels.
The on-board computer records the CCD output signal at a frame rate of sixty times each second. At an aircraft altitude of 2,500 ft AGL and a speed of 100 knots, a 60 Hz frame rate equates to a ground image resolution of approximately one square meter per pixel. Thus, every frame captured from the CCD contains the spectral data for a ground swath that is approximately one meter long and 500 meters wide.
High-resolution imager
A high-resolution imaging (HRI) black-and-white, or panchromatic, camera is mounted adjacent to the HSI camera to enable both cameras to capture the same reflected light. The HRI camera uses a pushbroom approach just like the HSI camera with a similar lens and slit arrangement to limit the incoming light to a thin, wide beam. However, the HRI camera does not have a diffraction grating to disperse the incoming reflected light. Instead, the light is directed to a wider CCD to capture more image data. Because it captures a single line of the ground image per frame, it is called a line scan camera. The HRI CCD is 6,144 pixels wide and one pixel high. It operates at a frame rate of 720 Hz. At ARCHER search speed and altitude (100 knots over the ground at 2,500 ft AGL) each pixel in the black-and-white image represents a 3 inch by 3 inch area of the ground. This high resolution adds the capability to identify some objects.
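The along-track pixel sizes quoted above follow directly from ground speed divided by frame rate. A minimal Python sketch of that arithmetic (the knot-to-metre conversion factor is the only added constant):

# Along-track ground sample distance = ground speed / frame rate.
KNOT_TO_MPS = 0.5144                       # metres per second in one knot

ground_speed_mps = 100 * KNOT_TO_MPS       # about 51.4 m/s at the 100-knot search speed

hsi_frame_rate_hz = 60                     # hyperspectral camera
hri_frame_rate_hz = 720                    # panchromatic line-scan camera

hsi_gsd_m = ground_speed_mps / hsi_frame_rate_hz   # about 0.86 m, i.e. roughly 1 m per pixel
hri_gsd_m = ground_speed_mps / hri_frame_rate_hz   # about 0.07 m, i.e. roughly 3 in per pixel

print(f"HSI along-track pixel: {hsi_gsd_m:.2f} m")
print(f"HRI along-track pixel: {hri_gsd_m * 39.37:.1f} in")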
Processing
A monitor in the cockpit displays detailed images in real time, and the system also logs the image and Global Positioning System data at a rate of 30 gigabytes (GB) per hour for later analysis. The on-board data processing system performs numerous real-time processing functions including data acquisition and recording, raw data correction, target detection, cueing and chipping, precision image geo-registration, and display and dissemination of image products and target cue information.
ARCHER has three methods for locating targets:
signature matching where reflected light is matched to spectral signatures
anomaly detection using a statistical model of the pixels in the image to determine the probability that a pixel does not match the profile, and
change detection which executes a pixel-by-pixel comparison of the current image against ground conditions that were obtained in a previous mission over the same area.
In change detection, scene changes are identified, and new, moved or departed targets are highlighted for evaluation. In spectral signature matching, the system can be programmed with the parameters of a missing aircraft, such as paint colors, to alert the operators of possible wreckage. It can also be used to look for specific materials, such as petroleum products or other chemicals released into the environment, or even ordinary items like commonly available blue polyethylene tarpaulins. In an impact assessment role, information on the location of blue tarps used to temporarily repair buildings damaged in a storm can help direct disaster relief efforts; in a counterdrug role, a blue tarp located in a remote area could be associated with illegal activity.
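As an illustration only, and not the ARCHER software itself, the following Python sketch shows two of the underlying ideas on synthetic data: signature matching scored by spectral angle, and anomaly detection scored by a Mahalanobis-style distance from the scene statistics. The band count, scene size and target spectrum are arbitrary choices.

import numpy as np

def spectral_angle(pixel, signature):
    """Angle (radians) between a pixel spectrum and a reference signature;
    small angles indicate a likely signature match."""
    cos = np.dot(pixel, signature) / (np.linalg.norm(pixel) * np.linalg.norm(signature))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def anomaly_scores(image):
    """Mahalanobis-style distance of each pixel spectrum from the scene mean;
    large scores flag pixels that do not fit the background statistics."""
    pixels = image.reshape(-1, image.shape[-1])
    mean = pixels.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(pixels, rowvar=False))
    diff = pixels - mean
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

# Toy scene: 100 x 100 pixels, 40 usable bands, plus one planted "target" pixel.
rng = np.random.default_rng(0)
scene = rng.normal(1.0, 0.05, size=(100, 100, 40))
target_signature = np.linspace(0.2, 2.0, 40)        # hypothetical paint spectrum
scene[50, 50] = target_signature

scores = anomaly_scores(scene).reshape(100, 100)
print("most anomalous pixel:", np.unravel_index(scores.argmax(), scores.shape))
print("spectral angle at that pixel:",
      round(spectral_angle(scene[50, 50], target_signature), 3))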
References
External links
NovaSol Corp
Space Computer Corporation
Civil Air Patrol
Spectroscopy
Earth observation remote sensors
|
Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance
|
[
"Physics",
"Chemistry"
] | 2,169
|
[
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
13,161,364
|
https://en.wikipedia.org/wiki/Thiocyanogen
|
Thiocyanogen, (SCN)2, is a pseudohalogen derived from the pseudohalide thiocyanate, [SCN]−, with behavior intermediate between dibromine and diiodine. This hexatomic compound exhibits C2 point group symmetry and has the connectivity NCS-SCN.
In the lungs, lactoperoxidase may oxidize thiocyanate to thiocyanogen or hypothiocyanite.
History
Berzelius first proposed that thiocyanogen ought to exist as part of his radical theory, but the compound's isolation proved problematic. Liebig pursued a wide variety of synthetic routes for the better part of a century, but, even with Wöhler's assistance, only succeeded in producing a complex mixture with the elemental proportions of thiocyanic acid. In 1861, Linnemann generated appreciable quantities of thiocyanogen from a silver thiocyanate suspension in diethyl ether and excess iodine, but misidentified the minor product as sulfur iodide cyanide (ISCN). Indeed, that reaction suffers from competing equilibria attributed to the weak oxidizing power of iodine; the major product is sulfur dicyanide. The following year, Schneider produced thiocyanogen from silver thiocyanate and disulfur dichloride, but the product disproportionated to sulfur and trisulfur dicyanides.
The subject then lay fallow until the 1910s, when Niels Bjerrum began investigating gold thiocyanate complexes. Some eliminated reductively and reversibly, whereas others appeared to irreversibly generate cyanide and sulfate salt solutions. Understanding the process required reanalyzing the decomposition of thiocyanogen using the then-new techniques of physical chemistry. Bjerrum's work revealed that water catalyzed thiocyanogen's decomposition via hypothiocyanous acid. Moreover, the oxidation potential of thiocyanogen appeared to be 0.769 V, slightly greater than that of iodine but less than that of bromine. In 1919, Söderbäck successfully isolated stable thiocyanogen from oxidation of plumbous thiocyanate with bromine.
Preparation
Modern syntheses typically differ little from Söderbäck's process. Thiocyanogen synthesis begins when aqueous solutions of lead(II) nitrate and sodium thiocyanate, combined, precipitate plumbous thiocyanate. Treating an anhydrous Pb(SCN)2 suspension in glacial acetic acid with bromine then affords a 0.1M solution of thiocyanogen that is stable for days. Alternatively, a solution of bromine in methylene chloride is added to a suspension of Pb(SCN)2 in methylene chloride at 0 °C.
Pb(SCN)2 + Br2 → (SCN)2 + PbBr2
In either case, the oxidation is exothermic.
An alternative technique is the thermal decomposition of cupric thiocyanate at 35–80 °C:
2Cu(SCN)2 → 2 CuSCN + (SCN)2
Reactions
In general, thiocyanogen is stored in solution, as the pure compound polymerizes explosively above 20 °C to a red-orange polymer. In water, however, the sulfur atoms disproportionate:
3(SCN)2 + 4H2O → H2SO4 + HCN + 5HSCN
Thiocyanogen is a weak electrophile, attacking only highly activated (phenolic or anilinic) or polycyclic arenes. It attacks carbonyls at the α position. Heteroatoms are attacked more easily, and the compound thiocyanates sulfur, nitrogen, and various poor metals. Thiocyanogen solutions in nonpolar solvents react almost completely with chlorine to give chlorine thiocyanate, but the corresponding bromine thiocyanate is unstable above −50 °C, forming polymeric thiocyanogen and bromine.
The compound adds trans to alkenes to give 1,2-bis(thiocyanato) compounds; the intermediate thiiranium ion can be trapped with many nucleophiles. Radical polymerization is the most likely side-reaction, and yields improve when cold and dark. However, the addition reaction is slow, and light may be necessary to accelerate the process. Titanacyclopentadienes give (Z,Z)-1,4-bis(thiocyanato)-1,3-butadienes, which in turn can be converted to 1,2-dithiins. Thiocyanogen only adds once to alkynes; the resulting dithioacyloin dicyanate is not particularly olefinic.
Selenocyanogen, (SeCN)2, prepared from reaction of silver selenocyanate with iodine in tetrahydrofuran at 0 °C, reacts in a similar manner to thiocyanogen.
Applications
Thiocyanogen has been used to estimate the degree of unsaturation in fatty acids, similar to the iodine value.
References
Inorganic carbon compounds
Inorganic sulfur compounds
Inorganic nitrogen compounds
Thiocyanates
Pseudohalogens
|
Thiocyanogen
|
[
"Chemistry"
] | 1,143
|
[
"Pseudohalogens",
"Inorganic compounds",
"Functional groups",
"Inorganic sulfur compounds",
"Inorganic nitrogen compounds",
"Inorganic carbon compounds",
"Thiocyanates"
] |
13,162,950
|
https://en.wikipedia.org/wiki/Beta%20Disk%20Interface
|
Beta Disk Interface is a disk interface for ZX Spectrum computers, developed by Technology Research Ltd. (United Kingdom) in 1984 and released in 1985, with a price of £109.25 (or £249.75 with one disk drive).
Beta 128 Disk Interface is a 1987 version, supporting ZX Spectrum 128 machines (due to different access point addresses).
Beta Disk Interfaces were distributed with the TR-DOS operating system in ROM, also attributed to Technology Research Ltd. The interface was based on the WD1793 chip. The latest firmware version is 5.03 (1986).
The Beta Disk Interface handles single- and double-sided, 40- or 80-track double-density floppy disks, and up to four drives.
Clones
This interface was popular for its simplicity, and the Beta 128 Disk Interface was cloned all around the USSR. The first known USSR clones were ones produced by НПВО "Вариант" (NPVO "Variant", Leningrad) in 1989.
Beta 128 schematics are included in various Soviet/Russian ZX Spectrum clones, but some variants only support two drives. Phase correction of the drive data signal is also implemented differently.
Between 2018 and 2021, Beta Disk clones were produced in the Czech Republic, under names such as Beta Disk 128C, 128X and 128 mini.
Operating systems support
TR-DOS
iS-DOS
CP/M (various hack versions)
DNA OS
See also
DISCiPLE
References
External links
Virtual TR-DOS
ZX Spectrum
Computer storage devices
|
Beta Disk Interface
|
[
"Technology"
] | 313
|
[
"Computer storage devices",
"Recording devices"
] |
13,163,358
|
https://en.wikipedia.org/wiki/Whole%20number%20rule
|
In chemistry, the whole number rule states that the masses of the isotopes are whole number multiples of the mass of the hydrogen atom. The rule is a modified version of Prout's hypothesis proposed in 1815, to the effect that atomic weights are multiples of the weight of the hydrogen atom. It is also known as the Aston whole number rule after Francis W. Aston who was awarded the Nobel Prize in Chemistry in 1922 "for his discovery, by means of his mass spectrograph, of isotopes, in a large number of non-radioactive elements, and for his enunciation of the whole-number rule."
Law of definite proportions
The law of definite proportions was formulated by Joseph Proust around 1800 and states that all samples of a chemical compound will have the same elemental composition by mass. The atomic theory of John Dalton expanded this concept and explained matter as consisting of discrete atoms with one kind of atom for each element combined in fixed proportions to form compounds.
Prout's hypothesis
In 1815, William Prout reported on his observation that the atomic weights of the elements were whole multiples of the atomic weight of hydrogen. He then hypothesized that the hydrogen atom was the fundamental object and that the other elements were a combination of different numbers of hydrogen atoms.
Aston's discovery of isotopes
In 1920, Francis W. Aston demonstrated through the use of a mass spectrometer that apparent deviations from Prout's hypothesis are predominantly due to the existence of isotopes. For example, Aston discovered that neon has two isotopes with masses very close to 20 and 22 as per the whole number rule, and proposed that the non-integer value 20.2 for the atomic weight of neon is due to the fact that natural neon is a mixture of about 90% neon-20 and 10% neon-22. A secondary cause of deviations is the binding energy or mass defect of the individual isotopes.
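The neon example amounts to an abundance-weighted average; a short Python check using the approximate 90/10 split quoted above:

# Atomic weight of natural neon as an abundance-weighted mean of its two isotopes.
abundances = {20: 0.90, 22: 0.10}      # approximate isotope masses and abundances quoted above
atomic_weight = sum(mass * fraction for mass, fraction in abundances.items())
print(atomic_weight)                   # 20.2, the non-integer value Aston explained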
Discovery of the neutron
During the 1920s, it was thought that the atomic nucleus was made of protons and electrons, which would account for the disparity between the atomic number of an atom and its atomic mass. In 1932, James Chadwick discovered an uncharged particle of approximately the same mass as the proton, which he called the neutron. The fact that the atomic nucleus is composed of protons and neutrons was rapidly accepted and Chadwick was awarded the Nobel Prize in Physics in 1935 for his discovery.
The modern form of the whole number rule is that the atomic mass of a given elemental isotope is approximately the mass number (number of protons plus neutrons) times an atomic mass unit (approximate mass of a proton, neutron, or hydrogen-1 atom). This rule predicts the atomic mass of nuclides and isotopes with an error of at most 1%, with most of the error explained by the mass deficit caused by nuclear binding energy.
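A short Python illustration of the modern statement, using a few isotope masses (in atomic mass units, rounded from standard tables) that are assumed here for the example:

# Relative deviation of isotope masses from the whole number rule: |mass - mass number| / mass.
isotope_masses_u = {"H-1": 1.00783, "He-4": 4.00260, "Fe-56": 55.93494, "U-238": 238.05079}

for name, mass_u in isotope_masses_u.items():
    mass_number = int(name.split("-")[1])
    deviation_pct = abs(mass_u - mass_number) / mass_u * 100
    print(f"{name}: {deviation_pct:.2f}% deviation")    # each result is below 1%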
References
Further reading
External links
1922 Nobel Prize Presentation Speech
Mass spectrometry
Periodic table
|
Whole number rule
|
[
"Physics",
"Chemistry"
] | 602
|
[
"Periodic table",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
13,163,733
|
https://en.wikipedia.org/wiki/Stenotherm
|
A stenotherm (from Greek στενός stenos "narrow" and θέρμη therme "heat") is a species or living organism capable of surviving only within a narrow temperature range. This specialization is often found in organisms that inhabit relatively stable environments, such as the deep sea or polar regions.
The opposite of a stenotherm is a eurytherm, an organism that can function across a wide range of body temperatures. Eurythermic organisms are typically found in environments with significant temperature variations, such as temperate or tropical regions.
The size, shape, and composition of an organism's body can influence its temperature regulation, with larger organisms generally maintaining a more stable internal temperature than smaller ones.
Examples
Chionoecetes opilio is a stenothermic organism, and temperature significantly affects its biology throughout its life history, from embryo to adult. Small changes in temperature (< 2 °C) can increase the duration of egg incubation for C. opilio by a full year.
See also
Ecotope
References
Ecology
|
Stenotherm
|
[
"Biology"
] | 224
|
[
"Ecology"
] |
13,164,797
|
https://en.wikipedia.org/wiki/Live%20bottom%20trailer
|
A live bottom trailer is a semi-trailer used for hauling loose material such as asphalt, grain, potatoes, sand and gravel. A live bottom trailer is the alternative to a dump truck or an end dump trailer. The typical live bottom trailer has a conveyor belt on the bottom of the trailer tub that pushes the material out of the back of the trailer at a controlled pace. Unlike the conventional dump truck, the tub does not have to be raised to deposit the materials.
Operation
The live bottom trailer is powered by a hydraulic system. When the operator engages the truck hydraulic system, it activates the conveyor belt, moving the load horizontally out of the back trailer.
Uses
Live bottom trailers can haul a variety of products including gravel, potatoes, top soil, grain, carrots, sand, lime, peat moss, asphalt, compost, rip-rap, heavy rocks, biowaste, etc.
Those who work in industries such as agriculture and construction benefit from the speed of unloading and the versatility of the trailer and chassis mount.
Safety
The live bottom trailer eliminates trailer rollover because the tub does not have to be raised in the air to unload the materials. The trailer's lower centre of gravity makes it easy to unload on uneven ground, unlike dump trailers, which have to be on level ground to unload.
Overhead electrical wires are a danger for the conventional dump trailer during unloading, but with a live bottom, wires are not a problem. The trailer can work anywhere that it can drive into because the tub does not have to be raised for unloading. In addition, the truck cannot be accidentally driven with the trailer raised, which has been a cause of a number of accidents, often involving collision with bridges, overpasses, or overhead/suspended traffic signs/lights.
Advantages
The tub empties clean, making it easier for different materials to be transported without having to get inside the tub to clean it out. The conveyor belt allows the material to be dumped at a controlled pace so that the material can be partially unloaded where it is needed.
The rounded tub results in a lower centre of gravity which means a smoother ride and better handling than other trailers. Working under bridges and in confined areas is easier with a live bottom as opposed to a dump trailer because it can fit anywhere it can drive.
Wet or dry materials can be hauled in a live bottom trailer.
In a dump truck, wet materials can stick in the top of the tub during unloading and cause trailer rollover. Insurance costs are lower for a live bottom trailer because it does not have to be raised in the air and there are few cases of trailer rollover.
Disadvantages
Some live bottom trailers are not well suited for heavy rock and demolition. However, rip-rap, heavy rock, and asphalt can be hauled if the trailer is built with steels of the appropriate strength.
See also
Moving floor, a hydraulically driven conveyance system also used in semi-trailers
External links
Engineering vehicles
|
Live bottom trailer
|
[
"Engineering"
] | 606
|
[
"Engineering vehicles"
] |
13,165,796
|
https://en.wikipedia.org/wiki/Ocean%20heat%20content
|
Ocean heat content (OHC) or ocean heat uptake (OHU) is the energy absorbed and stored by oceans. To calculate the ocean heat content, it is necessary to measure ocean temperature at many different locations and depths. Integrating the areal density of a change in enthalpic energy over an ocean basin or entire ocean gives the total ocean heat uptake. Between 1971 and 2018, the rise in ocean heat content accounted for over 90% of Earth's excess energy from global heating. The main driver of this increase was human activity, via rising greenhouse gas emissions. By 2020, about one third of the added energy had propagated to depths below 700 meters.
In 2023, the world's oceans were again the hottest in the historical record and exceeded the previous 2022 record maximum. The five highest ocean heat observations to a depth of 2000 meters occurred in the period 2019–2023. The North Pacific, North Atlantic, the Mediterranean, and the Southern Ocean all recorded their highest heat observations for more than sixty years of global measurements. Ocean heat content and sea level rise are important indicators of climate change.
Ocean water can absorb a lot of solar energy because water has far greater heat capacity than atmospheric gases. As a result, the top few meters of the ocean contain more energy than the entire Earth's atmosphere. Since before 1960, research vessels and stations have sampled sea surface temperatures and temperatures at greater depth all over the world. Since 2000, an expanding network of nearly 4000 Argo robotic floats has measured temperature anomalies, or the change in ocean heat content. With improving observation in recent decades, the heat content of the upper ocean has been found to have increased at an accelerating rate. The net gain in the top 2000 meters from 2003 to 2018 corresponded to an annual mean energy uptake of 9.3 zettajoules. It is difficult to measure temperatures accurately over long periods while at the same time covering enough areas and depths, which explains the uncertainty in the figures.
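For comparison with the watts-per-square-metre framing often used for Earth's energy imbalance, the quoted annual gain can be converted to an average flux; a rough Python sketch, where Earth's total surface area of about 5.1 × 10^14 m² is the only added assumption:

# Convert an annual ocean heat gain of 9.3 zettajoules into an average flux over Earth's surface.
heat_gain_joules_per_year = 9.3e21          # 9.3 ZJ, the figure quoted above
earth_surface_m2 = 5.1e14                   # assumed total surface area of Earth
seconds_per_year = 365.25 * 24 * 3600

flux_w_per_m2 = heat_gain_joules_per_year / (earth_surface_m2 * seconds_per_year)
print(f"{flux_w_per_m2:.2f} W/m^2")         # roughly 0.6 W/m^2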
Changes in ocean temperature greatly affect ecosystems in oceans and on land. For example, there are multiple impacts on coastal ecosystems and communities relying on their ecosystem services. Direct effects include variations in sea level and sea ice, changes to the intensity of the water cycle, and the migration of marine life.
Calculations
Definition
Ocean heat content is a term used in physical oceanography to describe a type of thermodynamic potential energy that is stored in the ocean. It is defined in coordination with the equation of state of seawater. TEOS-10 is an international standard approved in 2010 by the Intergovernmental Oceanographic Commission.
Calculation of ocean heat content follows that of enthalpy referenced to the ocean surface, also called potential enthalpy. OHC changes are thus made more readily comparable to seawater heat exchanges with ice, freshwater, and humid air. OHC is always reported as a change or as an "anomaly" relative to a baseline. Positive values then also quantify ocean heat uptake (OHU) and are useful to diagnose where most of planetary energy gains from global heating are going.
To calculate the ocean heat content, measurements of ocean temperature from sample parcels of seawater gathered at many different locations and depths are required. Integrating the areal density of ocean heat over an ocean basin, or entire ocean, gives the total ocean heat content. Thus, total ocean heat content is a volume integral of the product of temperature, density, and heat capacity over the three-dimensional region of the ocean for which data is available. The bulk of measurements have been performed at depths shallower than about 2000 m (1.25 miles).
The areal density of ocean heat content between two depths is computed as a definite integral:

H = c_p \int_{h_2}^{h_1} \rho(z)\, \Theta(z)\, dz

where c_p is the specific heat capacity of sea water, h2 is the lower depth, h1 is the upper depth, ρ(z) is the in-situ seawater density profile, and Θ(z) is the conservative temperature profile. c_p is defined at a single depth h0, usually chosen as the ocean surface. In SI units, H has units of joules per square metre (J·m−2).
In practice, the integral can be approximated by summation using a smooth and otherwise well-behaved sequence of in-situ data, including temperature (t), pressure (p), salinity (s) and their corresponding density (ρ). The conservative temperature values are translated relative to the reference pressure (p0) at h0. A substitute known as potential temperature has been used in earlier calculations.
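A minimal Python sketch of such a summation, using an illustrative discrete profile, a constant specific heat and the trapezoidal rule rather than the full TEOS-10 procedure:

import numpy as np

# Areal density of ocean heat content, H = c_p * integral of rho(z) * Theta(z) dz,
# approximated by trapezoidal summation over a discrete profile (illustrative values only;
# in practice OHC is reported as an anomaly relative to a baseline).
depth_m = np.array([0.0, 100.0, 300.0, 700.0, 1500.0, 2000.0])
conservative_temp_c = np.array([20.0, 15.0, 10.0, 6.0, 3.5, 2.5])              # Theta(z)
in_situ_density = np.array([1025.0, 1026.0, 1027.0, 1028.0, 1030.0, 1031.0])   # rho(z), kg/m^3
c_p = 3991.0                                                                   # J/(kg K), treated as constant

integrand = in_situ_density * conservative_temp_c
areal_heat_j_per_m2 = c_p * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(depth_m))
print(f"{areal_heat_j_per_m2:.3e} J/m^2")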
Measurements of temperature versus ocean depth generally show an upper mixed layer (0–200 m), a thermocline (200–1500 m), and a deep ocean layer (>1500 m). These boundary depths are only rough approximations. Sunlight penetrates to a maximum depth of about 200 m; the top 80 m of which is the habitable zone for photosynthetic marine life covering over 70% of Earth's surface. Wave action and other surface turbulence help to equalize temperatures throughout the upper layer.
Unlike surface temperatures which decrease with latitude, deep-ocean temperatures are relatively cold and uniform in most regions of the world. About 50% of all ocean volume is at depths below 3000 m (1.85 miles), with the Pacific Ocean being the largest and deepest of five oceanic divisions. The thermocline is the transition between upper and deep layers in terms of temperature, nutrient flows, abundance of life, and other properties. It is semi-permanent in the tropics, variable in temperate regions (often deepest during the summer), and shallow to nonexistent in polar regions.
Measurements
Ocean heat content measurements come with difficulties, especially before the deployment of the Argo profiling floats. Due to poor spatial coverage and poor quality of data, it has not always been easy to distinguish between long term global warming trends and climate variability. Examples of these complicating factors are the variations caused by El Niño–Southern Oscillation or changes in ocean heat content caused by major volcanic eruptions.
Argo is an international program of robotic profiling floats deployed globally since the start of the 21st century. The program's initial 3000 units had expanded to nearly 4000 units by year 2020. At the start of each 10-day measurement cycle, a float descends to a depth of 1000 meters and drifts with the current there for nine days. It then descends to 2000 meters and measures temperature, salinity (conductivity), and depth (pressure) over a final day of ascent to the surface. At the surface the float transmits the depth profile and horizontal position data through satellite relays before repeating the cycle.
Starting 1992, the TOPEX/Poseidon and subsequent Jason satellite series altimeters have observed vertically integrated OHC, which is a major component of sea level rise. Since 2002, GRACE and GRACE-FO have remotely monitored ocean changes using gravimetry. The partnership between Argo and satellite measurements has thereby yielded ongoing improvements to estimates of OHC and other global ocean properties.
Causes for heat uptake
Ocean heat uptake accounts for over 90% of total planetary heat uptake, mainly as a consequence of human-caused changes to the composition of Earth's atmosphere. This high percentage arises because waters at and below the ocean surface, especially the turbulent upper mixed layer, exhibit a thermal inertia much larger than that of the planet's exposed continental crust, ice-covered polar regions, or atmospheric components themselves. A body with large thermal inertia stores a large amount of energy because of its heat capacity, and effectively transmits energy according to its heat transfer coefficient. Most extra energy that enters the planet via the atmosphere is thereby taken up and retained by the ocean.
Planetary heat uptake or heat content accounts for the entire energy added to or removed from the climate system. It can be computed as an accumulation over time of the observed differences (or imbalances) between total incoming and outgoing radiation.
Changes to the imbalance have been estimated from Earth orbit by CERES and other remote instruments, and compared against in-situ surveys of heat inventory changes in oceans, land, ice and the atmosphere. Achieving complete and accurate results from either accounting method is challenging, but in different ways that are viewed by researchers as being mostly independent of each other. Increases in planetary heat content for the well-observed 2005–2019 period are thought to exceed measurement uncertainties.
From the ocean perspective, the more abundant equatorial solar irradiance is directly absorbed by Earth's tropical surface waters and drives the overall poleward propagation of heat. The surface also exchanges energy that has been absorbed by the lower troposphere through wind and wave action. Over time, a sustained imbalance in Earth's energy budget enables a net flow of heat either into or out of greater ocean depth via thermal conduction, downwelling, and upwelling. Releases of OHC to the atmosphere occur primarily via evaporation and enable the planetary water cycle. Concentrated releases in association with high sea surface temperatures help drive tropical cyclones, atmospheric rivers, atmospheric heat waves and other extreme weather events that can penetrate far inland. Altogether these processes enable the ocean to be Earth's largest thermal reservoir which functions to regulate the planet's climate; acting as both a sink and a source of energy.
From the perspective of land and ice-covered regions, their portion of heat uptake is reduced and delayed by the dominant thermal inertia of the ocean. Although the average rise in land surface temperature has exceeded that of the ocean surface, due to the lower inertia (smaller heat-transfer coefficient) of solid land and ice, temperatures would rise more rapidly and by a greater amount without the full ocean. Measurements of how rapidly heat mixes into the deep ocean are also underway to better close the ocean and planetary energy budgets.
Recent observations and changes
Numerous independent studies in recent years have found a multi-decadal rise in OHC of upper ocean regions that has begun to penetrate to deeper regions. The upper ocean (0–700 m) has warmed since 1971, while it is very likely that warming has occurred at intermediate depths (700–2000 m) and likely that deep ocean (below 2000 m) temperatures have increased. The heat uptake results from a persistent warming imbalance in Earth's energy budget that is most fundamentally caused by the anthropogenic increase in atmospheric greenhouse gases. There is very high confidence that increased ocean heat content in response to anthropogenic carbon dioxide emissions is essentially irreversible on human time scales.
Studies based on Argo measurements indicate that ocean surface winds, especially the subtropical trade winds in the Pacific Ocean, change the vertical distribution of ocean heat. This results in changes among ocean currents and an increase of the subtropical overturning, which is also related to the El Niño and La Niña phenomena. Depending on stochastic fluctuations of natural variability, around 30% more heat from the upper ocean layer is transported into the deeper ocean during La Niña years. Furthermore, studies have shown that approximately one-third of the observed warming in the ocean is taking place in the 700–2000 meter ocean layer.
Model studies indicate that ocean currents transport more heat into deeper layers during La Niña years, following changes in wind circulation. Years with increased ocean heat uptake have been associated with negative phases of the interdecadal Pacific oscillation (IPO). This is of particular interest to climate scientists who use the data to estimate the ocean heat uptake.
The upper ocean heat content in most North Atlantic regions is dominated by heat transport convergence (a location where ocean currents meet), without large changes to the temperature and salinity relation. Additionally, a 2022 study on anthropogenic warming in the ocean indicates that 62% of the warming from 1850 to 2018 in the North Atlantic along 25°N is kept in the water below 700 m, where a major share of the ocean's surplus heat is stored.
A study in 2015 concluded that ocean heat content increases by the Pacific Ocean were compensated by an abrupt distribution of OHC into the Indian Ocean.
Although the upper 2000 m of the oceans have experienced warming on average since the 1970s, the rate of ocean warming varies regionally, with the subpolar North Atlantic warming more slowly and the Southern Ocean taking up a disproportionately large amount of heat due to anthropogenic greenhouse gas emissions.
Deep-ocean warming below 2000 m has been largest in the Southern Ocean compared to other ocean basins.
Impacts
Warming oceans are one reason for coral bleaching and contribute to the migration of marine species. Marine heat waves are regions of life-threatening and persistently elevated water temperatures. Redistribution of the planet's internal energy by atmospheric circulation and ocean currents produces internal climate variability, often in the form of irregular oscillations, and helps to sustain the global thermohaline circulation.
The increase in OHC accounts for 30–40% of global sea-level rise from 1900 to 2020 because of thermal expansion.
It is also an accelerator of sea ice, iceberg, and tidewater glacier melting. The ice loss reduces polar albedo, amplifying both the regional and global energy imbalances.
The resulting ice retreat has been rapid and widespread for Arctic sea ice, and within northern fjords such as those of Greenland and Canada.
Impacts to Antarctic sea ice and the vast Antarctic ice shelves which terminate into the Southern Ocean have varied by region and are also increasing due to warming waters. Breakup of the Thwaites Ice Shelf and its West Antarctica neighbors contributed about 10% of sea-level rise in 2020.
The ocean also functions as a sink and source of carbon, with a role comparable to that of land regions in Earth's carbon cycle. In accordance with the temperature dependence of Henry's law, warming surface waters are less able to absorb atmospheric gases including oxygen and the growing emissions of carbon dioxide and other greenhouse gases from human activity. Nevertheless, the rate at which the ocean absorbs anthropogenic carbon dioxide has approximately tripled from the early 1960s to the late 2010s, a scaling proportional to the increase in atmospheric carbon dioxide.
Warming of the deep ocean has the further potential to melt and release some of the vast store of frozen methane hydrate deposits that have naturally accumulated there.
See also
References
External links
NOAA Global Ocean Heat and Salt Content
Meteorological concepts
Climate change
Climatology
Earth
Earth sciences
Environmental science
Oceanography
Articles containing video clips
|
Ocean heat content
|
[
"Physics",
"Environmental_science"
] | 2,925
|
[
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics",
"nan"
] |
13,165,926
|
https://en.wikipedia.org/wiki/ControlNet
|
ControlNet is an open industrial network protocol for industrial automation applications, also known as a fieldbus. ControlNet was earlier supported by ControlNet International, but in 2008 support and management of ControlNet was transferred to ODVA, which now manages all protocols in the Common Industrial Protocol family.
Features which set ControlNet apart from other fieldbuses include the built-in support for fully redundant cables and the fact that communication on ControlNet can be strictly scheduled and highly deterministic. Due to the unique physical layer, common network sniffers such as Wireshark cannot be used to sniff ControlNet packets. Rockwell Automation provides ControlNet Traffic Analyzer software to sniff and analyze ControlNet packets.
Version 1, 1.25 and 1.5
Versions 1 and 1.25 were released in quick succession when ControlNet first launched in 1997. Version 1.5 was released in 1998, and hardware produced for one version variant was typically not compatible with the others. Most installations of ControlNet are version 1.5.
Architecture
Physical layer
ControlNet cables consist of RG-6 coaxial cable with BNC connectors, though optical fiber is sometimes used for long distances.
The network topology is a bus structure with short taps. ControlNet also supports a star topology if used with the appropriate hardware.
ControlNet can operate with a single RG-6 coaxial cable bus, or a dual RG-6 coaxial cable bus for cable redundancy. In all cases, the RG-6 should be of quad-shield variety.
The maximum cable length without repeaters is 1000 m and the maximum number of nodes on the bus is 99. However, there is a tradeoff between the number of devices on the bus and the total cable length. Repeaters can be used to further extend the cable length. The network can support up to 5 repeaters (10 when used for redundant networks). Repeaters do not use network node numbers and are available in copper or fiber-optic versions.
The physical layer signaling uses Manchester code at 5 Mbit/s.
Link layer
ControlNet is a scheduled communication network designed for cyclic data exchange. The protocol operates in cycles, known as NUIs, where NUI stands for Network Update Interval.
Each NUI has three phases, the first phase is dedicated to scheduled traffic, where all nodes with scheduled data are guaranteed a transmission opportunity.
The second phase is dedicated to unscheduled traffic. There is no guarantee that every node will get an opportunity to transmit in every unscheduled phase.
The third phase is network maintenance, or "guardband". It includes synchronization and a means of determining the starting node for the next unscheduled data transfer.
Both the scheduled and unscheduled phase use an implicit token ring media access method.
The duration of each NUI is known as the NUT, where NUT stands for Network Update Time. It is configurable from 2 to 100 ms; the default NUT on an unscheduled network is 5 ms.
The maximum size of a scheduled or unscheduled ControlNet data frame is 510 Bytes.
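A rough Python sketch of the cycle arithmetic implied by these figures, ignoring protocol overhead, the guardband and token-passing details (so the frame count is an upper bound only):

# Rough ControlNet cycle arithmetic from the figures above (protocol overhead ignored).
bit_rate_bps = 5_000_000         # 5 Mbit/s physical layer
nut_s = 0.005                    # default 5 ms Network Update Time
max_frame_bytes = 510            # maximum data frame size

cycles_per_second = 1 / nut_s                                          # scheduled opportunities per second
raw_bits_per_nut = bit_rate_bps * nut_s                                # raw channel capacity per cycle
max_full_frames_per_nut = raw_bits_per_nut // (max_frame_bytes * 8)    # upper bound only

print(f"{cycles_per_second:.0f} cycles per second")
print(f"{raw_bits_per_nut:.0f} raw bits per NUT, at most {max_full_frames_per_nut:.0f} full-size frames")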
Application layer
The ControlNet application layer protocol is based on the Common Industrial Protocol (CIP) layer which is also used in DeviceNet and EtherNet/IP.
References
External links
ODVA website
ControlNet Networks and Communications from Allen-Bradley
Serial buses
Network protocols
Industrial automation
|
ControlNet
|
[
"Technology",
"Engineering"
] | 683
|
[
"Computer network stubs",
"Automation",
"Industrial engineering",
"Computing stubs",
"Industrial automation"
] |
13,167,602
|
https://en.wikipedia.org/wiki/Submersion%20%28coastal%20management%29
|
Submersion is the sustainable cyclic portion of coastal erosion where coastal sediments move from the visible portion of a beach to the submerged nearshore region, and later return to the original visible portion of the beach. The recovery portion of the sustainable cycle of sediment behaviour is named accretion.
Submersion vs erosion
The sediment that is submerged during rough weather forms landforms including storm bars. In calmer weather, waves return sediment to the visible part of the beach. Due to longshore drift, some sediment can end up further along the beach from where it started. Many coastal areas have reached sustainable positions in which sediment moving off the beach is part of this submersion cycle. On many inhabited coastlines, however, anthropogenic interference in coastal processes has meant that erosion is often more permanent than submersion.
Community perception
The term erosion is often associated with undesirable impacts on the environment, whereas submersion is a sustainable part of a healthy foreshore. Communities making decisions about coastal management need to understand the components of beach recession and be able to separate temporary, sustainable submersion from the more serious irreversible erosion driven by human interference or climate change.
References
Coastal geography
Geological processes
Physical oceanography
|
Submersion (coastal management)
|
[
"Physics"
] | 248
|
[
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
13,167,630
|
https://en.wikipedia.org/wiki/Accretion%20%28coastal%20management%29
|
Accretion is the process of coastal sediment returning to the visible portion of a beach or foreshore after a submersion event. A sustainable beach or foreshore often goes through a cycle of submersion during rough weather and later accretion during calmer periods.
If a coastline is not in a healthy sustainable state, erosion can be more serious, and accretion does not fully restore the original volume of the visible beach or foreshore, which leads to permanent beach loss.
References
Coastal geography
Deposition (geology)
Physical oceanography
|
Accretion (coastal management)
|
[
"Physics"
] | 110
|
[
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
13,167,800
|
https://en.wikipedia.org/wiki/Central%20Plains%20Water
|
Central Plains Water, or, more fully, the Central Plains Water Enhancement Scheme, is a large-scale proposal for water diversion, damming, reticulation and irrigation for the Central Plains of Canterbury, New Zealand. Construction started on the scheme in 2014.
The original proposal involved diversion of water, the construction of a storage dam, tunnels and a series of canals and water races to supply water for irrigation to an area of 60,000 hectares on the Canterbury Plains. Water will be taken from the Rakaia and Waimakariri Rivers.
In June 2010, resource consents for the scheme were approved in a revised form without the storage dam. From 2010 to 2012, the resource consents were under appeal to the Environment Court. In July 2012, the resource consents for the scheme were finalised by the Environment Court.
The Central Plains Water Enhancement Scheme originated as a feasibility study jointly initiated and funded by Christchurch City Council and Selwyn District Council.
The Central Plains Water Enhancement Scheme is contentious. It is opposed by community, recreation and environment groups, some city and regional councillors, and some corporate dairying interests. The scheme is supported by Christchurch City Council and Selwyn District Council staff and some councillors, irrigation interests, consultants, farming interests, and more recently, some corporate dairying interests.
Scope
Canterbury Regional Council has summarised the scope of the Central Plains Water enhancement scheme as follows;
'The applicants propose to irrigate 60,000 hectares of land between the Rakaia and Waimakariri Rivers from the Malvern foothills to State Highway One. Water will be abstracted at a rate of up to 40 m3/s from two points on the Waimakariri River and one point on the Rakaia River. The water will be irrigated directly from the river and via a storage system. The proposal includes a 55-metre high storage dam within the Waianiwaniwa Valley and associated land use applications for works within watercourses.'
The proposed dam would be about 2 kilometres long, with a maximum height of 55 metres, with a base width of about 250 metres, and 10 m wide crest, with a capacity of 280 million cubic metres. The dam would be 1.5 kilometres north east of the town of Coalgate. The two rivers and the reservoir would be connected by a headrace canal, 53 kilometres long, 5 metres deep and 30 metres wide (40–50 metres including embankments). Water would be delivered to farmers via 460 kilometres of water races, ranging in width from 14 to 27 metres, including the embankments.
A brief history
In 1991, Christchurch City Council and the Selwyn District Council, in their annual planning process, agreed on a feasibility study on irrigation of the Central Plains. The two councils provided a budget and set up a joint steering committee. In 2000, the steering committee contracted the consulting firm URS New Zealand Limited to prepare a scoping report. In late 2001, the steering committee applied for resource consent to take 40 m3/s of water from the Rakaia River and the Waimakariri River. In January 2002, the steering committee released the feasibility study and sought to continue the project.
In 2003, the Central Plains Water Trust was set up to apply for resource consents, and the Trust established a company, Central Plains Water Limited, to raise funds from farmers via a share subscription. In 2004, Central Plains Water Limited issued a share prospectus and the share subscription was over-subscribed. In November 2005, further consent applications for land and water use were lodged with Canterbury Regional Council and Central Plains Water Limited became a 'requiring authority'. In June 2006, further consent applications for land use and a notice of requirement, the precursor to the use of the Public Works Act to compulsorily acquire land, were lodged with Selwyn District Council.
In July 2007, the trustees of Central Plains Water Trust informed Christchurch City Council that they had run out of money to fund the lawyers and consultants needed for the consent and notice of requirement hearings. Christchurch City Council gave approval for Central Plains Water Limited to borrow up to $4.8 million from corporate dairy farmer Dairy Holdings Limited. The hearing to decide the resource consent applications and submissions and the notice of requirement commenced on 25 February 2008.
In September 2012, Selwyn District Council approved a loan of $5 million to Central Plains Water Limited for the design stage.
Supporters
The Central Plains Water enhancement scheme has had a small but influential group of supporters, some of whom have been involved as steering committee members, trustees and company directors. The supporters have included development-minded council politicians, council staff with water engineering backgrounds, directors of council-owned companies, farmer representatives and consultants. The advancement of the scheme appears to have coincided with career moves and business interests of some of these supporters.
The initial membership of Central Plains Water Enhancement Steering Committee consisted of Councillor Pat Harrow (Christchurch City Council) and Councillors Christiansen and Wild (Selwyn District Council) and Doug Marsh, Jack Searle, John Donkers, Willie Palmer and Doug Catherwood. Christchurch City councillor Denis O'Rourke was soon added and Doug Marsh became chairperson.
Doug Marsh is now the Chairperson of the Central Plains Water Trust and a director of Central Plains Water Limited. He describes himself as a "Christchurch-based professional (company) director" and appears to specialise in council-owned companies. He is also the Chairman of the board of directors of the Selwyn Plantation Board Ltd, the Chairman of Plains Laminates Ltd, Chairman of the Canterbury A & P Board, Chairman of Southern Cross Engineering Holdings Ltd, a Director of City Care Ltd, a Director of Electricity Ashburton Ltd and a Director of Hindin Communications Ltd. Denis O'Rourke and Doug Catherwood, who were two of the original members of the steering committee, are now Trustees of the Central Plains Water Trust.
Allan Watson, who was the Christchurch City Council Water Services Manager in 1999, played a central role. Watson wrote most of the reports submitted to the Christchurch City Council strategy and resources committee between late 1999 and 2003, including the initial report that set up the Central Plains joint steering committee and the crucial report of February 2002 that recommended the scheme be considered feasible and that the role of the steering committee be continued.
Watson had previously been the Malvern County Engineer for 10 years. He now works for the consulting firm GHD and has publicly represented GHD as the project manager for the Central Plains Water Enhancement scheme.
In 2000, Walter Lewthwaite was one of the original Christchurch City Council employees supporting the joint Steering Committee. Lewthwaite had 30 years' experience in water engineering and 14 years' experience in managing irrigation projects. In November 2005, Lewthwaite was a Senior Environmental Engineer employed by URS New Zealand Limited, and the project manager and co-author of the application for resource consents lodged with Canterbury Regional Council. By June 2006, Lewthwaite was an Associate of URS New Zealand Limited. In September 2006, Lewthwaite also prepared information to support the applications to Selwyn District Council.
Opponents
The Central Plains Water Enhancement Scheme is opposed by farmers and by community, recreation and environment groups. Opponents include:
individual farmers such as Sheffield Valley farmer Marty Lucas, who will lose more than 30% of his property,
the Malvern Hills Protection Society, formerly the 'Dam Action Group',
the Water Rights Trust,
the New Zealand Recreational Canoeing Association,
the Christchurch-based White Water Canoe Club,
the Royal Forest and Bird Protection Society of New Zealand,
the Fish and Game Council of New Zealand, and
the Green Party of Aotearoa New Zealand.
Between 1,192 and 1,316 public submitters opposed the 64 notified consent applications lodged with Canterbury Regional Council, and between 153 and 172 submissions were in support. The range of numbers is presumably due to some submissions addressing only specific consent applications rather than all of the applications included in the proposal.
Costs
The estimated construction costs of the scheme have risen sharply: the cost per hectare irrigated has more than doubled since the 2002 'feasibility' study and has increased more than fivefold since the first scoping study.
In December 2000, the initial scoping study estimated the total cost of the scheme to be $NZ120 million or $1,190.48 per hectare irrigated.
By September 2001, the estimated scheme cost was $NZ201.7 million or $2,400 per hectare irrigated.
In February 2002, when Christchurch City Council and Selwyn District Council were presented with the feasibility study, the estimated scheme cost was $NZ235 million for 84,000 hectares or $2,798 per hectare irrigated.
At 1 April 2004, the estimated scheme cost was $NZ372 million for 60,000 hectares or $6,200 per hectare irrigated.
In January 2006, Central Plains Water Limited director John Donkers stated that the total cost was $NZ367 million for 60,000 hectares or $NZ6,117 per hectare.
In December 2007, the estimate of the total cost of the scheme appeared to be $6,826 per hectare irrigated.
On 19 February 2008, the evidence of Walter Lewthwaite, one of the principal engineering witnesses for Central Plains Water Trust, became available from the Canterbury Regional Council website. Lewthwaite states that in early 2007 he compiled and supplied an estimate of the total scheme cost to Mr Donnelly (the economist) and Mr MacFarlane (the farm management consultant) for their use in providing the economic analysis. The estimate was $NZ409.6 million for a scheme area of 60,000 hectares, or $6,826 per hectare irrigated.
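The per-hectare figures above follow directly from the quoted totals and irrigated areas. The short sketch below simply recomputes them; the 100,800-hectare area attributed to the December 2000 scoping study is back-calculated from the published $1,190.48 per hectare figure and is an assumption rather than a quoted value.

```python
# Recompute cost per irrigated hectare from the published estimates (NZ$).
estimates = [
    ("Dec 2000 scoping study", 120.0e6, 100_800),   # area back-calculated (assumption)
    ("Feb 2002 feasibility",   235.0e6,  84_000),
    ("Apr 2004",               372.0e6,  60_000),
    ("Early 2007 estimate",    409.6e6,  60_000),
]

for label, total_nzd, hectares in estimates:
    print(f"{label}: ${total_nzd / hectares:,.0f} per hectare")

first_total, first_area = estimates[0][1], estimates[0][2]
last_total, last_area = estimates[-1][1], estimates[-1][2]
print(f"Per-hectare escalation: {(last_total / last_area) / (first_total / first_area):.1f}x")
```

The recomputed values match the published figures to within rounding, and the final line gives the roughly 5.7-fold per-hectare escalation between the first and last estimates.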
The feasibility study stage
The constitution and terms of reference for the Central Plains Water Enhancement Steering Committee was approved on 14 February 2000. The terms of reference had these two objectives:
to execute feasibility studies into the viability and practicality of water enhancement schemes in the Central Plains area, and
to undertake feasibility studies for the Central Plains area sufficiently detailed to allow decisions on the advisability of proceeding to resource consent applications and eventual scheme implementation.
The feasibility studies also had a required level of detail: 'The level of detail of these studies shall be sufficient to allow decisions to be made by the Councils on the advisability of proceeding to resource consent applications and scheme implementation.'
By February 2001, the steering committee had identified 27 tasks that would be necessary to complete the feasibility study. The list of tasks was comprehensive; it included the assessment of economic effects, benefits, environmental effects, social effects, cultural effects, risks, planning, land accessibility, environmental and technical feasibility, and consentability. Item 23 was specifically entitled 'Land Accessibility'.
On 11 February 2002 the Central Plains Water Enhancement Steering Committee presented the URS feasibility report and their own report to a joint meeting of the two 'parent' Councils. On 18 February 2002 the reports were presented to the Strategy and Finance committee of the Christchurch City Council.
The conclusion of the URS feasibility study was stated fairly firmly: "that a water enhancement scheme for the Central Plains can be built, is affordable, will have effects that can be mitigated, and is therefore feasible".
The Steering Committee's conclusion was much less firm: "the affordability, bankability and consentability of the proposed scheme has been proved to a degree sufficient to give the Selwyn District Council and Christchurch City Councils confidence to proceed with the project to the next stage."
The Steering Committee had not provided a full conclusion on a number of issues from the list of 27 feasibility study tasks. They had instead simply moved the resolution of a number of the important issues from the feasibility study stage to a new stage to be called 'concept refinement'. The issues to be dealt with later were:
more technical investigations
the scheme's ownership structure
how to acquire land for dams and races
the mitigation of social, environmental and cultural effects.
Court actions with other competing abstractors
Central Plains Water Trust has been in lengthy litigation with Ngāi Tahu Properties Limited and Synlait. The three entities have resource consents, or applications for resource consents, to take the same water: the remaining water from the Rakaia and Waimakariri Rivers allocated for abstraction under the Rakaia Water Conservation Order or the Waimakariri River Regional Plan. The issue before the courts has been who has first access to limited water: the first to have consent granted? The first to file an application to take water? The first to file all necessary applications? The first to have replied to requests for information so that the application is complete and therefore 'notifiable'? The cases have been appealed up to the Supreme Court.
Ngāi Tahu Properties Limited
On 28 January 2005, Ngāi Tahu Properties Limited had applied for competing resource consents to take 3.96 m³/s of water from the Waimakariri River and use it for irrigation of 5,700 hectares of land to the north of the Waimakariri River. On 17 September 2005 the Ngai Tahu applications were publicly notified. A hearing before independent commissioners was held in February 2006. On 26 and 27 June 2006, Ngāi Tahu Properties Limited sought a declaration from the Environment Court that their application to take water from the Waimakariri River had 'priority' over the 2001 CPWT application and therefore could be granted before the CPWT application.
On 22 August 2006, the Environment Court released a decision that Ngāi Tahu Properties Limited had priority to the remaining 'A' allocation block of water from the Waimakariri River over the Central Plains Water Trust application.
The Central Plains Water Trust then appealed the decision to the High Court on the grounds that as they had applied first their priority to the water should be upheld, in spite of the fact that a decision would be some time in the future. The High Court agreed with the Environment Court that priority to a limited resource went to the applications that were ready to be 'notifiable' first, not the applicant who applied first. That decision confirmed that Ngāi Tahu Properties Limited would be able to take water under their consents from the Waimakariri River at a more optimal minimum flow than any later consent granted to Central Plains Water Trust.
However, Central Plains Water Trust appealed this decision to the Court of Appeal and the case was heard on 28 February 2008. On 19 March 2008, the Court of Appeal released a majority decision, that reversed the Environment Court and High Court decisions and awarded priority to Central Plains Water Trust. Justice Robertson gave a dissenting minority opinion that without the full information, the original CPW application had not been ready for notification in 2001.
On 24 June 2008 the Supreme Court granted Ngai Tahu Property Limited leave to hear an appeal of the Court of Appeal decision.
Synlait
In early 2007, the Central Plains Water Trust and the Ashburton Community Water Trust went to the Environment Court for a declaration that their 2001 consent application for water from the Rakaia River had priority over the consent application made by dairying company Synlait (Robindale) Dairies.
In May 2007, the Environment Court ruled the Central Plains Water Trust application had priority over the Synlait application. Synlait Director Ben Dingle said that the decision was being appealed to the High Court. The High Court heard this appeal on 23 and 24 October 2007. On 13 March 2008, the High Court released its decision to uphold the appeal and to award priority to Synlait. Central Plains Water Limited announced it would lodge an appeal with the Court of Appeal.
The corporate dairying connection
In May 2007, confidential minutes from the March board meeting of Central Plains Water Limited were leaked to media. The minutes stated that the councils (Christchurch and Selwyn District) must agree to a 'bail out' loan or the scheme would be 'killed'. Central Plains Water later confirmed that the corporate dairy farming company, Dairy Holdings Limited, was prepared to offer a large loan to the scheme. Dairy Holdings Limited operates 57 dairy farms and is owned by Timaru millionaire Allan Hubbard and Fonterra board member Colin Armer.
On 5 June 2007, Christchurch City Council was informed that Central Plains Water Limited had 'a shortfall of $NZ1 million' and had run out of money needed to pay for the expenses of the impending hearings on the applications for the various resource consents.
On 7 June 2007, the Christchurch City Council authorised two Council general managers to approve loan agreements for CPWL to borrow up to a maximum of $4.8 million, subject to the Central Plains Water Trust continuing to 'own' the resource consents, as required by the April 2003 Memorandum of Understanding.
The Malvern Hills Protection Society questioned whether the Central Plains resource consent applications had been offered as security for the $NZ4.8 million loan and whether such a loan would breach the 2004 CPW Memorandum of Agreement, which forbids transferring or assigning its interest in the resource consents.
Similarly, Ben Dingle, a director of the competing dairying company, Synlait, also questioned the community benefit of the Central Plains project, as the main benefits of irrigation schemes (increased land values and higher-value land-uses) flow to the landowners who have access to the water.
A report to the Christchurch City Council meeting of 13 December 2007 gives the details of the final loan arrangements. On 19 October 2007, two Council general managers signed the loan agreement with Dairy Holdings Limited. The amount initially borrowed from Dairy Holdings Limited is $NZ1.7 million out of a maximum of $4.8 million. The law firm Anthony Harper had certified that the loan was not contrary to the Memorandum of Agreement as the resource consent applications were not used as security. However, the loan agreement grants a sub-licence from CPWL to Dairy Holdings Limited to use the CPW water consents by taking water for irrigation from the Rakaia River. The sub-licence will start from the date the consents are granted to the date that the whole scheme is operational. The Christchurch City councillors voted (eight votes against, five votes for) not to accept the report.
A resource consent is specifically declared by the Resource Management Act 1991 not to be real or personal property. Resource consents are not 'owned'; they are 'held' by 'consent holders'.
The Central Plains Water Trust applications for resource consents may not have been technically used as security for the loan from Dairy Holdings Limited. However, the Christchurch City Council report clarifies that Dairy Holdings Limited will now get the benefit of the first use of water from the Rakaia River under the loan arrangement. That benefit will flow from the date the consents are granted, which will be some years before any of the 'ordinary' farmer shareholders in CPWL receive water, once the full scheme is constructed.
The concept of guaranteed public 'ownership' of the resource consents by Central Plains Water Trust, is somewhat of a fiction, given that a private company, Central Plains Water Limited, has an exclusive licence to operate the consents to take and use water for irrigation, and particularly given that Central Plains Water Limited has already granted a sublicence for the Rakaia River water to Dairy Holdings Limited.
Local government elections October 2007
The Central Plains Water enhancement scheme was the second most important issue in the 2007 Christchurch local government elections, according to a poll of 320 people commissioned by the Christchurch newspaper The Press.
Bob Parker, who became the new Mayor of Christchurch, favoured allowing the Central Plains Water scheme to proceed through the hearings into the resource consent applications.
Megan Woods, the unsuccessful Christchurch mayoral candidate, did not support the Central Plains Water scheme.
Sally Buck, a Christchurch City Councillor in the Fendalton Waimairi Ward, strongly opposed the Central Plains Water scheme.
Four new regional councillors elected to Canterbury Regional Council opposed the Central Plains Water scheme. The four were: David Sutherland and Rik Tindall, who stood as "Save Our Water" candidates, and independent candidates Jane Demeter and Eugenie Sage.
Richard Budd, a long-serving regional councillor who had been a paid consultation facilitator for Central Plains Water, lost the Christchurch East ward to Sutherland and Tindall.
Defeated regional councillor Elizabeth Cunningham commented that she thought it unlikely that Central Plains Water scheme could be stopped by the new councillors as it was still proceeding to resource consent hearings where the new councillors would have little influence.
Environmental effects
The proposed scheme has a number of environmental effects. The dam would result in a loss of habitat for the endangered Canterbury mudfish. The dam would also affect amenity and landscape values, especially for the settlement of Coalgate. Water abstraction from the rivers will have an effect on ecology and other natural characteristics. The intensification of farming as a result of water being made available by the scheme has led to fears of increased nitrate contamination of the aquifers.
Canterbury mudfish habitat
The Canterbury mudfish is a native freshwater fish of the galaxiid family that is found only in Canterbury. It is an acutely threatened species that is classified as 'Nationally Endangered'.
In October 2002, staff of the National Institute of Water and Atmospheric Research (NIWA), were engaged by Central Plains to survey fish populations in the Waianiwaniwa River catchment as part of the investigation into the potential dam site. The survey identified a large and abundant population of Canterbury mudfish that had previously been unknown. NIWA concluded that the dam would be problematic for the mudfish as their habitat would be replaced by an unsuitable reservoir and the remaining waterways would be opened to predatory eels. Although NIWA did no further work for Central Plains Water, much of NIWA's fish survey was included in the assessment of effects on the environment prepared by URS New Zealand Limited. However, a new approach to the effects on the mudfish was included. Mitigation of the loss of habitat would be further evaluated following consultation with the Department of Conservation.
In July 2006, and in January and February 2007, University of Canterbury researchers surveyed the Waianiwaniwa Valley for mudfish. The fish identified ranged from young recruits to mature adult fish, indicating a healthy population. Canterbury mudfish occur in at least 24 kilometres of the Waianiwaniwa River. Also, sites in the Waianiwaniwa Valley accounted for 47% of all fish database records known for Canterbury mudfish (based on mean catch per unit effort). Therefore, it was concluded that the Waianiwaniwa catchment is the most important known habitat for this species. Forest and Bird's expert witness, ecologist Colin Meurk, concluded that the Waianiwaniwa catchment "represents the largest known Canterbury mudfish habitat and is substantially larger than any other documented mudfish habitats. A rare combination of conditions makes the Waianiwaniwa River a unique ecosystem and creates an important whole catchment refuge for the conservation of this nationally threatened species".
Angus McIntosh, Associate Professor of Freshwater Ecology in the School of Biological Sciences at the University of Canterbury, presented evidence on behalf of the Department of Conservation. He disagreed with the CPW evidence on mudfish. He made three conclusions:
The Waianiwaniwa Valley population of Canterbury mudfish (Neochanna burrowsius) is the largest and most important population of this nationally endangered fish in existence.
The construction of the dam in the Waianiwaniwa Valley will eliminate the natural population and mudfish will not be able to live in the reservoir or any connected streams.
CPW's proposed measures to mitigate the loss of the Waianiwaniwa population of Canterbury mudfish are inadequate to address the significance and characteristics of the mudfish population that would be lost and are largely undocumented.
The hearing of the applications and submissions
The hearing, to decide the applications for resource consents sought from Canterbury Regional Council and Selwyn District Council and the notice of requirement for designation, commenced on 25 February 2008 and ended on 25 September 2008. The hearing was the largest ever held by Canterbury Regional Council. The hearing panel heard evidence from several hundred submitters on 71 days over a -year period at an expected cost of $2.1 million.
Council officers' reports
The summary Canterbury Regional Council report, by Principal Consents Advisor Leo Fietje, did not make a formal recommendation to either grant or decline the applications. However, it concluded that, on the basis of the applicant's evidence and the officer's reviews to date, some adverse effects cannot be avoided, remedied or mitigated. Uncertainty remains over fish screens, the natural character of the Waimakariri River, terrestrial ecology, and effects on lowland streams. Increased nitrate-nitrogen concentrations are considered significant. The loss of endangered Canterbury mudfish habitat due to the dam is considered to be a significant adverse effect. The report notes that any recommendations are not binding on the hearing panel, and that they may reach different conclusions on hearing further evidence.
The summary Selwyn District Council report, by Nick Boyes of Resource Management Group Ltd, recommended declining both the Notice of Requirement and the applications for land use consents. The report also noted that any recommendation was not binding on the hearing panel, and that they may reach different conclusions on hearing further evidence. Several reasons for the recommendation were given. CPW has relied on ten management plans to mitigate adverse effects, but has not provided draft copies of any such plans. Insufficient information was provided, despite formal requests, for the Selwyn District Council witnesses to assess the significance of the social effects, the effects on archaeological and heritage values, effects on wetlands and terrestrial ecology, effects on water safety, and the effects on Ngai Tahu statutory acknowledgment areas. The cost-benefit analysis, which was critical to farmer uptake of and investment in, and therefore the viability of, the scheme, was considered to lack robustness and to overstate benefits and understate costs.
CPW evidence
In resource consent hearings the burden of proof generally falls on the consent applicant to satisfy a hearing panel that the purpose of the Resource Management Act is met by granting rather than refusing consent. Also, a burden of proof lies on any party who wishes a hearing panel (or the Environment Court) to make a determination of adverse or positive effects. A 'scintilla' of probative evidence may be enough to make an issue of a particular adverse effect 'live' and therefore requiring rebuttal if it is not to be found to be established. The Officers' reports, in noting several adverse effects, have moved the burden of proof for rebuttal onto the witnesses for Central Plains Water Trust.
The opening legal submission for Central Plains Water Trust summarised their technical evidence and concluded that any adverse effects of the scheme will either be adequately mitigated or will be insignificant in light of the positive economic benefits of the scheme.
The expert witnesses for Central Plains have provided many reports of technical evidence.
Interim decision to decline dam
On 3 April 2009, the Commissioners released a minute stating that consents to dam the Waianiwaniwa River were unlikely to be granted and that the hearing would be resumed on 11 May 2009 to decide whether to proceed with a proposal not including water storage. The minute requested legal submissions on that point. Central Plains Water Limited chairman Pat Morrison stated that the most important short-term goal was to get the water takes from the Waimakariri and Rakaia rivers granted.
Implications for the scheme
CPW responded that the hearing should continue to consider the water take and associated canal consents and the notice of requirement. The Department of Conservation, the Fish and Game Council, the Royal Forest and Bird Protection Society and Te Runanga o Ngai Tahu (TRONT) all submitted that the hearing panel should close the hearing and decline all the consents applied for by CPW as these had been presented as an integrated proposal where water storage was fundamental. The Malvern Hills Protection Society recommended declining all applications, noting that CPW had obtained requiring authority status on the basis that the dam and reservoir were essential (para 14). The Society also noted that any water-take consents granted were likely to be ultimately transferred to Dairy Holdings Limited under existing loan agreements (para 29).
Revised divert and irrigate proposal
On 20 May 2009, the Hearing Panel decided that it would continue to hear evidence from CPW on a modified scheme from 5 October 2009. On 30 October 2009, the Commissioners announced that, subject to conditions, they considered they could issue resource consents and grant the Notice of Requirement for the revised scheme. They intended to convene again in early 2010 to finalise consent conditions and to complete a final decision.
Decision June 2010
In June 2010, Environment Canterbury issued a press release stating that the hearing panel had granted 31 consents and the notice of requirement for the revised scheme without the storage dam. The full report of the hearing panel is available on the Environment Canterbury website.
By the end of June 2010, six appeals of the decision had been lodged with the Environment Court. Central Plains Water Trust lodged one of the appeals as applicant in order to change some consent conditions which limit the taking of water to 12 hours a day. Christchurch City Council appealed because it considered too much water would be taken from the Waimakariri River, which may affect Christchurch's water supply. Fish and Game's appeal was motivated by concern over the Waimakariri River take and 'inadequate' fish screening conditions. Ngāi Tahu's appeal concerned the Waimakariri River take and the legality of the change in scope of the consents granted from what had been applied for. Other appellants were a member of the Deans family and some extractors of river gravel.
In July 2012, the resource consents for the scheme were confirmed by the Environment Court.
References
External links
Central Plains Water Trust
Christchurch Library - CPW page
Canterbury Water Management Strategy - an initiative by the Ministry of Agriculture and Forestry, Ministry for the Environment and Environment Canterbury
Environmental issues in New Zealand
Canterbury Region
Water and politics
Irrigation projects
Irrigation in New Zealand
|
Central Plains Water
|
[
"Engineering"
] | 6,082
|
[
"Irrigation projects"
] |
13,168,288
|
https://en.wikipedia.org/wiki/Jackup%20rig
|
A jackup rig or a self-elevating unit is a type of mobile platform that consists of a buoyant hull fitted with a number of movable legs, capable of raising its hull over the surface of the sea. The buoyant hull enables transportation of the unit and all attached machinery to a desired location. Once on location, the hull is raised to the required elevation above the sea surface, with the legs supported by the seabed. The legs of such units may be designed to penetrate the sea bed, may be fitted with enlarged sections or footings, or may be attached to a bottom mat. Generally jackup rigs are not self-propelled and rely on tugs or heavy lift ships for transportation.
Jackup platforms are almost exclusively used as exploratory oil and gas drilling platforms and as offshore and wind farm service platforms. Jackup rigs can either be triangular in shape with three legs or square in shape with four legs. Jackup platforms have been the most popular and numerous of various mobile types in existence. The total number of jackup drilling rigs in operation numbered about 540 at the end of 2013. The tallest jackup rig built to date is the Noble Lloyd Noble, completed in 2016 with legs 214 metres (702 feet) tall.
Name
Jackup rigs are so named because they are self-elevating with three, four, six and even eight movable legs that can be extended (“jacked”) above or below the hull. Jackups are towed or moved under self propulsion to the site with the hull lowered to the water level, and the legs extended above the hull. The hull is actually a water-tight barge that floats on the water’s surface. When the rig reaches the work site, the crew jacks the legs downward through the water and into the sea floor (or onto the sea floor with mat supported jackups). This anchors the rig and holds the hull well above the waves.
History
An early design was the DeLong platform, designed by Leon B. DeLong. In 1949 he started his own company, DeLong Engineering & Construction Company. In 1950 he constructed the DeLong Rig No. 1 for Magnolia Petroleum, consisting of a barge with six legs. In 1953 DeLong entered into a joint venture with McDermott, which built the DeLong-McDermott No.1 in 1954 for Humble Oil. This was the first mobile offshore drilling platform. This barge had ten legs which had spud cans to prevent them from digging into the seabed too deep. When DeLong-McDermott was taken over by the Southern Natural Gas Company, which formed The Offshore Company, the platform was called Offshore No. 51.
In 1954, Zapata Offshore, owned by George H. W. Bush, ordered the Scorpion. It was designed by R. G. LeTourneau and featured three electro-mechanically-operated lattice type legs. Built on the shores of the Mississippi River by the LeTourneau Company, it was launched in December 1955. The Scorpion was put into operation in May 1956 off Port Aransas, Texas. The second, also designed by LeTourneau, was called Vinegaroon.
Operation
A jackup rig is a barge fitted with long support legs that can be raised or lowered. The jackup is maneuvered (self-propelled or by towing) into location with its legs up and the hull floating on the water. Upon arrival at the work location, the legs are jacked down onto the seafloor. Then "preloading" takes place, where the weight of the barge and additional ballast water are used to drive the legs securely into the sea bottom so they will not penetrate further while operations are carried out. After preloading, the jacking system is used to raise the entire barge above the water to a predetermined height or "air gap", so that wave, tidal and current loading acts only on the relatively slender legs and not on the barge hull.
Modern jacking systems use a rack and pinion gear arrangement where the pinion gears are driven by hydraulic or electric motors and the rack is affixed to the legs.
Jackup rigs can only be placed in relatively shallow waters. However, a specialized class of jackup rigs, known as premium or ultra-premium jackups, is known to have operational capability in water depths ranging from 150 to 190 meters (500 to 625 feet).
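The elevated-operation description above implies a simple geometric constraint on leg length. The sketch below uses entirely hypothetical numbers (the water depth, leg penetration, air gap, hull depth and jacking reserve are not taken from any particular rig) just to illustrate how the pieces add up.

```python
def required_leg_length(water_depth_m, leg_penetration_m, air_gap_m,
                        hull_depth_m, jacking_reserve_m=0.0):
    """Rough minimum leg length for an elevated jackup.

    The leg must span the seabed penetration, the water column, the air gap
    between the sea surface and the hull, and the hull itself, plus any
    reserve left in the jacking towers. Purely illustrative.
    """
    return (leg_penetration_m + water_depth_m + air_gap_m
            + hull_depth_m + jacking_reserve_m)

# Hypothetical example: 90 m of water, 10 m penetration, 15 m air gap,
# 10 m hull depth and a 20 m reserve above the hull.
print(required_leg_length(90, 10, 15, 10, 20))  # -> 145 (metres)
```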
Types
Mobile Offshore Drilling Units (MODU)
This type of rig is commonly used in connection with oil and/or natural gas drilling. There are more jackup rigs in the worldwide offshore rig fleet than any other type of mobile offshore drilling rig. Other types of offshore rigs include semi-submersibles (which float on pontoon-like structures) and drillships, which are ship-shaped vessels with rigs mounted in their center. These rigs drill through holes in the drillship hulls, known as moon pools.
Turbine Installation Vessel (TIV)
This type of rig is commonly used in connection with offshore wind turbine installation.
Barges
Jackup rigs can also refer to specialized barges that are similar to an oil and gas platform but are used as a base for servicing other structures such as offshore wind turbines, long bridges, and drilling platforms.
See also
Crane vessel
Offshore geotechnical engineering
Oil platform
Rack phase difference
TIV Resolution
References
Oil platforms
Ship types
|
Jackup rig
|
[
"Chemistry",
"Engineering"
] | 1,091
|
[
"Oil platforms",
"Petroleum technology",
"Natural gas technology",
"Structural engineering"
] |
14,325,087
|
https://en.wikipedia.org/wiki/Pseudodementia
|
Pseudodementia (otherwise known as depression-related cognitive dysfunction or depressive cognitive disorder) is a condition in which cognitive and functional impairment imitating dementia occurs secondary to a psychiatric disorder, especially depression. Pseudodementia can develop in a wide range of neuropsychiatric diseases such as depression, schizophrenia and other psychoses, mania, dissociative disorders, and conversion disorders. The presentation of pseudodementia may mimic organic dementia, but it is essentially reversible on treatment and does not lead to actual brain degeneration. However, it has been found that some of the cognitive symptoms associated with pseudodementia can persist as residual symptoms and, in some cases, even transform into true neurodegenerative dementia.
Psychiatric conditions, mainly depression, are a stronger risk factor for pseudodementia than age. Even though most existing studies have focused on older age groups, younger adults can develop pseudodementia if they have depression. While aging affects cognition and brain function, making it hard to distinguish depressive cognitive disorder from actual dementia, differential diagnostic screenings are available. It is crucial to confirm the correct diagnosis, since depressive cognitive disorder is reversible with proper treatment.
Pseudodementia typically involves three cognitive components: memory issues, deficits in executive functioning, and deficits in speech and language. Specific cognitive symptoms might include trouble recalling words or remembering things in general, decreased attentional control and concentration, difficulty completing tasks or making decisions, decreased speed and fluency of speech, and impaired processing speed. Since the symptoms of pseudodementia are highly similar to those of dementia, it is critical to complete a differential diagnosis to exclude dementia. People with pseudodementia are typically very distressed about the cognitive impairment they experience. Currently, the treatment of pseudodementia focuses mainly on treating depression, cognitive impairment, and dementia, and improvements in cognitive dysfunction have been seen with antidepressants such as SSRIs (selective serotonin reuptake inhibitors), SNRIs (serotonin-norepinephrine reuptake inhibitors), TCAs (tricyclic antidepressants), zolmitriptan, vortioxetine, and cholinesterase inhibitors.
History
Carl Wernicke is often believed to have been the source of the term pseudodementia (in his native German, Pseudodemenz). Despite this belief being held by many of his students, Wernicke never actually used the word in any of his written works; it is possible that this misconception comes from Wernicke's discussions of Ganser's syndrome. Instead, the first written instance of pseudodementia was by one of Wernicke's students, Georg Stertz. However, the term was not linked to its modern understanding until 1961, when psychiatrist Leslie Gordon Kiloh noticed patients with cognitive symptoms consistent with dementia who improved with treatment. Kiloh's term was mainly descriptive: he believed it should be used to describe a person's presentation rather than as an outright diagnosis, and only after reversible causes of true dementia had been excluded, although modern research has provided some evidence for the term being used diagnostically. The clinical phenomenon itself has been well known since the late 19th century as melancholic dementia.
Doubts about the classification and features of the syndrome, and the misleading nature of the name, led to proposals that the term be dropped. However, proponents argue that although it is not a defined singular concept with a precise set of symptoms, it is a practical and useful term that has held up well in clinical practice, and also highlights those who may have a treatable condition.
Presentation
The history of disturbance in pseudodementia is often short and of abrupt onset, while the onset of dementia is more often insidious. In addition, there is often little or no imaging evidence of abnormal brain patterns indicating an organic component to the cognitive decline, such as one would see in dementia. The key symptoms of pseudodementia include speech impairments, memory deficits, attention problems, emotional control issues, organization difficulties, and problems with decision making. Clinically, people with pseudodementia differ from those with true dementia when their memory is tested. They will often answer that they do not know the answer to a question, and their attention and concentration are often intact. By contrast, those presenting with organic dementia will often give "near-miss" answers rather than stating that they do not know the answer. This can make diagnosis difficult and result in misdiagnosis, as a patient might have organic dementia but answer questions in a way that suggests pseudodementia, or vice versa. In addition, people presenting with pseudodementia often lack the gradual mental decline seen in true dementia; they instead tend to remain at the same level of reduced cognitive function throughout. However, for some, pseudodementia can eventually progress to organic dementia and lead to lowered cognitive function. Because of this, some recommend that elderly patients who present with pseudodementia receive a full screening for dementia and have their cognitive faculties closely monitored in order to catch any progression to organic dementia early. People with pseudodementia may appear upset or distressed, whereas those with true dementia will often give wrong answers, have poor attention and concentration, and appear indifferent or unconcerned. The symptoms of depression oftentimes mimic dementia even though the two may be co-occurring.
Causes
Pseudodementia refers to "behavioral changes that resemble those of the progressive degenerative dementias, but which are attributable to so-called functional causes". The main cause of pseudodementia is depression. Any age group can develop pseudodementia. In depression, processing centers in the brain responsible for cognitive function and memory are affected, including the prefrontal cortex, amygdala, and hippocampus. Reduced function of the hippocampus results in impaired recognition and recall of memories, a symptom commonly associated with dementia. While not as common, other mental health disorders and comorbidities can also cause symptoms that mimic dementia, and thus must be considered when making a diagnosis.
Diagnosis
Differential diagnosis
While there is currently no cure for dementia, other psychiatric disorders that may result in dementia-like symptoms are able to be treated. Thus, it is essential to complete differential diagnosis, where other possibilities are appropriately ruled out to avoid misdiagnosis and inappropriate treatment plans.
The implementation and application of existing collaborative care models, such as DICE (describe, investigate, create, evaluate), can aid in avoiding misdiagnosis. DICE is a method utilized by healthcare workers to evaluate and manage behavioral and psychological symptoms associated with dementia. Comorbidities (such as vascular, infectious, traumatic, autoimmune, or idiopathic conditions, or even malnutrition) have the potential to mimic symptoms of dementia and thus must be evaluated for, typically through a complete patient history and physical exam. For instance, studies have also shown a relationship between depression and its cognitive effects on everyday functioning and distortions of memory.
Since pseudodementia does not cause deterioration of the brain, brain scans can be used to visualize potential deterioration associated with dementia. Investigations such as PET and SPECT imaging of the brain show reduced blood flow in areas of the brain in people with Alzheimer's disease (AD), the most common type of dementia, compared with a more normal blood flow in those with pseudodementia. Reduced blood flow leads to an inadequate oxygen supply that reaches the brain, causing irreversible cell damage and cell death. In addition, MRI results show medial temporal lobe atrophy, which causes impaired recall of facts and events (declarative memory), in individuals with AD.
Pseudodementia vs. dementia
Pseudodementia symptoms can appear similar to dementia. Due to the similar signs and symptoms, it can result in a misdiagnosis of depression, as well as adverse effects from inaccurately prescribed medications. Generally, dementia involves a steady and irreversible cognitive decline while pseudodementia-induced symptoms are reversible. Thus, once the depression is properly treated or the medication therapy has been modified, depression-induced cognitive impairment can be effectively reversed. Commonly within older adults, diminished mental capacity and social withdrawal are identified as dementia symptoms without considering and ruling out depression. As a result, older adult patients are often misdiagnosed due to insufficient testing.
Cognitive symptoms such as memory loss, slowed movement, or reduced or slowed speech are sometimes initially misdiagnosed as dementia; however, further investigation has determined that such patients were suffering from a major depressive episode. This is an important distinction, as the former is untreatable whereas the latter is treatable using antidepressant therapy, electroconvulsive therapy, or both. In contrast to major depression, dementia is a progressive neurodegenerative syndrome involving a pervasive impairment of higher cortical functions resulting from widespread brain pathology.
A significant overlap in cognitive and neuropsychological dysfunction in dementia and pseudodementia patients increases the difficulty in diagnosis. Differences in the severity of impairment and quality of patients' responses can be observed, and a test of antisaccadic movements may be used to differentiate the two, as pseudodementia patients have poorer performance on this test. Other researchers have suggested additional criteria to differentiate pseudodementia from dementia, based on their studies. However, the sample sizes for these studies are relatively small, so their validity is limited. A systematic review conducted in 2018 reviewed 18 longitudinal studies about pseudodementia. Among the 284 patients that were studied, 33% of the patients developed irreversible dementia while 53% of the patients no longer met the criteria for dementia during follow-up. Individuals with pseudodementia present considerable cognitive deficits, including disorders in learning, memory and psychomotor performance. Substantial evidence from brain imaging such as CT scanning and positron emission tomography (PET) has also revealed abnormalities in brain structure and function.
Management
Pharmacological
If effective medical treatment for depression is given, this can aid in the distinction between pseudodementia and dementia. Antidepressants have been found to assist in the elimination of cognitive dysfunction associated with depression, whereas cognitive dysfunction associated with true dementia continues along a steady gradient. In cases where antidepressant therapy is not well tolerated, patients can consider electroconvulsive therapy as a possible alternative. However, studies have revealed that some patients who displayed cognitive dysfunction related to depression eventually developed dementia later in their lives.
The development of treatments for dementia has not been as fast as that for depression. Hence, the pharmacological treatments for pseudodementia do not treat the condition itself but instead target depression, cognitive impairment, and dementia. These medications include SSRIs (selective serotonin reuptake inhibitors), SNRIs (serotonin-norepinephrine reuptake inhibitors), TCAs (tricyclic antidepressants), zolmitriptan, and cholinesterase inhibitors.
SSRIs, or selective serotonin reuptake inhibitors, belong to the class of antidepressants. Some examples of SSRIs are fluoxetine (Prozac), paroxetine (Paxil), sertraline (Zoloft), citalopram (Celexa), and escitalopram (Lexapro). SSRIs function by inhibiting serotonin reabsorption into neurons, allowing more serotonin to be accessible and improving nerve cell communication. Therefore, SSRIs are considered the first-line agents for pseudodementia, as the rise in serotonin levels may assist in alleviating pseudodementia-related depressive symptoms.
SNRIs, or serotonin-norepinephrine reuptake inhibitors, also belong to the class of antidepressants. Some examples of SNRIs are desvenlafaxine (Pristiq), duloxetine (Cymbalta), levomilnacipran (Fetzima), and milnacipran (Savella). In addition to inhibiting serotonin reabsorption, SNRIs also inhibit norepinephrine reabsorption into neurons, allowing more serotonin and norepinephrine to be accessible to nerve cells, improving both nerve cell communication and energy levels. However, SNRIs are considered second-line agents for pseudodementia due to more severe side effects compared to SSRIs, such as dry mouth and hypertension.
TCAs, or tricyclic antidepressants, are another class of antidepressant medications. Some examples of TCAs are amitriptyline (Elavil), clomipramine (Anafranil), doxepin (Sinequan), and imipramine (Tofranil). TCAs also function like SNRIs by inhibiting both serotonin and norepinephrine reabsorption into neurons. However, TCAs act on more neurotransmitters, or chemical messengers, than SNRIs, potentially causing additional adverse effects. Therefore, TCAs are not recommended for use unless other antidepressants are no longer working.
Zolmitriptan (Zomig) belongs to the class of selective serotonin receptor agonists. The mechanism of action of zolmitriptan is to block pain signals by constricting blood vessels in the brain that cause migraines. In addition to affecting blood vessel constriction, Zolmitriptan indirectly eases depression associated with pseudodementia since it is a selective serotonin receptor agonist.
Cholinesterase inhibitors belong to the class of drugs that inhibit the breakdown of a neurotransmitter called acetylcholine, which helps improve nerve cell communication. Some examples of cholinesterase inhibitors are donepezil (Aricept), rivastigmine (Exelon), and galantamine (Razadyne). All of these cholinesterase inhibitors are FDA-approved to treat all or certain stages of Alzheimer's disease. Since the main cause of pseudodementia is found to be depression, selective serotonin reuptake inhibitors (SSRIs) are still preferred over other medications.
Non-pharmacological
When pharmacological treatments are ineffective, or in addition to them, a number of non-pharmacological therapies can be used in the treatment of depression. For some patients, cognitive behavioral therapy (an effective form of therapy for a wide range of mental illnesses, including depression, anxiety disorders and drug abuse problems, based on the belief that psychological problems are rooted, in part, in one's own behavior and thought patterns; by changing these patterns using new strategies learned in therapy, a patient can learn to cope better) or interpersonal therapy (a form of therapy that has been used in an integrated manner to treat a wide range of psychiatric disorders, based on the belief that a patient's past and present relationships are directly linked to their mental challenges, and that by improving those relationships a patient's mental health can be improved) can be used to delve deeper into their symptoms, ways to manage them, and the root causes of a patient's depression. Patients can choose to participate in these therapies in individual sessions or in a group setting.
Future research
Given the limitations and limited amount of current research on pseudodementia, many questions remain to be answered. Future research on younger age groups is necessary to better characterize the risk factors, further criteria, and the relationship between age and the development of pseudodementia. Future studies should also incorporate more modern technologies such as genetic sequencing, investigation of possible pseudodementia-related biomarkers, and PET scans to better understand the underlying mechanism of pseudodementia. In addition, future studies should incorporate larger sample sizes, to increase the validity of the results, and groups at higher risk of developing pseudodementia, to extend the scope of the research.
References
Aging-associated diseases
Mood disorders
Psychopathological syndromes
Memory disorders
|
Pseudodementia
|
[
"Biology"
] | 3,410
|
[
"Senescence",
"Aging-associated diseases"
] |
14,325,287
|
https://en.wikipedia.org/wiki/Bluebugging
|
Bluebugging is a form of Bluetooth attack often caused by a lack of awareness. It was developed after the onset of bluejacking and bluesnarfing. Similar to bluesnarfing, bluebugging accesses and uses all phone features but is limited by the transmitting power of class 2 Bluetooth radios, normally capping its range at 10–15 meters. However, the operational range can be increased with the use of a directional antenna.
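As a hedged illustration of why a directional antenna stretches the usable range: under an idealised free-space model, received power falls off with the square of distance, so extra antenna gain extends range roughly with its square root. The baseline range and gain figures below are assumptions for illustration, not measurements, and real indoor propagation is considerably worse than free space.

```python
import math

def extended_range(base_range_m: float, extra_gain_db: float) -> float:
    """Free-space estimate: range grows with the square root of added antenna gain."""
    return base_range_m * math.sqrt(10 ** (extra_gain_db / 10))

# Assumed numbers: a 15 m class 2 baseline and a +12 dBi directional antenna.
print(f"{extended_range(15, 12):.0f} m")  # ~60 m under free-space assumptions
```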
History
Bluebugging was developed by the German researcher Martin Herfurt in 2004, one year after the advent of bluejacking. Initially a threat against laptops with Bluetooth capability, it later targeted mobile phones and PDAs.
Bluebugging manipulates a target phone into compromising its security so as to create a backdoor before returning control of the phone to its owner. Once control of a phone has been established, it is used to call back the hacker, who is then able to listen in to conversations, hence the name "bugging". The Bluebug program can also create a call-forwarding application whereby the hacker receives calls intended for the target phone.
A further development of bluebugging has allowed for the control of target phones through Bluetooth phone headsets. It achieves this by pretending to be the headset and thereby "tricking" the phone into obeying call commands. Not only can a hacker receive calls intended for the target phone, they can also send messages, read phonebooks, and examine calendars.
See also
IEEE 802.15
Near-field communication
Personal area network
References
External links
Bluetooth Special Interest Group Site (includes specifications)
Official Bluetooth site aimed at users
Bluetooth/Ethernet Vendor MAC Address Lookup
Bluebugging Video and description
Bluetooth
Hacking (computer security)
|
Bluebugging
|
[
"Technology"
] | 362
|
[
"Wireless networking",
"Bluetooth"
] |
14,325,911
|
https://en.wikipedia.org/wiki/BCAR1
|
Breast cancer anti-estrogen resistance protein 1 is a protein that in humans is encoded by the BCAR1 gene.
Gene
BCAR1 is located on the q arm of chromosome 16, on the negative strand, and consists of seven exons. Eight different gene isoforms have been identified that share the same sequence from the second exon onwards but are characterized by different starting sites. The longest isoform is called BCAR1-iso1 (RefSeq NM_001170714.1) and is 916 amino acids long; the other, shorter isoforms start with an alternative first exon.
Function
BCAR1 is a ubiquitously expressed adaptor molecule originally identified as the major substrate of v-Src and v-Crk. p130Cas/BCAR1 belongs to the Cas family of adaptor proteins and can act as a docking protein for several signalling partners. Due to its ability to associate with multiple signaling partners, p130Cas/BCAR1 contributes to the regulation of a variety of signaling pathways involved in cell adhesion, migration, invasion, apoptosis, and responses to hypoxia and mechanical forces. p130Cas/BCAR1 plays a role in cell transformation and cancer progression, and alterations of p130Cas/BCAR1 expression and the resulting activation of selective signalling are determinants for the occurrence of different types of human tumors.
Due to the capacity of p130Cas/BCAR1, as an adaptor protein, to interact with multiple partners and to be regulated by phosphorylation and dephosphorylation, its expression and phosphorylation can lead to a wide range of functional consequences. Among the regulators of p130Cas/BCAR1 tyrosine phosphorylation, receptor tyrosine kinases (RTKs) and integrins play a prominent role. RTK-dependent p130Cas/BCAR1 tyrosine phosphorylation and the subsequent binding with specific downstream signaling molecules modulate cell processes such as actin cytoskeleton remodeling, cell adhesion, proliferation, migration, invasion and survival. Integrin-mediated p130Cas/BCAR1 phosphorylation upon adhesion to extracellular matrix (ECM) induces downstream signaling that is required to allow cells to spread and migrate on the ECM.
Both RTKs and integrin activation affect p130Cas/BCAR1 tyrosine phosphorylation and represent an efficient means by which cells utilize signals coming from growth factors and integrin activation to coordinate cell responses. Additionally, p130Cas/BCAR1 tyrosine phosphorylation on its substrate domain can be induced by cell stretching subsequent to changes in the rigidity of the extracellular matrix, allowing cells to respond to mechanical force changes in the cell environment.
Cas-Family
p130Cas/BCAR1 is a member of the Cas family (Crk-associated substrate) of adaptor proteins, which is characterized by the presence of multiple conserved motifs for protein–protein interactions and by extensive tyrosine and serine phosphorylations. The Cas family comprises three other members: NEDD9 (Neural precursor cell expressed, developmentally down-regulated 9, also called Human enhancer of filamentation 1, HEF-1 or Cas-L), EFS (Embryonal Fyn-associated substrate), and CASS4 (Cas scaffolding protein family member 4). These Cas proteins have a high structural homology, characterized by the presence of multiple protein interaction domains and phosphorylation motifs through which Cas family members can recruit effector proteins. However, despite the high degree of similarity, their temporal expression, tissue distribution and functional roles are distinct and not overlapping. Notably, the knock-out of p130Cas/BCAR1 in mice is embryonic lethal, suggesting that the other family members do not play an overlapping role in development.
Structure
p130Cas/BCAR1 is a scaffold protein characterized by several structural domains. It possesses an amino-terminal Src-homology 3 (SH3) domain, followed by a proline-rich domain (PRR) and a substrate domain (SD). The substrate domain consists of 15 repeats of the YxxP consensus phosphorylation motif for Src family kinases (SFKs). Following the substrate domain is the serine-rich domain, which forms a four-helix bundle. This acts as a protein-interaction motif, similar to those found in other adhesion-related proteins such as focal adhesion kinase (FAK) and vinculin. The remaining carboxy-terminal sequence contains a bipartite Src-binding domain (residues 681–713) able to bind both the SH2 and SH3 domains of Src.
p130Cas/BCAR1 can undergo extensive changes in tyrosine phosphorylation, which occur predominantly in the 15 YxxP repeats within the substrate domain and represent the major post-translational modification of p130Cas/BCAR1. p130Cas/BCAR1 tyrosine phosphorylation can result from a diverse range of extracellular stimuli, including growth factors, integrin activation, vasoactive hormones and peptide ligands for G-protein coupled receptors. These stimuli trigger p130Cas/BCAR1 tyrosine phosphorylation and its translocation from the cytosol to the cell membrane.
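As an aside on the YxxP consensus repeats described above, such motifs can be located in a protein sequence with a simple pattern scan. The Python sketch below illustrates this; the sequence fragment is invented for demonstration and is not the actual p130Cas/BCAR1 substrate domain, and real motif annotation would rely on curated resources such as UniProt.

```python
import re

def find_yxxp_motifs(sequence: str):
    """Return (position, motif) pairs for every YxxP match, counting overlaps."""
    # Lookahead so that overlapping motifs are all reported.
    return [(m.start() + 1, m.group(1))
            for m in re.finditer(r"(?=(Y..P))", sequence.upper())]

# Hypothetical fragment for illustration only, not the real BCAR1 sequence.
fragment = "AQVYDVPPSVTYQVPGLSEDYDYVHLQG"
for pos, motif in find_yxxp_motifs(fragment):
    print(f"YxxP motif '{motif}' at residue {pos}")
```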
Clinical significance
Given the ability of the p130Cas/BCAR1 scaffold protein to convey and integrate different types of signals and thereby regulate key cellular functions such as adhesion, migration, invasion, proliferation and survival, a strong correlation between deregulated p130Cas/BCAR1 expression and cancer has been inferred. Deregulated expression of p130Cas/BCAR1 has been identified in several cancer types. Altered levels of p130Cas/BCAR1 expression in cancers can result from gene amplification, transcriptional upregulation or changes in protein stability. Overexpression of p130Cas/BCAR1 has been detected in human breast cancer, prostate cancer, ovarian cancer, lung cancer, colorectal cancer, hepatocellular carcinoma, glioma, melanoma, anaplastic large cell lymphoma and chronic myelogenous leukaemia. Aberrant levels of hyperphosphorylated p130Cas/BCAR1 strongly promote cell proliferation, migration, invasion, survival, angiogenesis and drug resistance. High levels of p130Cas/BCAR1 expression in breast cancer have been shown to correlate with worse prognosis, an increased probability of developing metastases, and resistance to therapy. Conversely, lowering p130Cas/BCAR1 expression in ovarian, breast and prostate cancer is sufficient to block tumor growth and the progression of cancer cells.
p130Cas/BCAR1 has potential uses as a diagnostic and prognostic marker for some human cancers. Since lowering p130Cas/BCAR1 in tumor cells is sufficient to halt their transformation and progression, it has been proposed as a therapeutic target. However, its non-catalytic nature makes it difficult to develop specific inhibitors.
Notes
References
Further reading
External links
Bcar1 Info with links in the Cell Migration Gateway
Proteins
|
BCAR1
|
[
"Chemistry"
] | 1,576
|
[
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
14,326,078
|
https://en.wikipedia.org/wiki/Actin%2C%20cytoplasmic%202
|
Actin, cytoplasmic 2, or gamma-actin is a protein that in humans is encoded by the ACTG1 gene. Gamma-actin is widely expressed in cellular cytoskeletons of many tissues; in adult striated muscle cells, gamma-actin is localized to Z-discs and costamere structures, which are responsible for force transduction and transmission in muscle cells. Mutations in ACTG1 have been associated with nonsyndromic hearing loss and Baraitser-Winter syndrome, as well as susceptibility of adolescent patients to vincristine toxicity.
Structure
Human gamma-actin is 41.8 kDa in molecular weight and 375 amino acids in length. Actins are highly conserved proteins involved in various types of cell motility and in the maintenance of the cytoskeleton. In vertebrates, three main groups of actin paralogs, alpha, beta and gamma, have been identified.
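As a rough sanity check on the figures above, a protein's molecular weight can be estimated from its chain length using an average residue mass; the short Python sketch below uses an assumed average of about 111 Da per residue, which is an approximation and not a substitute for computing the exact mass from the amino acid composition.

```python
# Rough mass estimate from chain length; the average residue mass (~111.3 Da,
# already net of the water lost per peptide bond) is an assumed approximation.
AVERAGE_RESIDUE_MASS_DA = 111.3
WATER_MASS_DA = 18.0

def approx_protein_mass_kda(n_residues: int) -> float:
    return (n_residues * AVERAGE_RESIDUE_MASS_DA + WATER_MASS_DA) / 1000.0

print(f"~{approx_protein_mass_kda(375):.1f} kDa")  # ~41.8 kDa for a 375-residue chain
```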
The alpha actins are found in muscle tissues and are a major constituent of the sarcomere contractile apparatus. The beta and gamma actins co-exist in most cell types as components of the cytoskeleton, and as mediators of internal cell motility. Actin, gamma 1, encoded by this gene, is found in non-muscle cells in the cytoplasm, and in muscle cells at costamere structures, or transverse points of cell-cell adhesion that run perpendicular to the long axis of myocytes.
Function
In myocytes, sarcomeres adhere to the sarcolemma via costameres, which align at Z-discs and M-lines. The two primary cytoskeletal components of costameres are desmin intermediate filaments and gamma-actin microfilaments. It has been shown that the interaction of gamma-actin with another costameric protein, dystrophin, is critical for costameres to form mechanically strong links between the cytoskeleton and the sarcolemmal membrane. Additional studies have shown that gamma-actin colocalizes with alpha-actinin, and that GFP-labeled gamma-actin localized to Z-discs whereas GFP-alpha-actin localized to the pointed ends of thin filaments, indicating that gamma-actin specifically localizes to Z-discs in striated muscle cells.
During the development of myocytes, gamma-actin is thought to play a role in the organization and assembly of developing sarcomeres, evidenced in part by its early colocalization with alpha-actinin. Gamma-actin is eventually replaced by sarcomeric alpha-actin isoforms, with low levels of gamma-actin persisting in adult myocytes, where it associates with Z-disc and costamere domains.
Insights into the function of gamma-actin in muscle have come from studies employing transgenesis. Mice with a skeletal muscle-specific knockout of gamma-actin showed no detectable abnormalities in development; however, the knockout animals showed muscle weakness and fiber necrosis, along with decreased isometric twitch force, disrupted intrafibrillar and interfibrillar connections among myocytes, and myopathy.
Clinical significance
An autosomal dominant mutation in ACTG1 at the DFNA20/26 locus at 17q25-qter was identified in patients with hearing loss. A Thr278Ile mutation was identified in helix 9 of the gamma-actin protein, which is predicted to alter protein structure. This study identified the first disease-causing mutation in gamma-actin and underlines the importance of gamma-actin as a structural element of inner ear hair cells. Since then, other ACTG1 mutations have been linked to nonsyndromic hearing loss, including Met305Thr.
A missense mutation in ACTG1 at Ser155Phe has also been identified in patients with Baraitser-Winter syndrome, a developmental disorder characterized by congenital ptosis, excessively arched eyebrows, hypertelorism, ocular colobomata, lissencephaly, short stature, seizures and hearing loss.
Differential expression of ACTG1 mRNA has also been identified in patients with sporadic amyotrophic lateral sclerosis, a devastating disease of unknown cause, using a bioinformatics approach employing Affymetrix long-oligonucleotide BaFL methods.
Single nucleotide polymorphisms in ACTG1 have been associated with toxicity from vincristine, which is part of the standard treatment regimen for childhood acute lymphoblastic leukemia. Neurotoxicity was more frequent in patients who carried the ACTG1 Gly310Ala variant, suggesting that this polymorphism may play a role in patient outcomes from vincristine treatment.
Interactions
ACTG1 has been shown to interact with:
CAP1,
DMD,
TMSB4X, and
Plectin.
See also
Actin
References
External links
Further reading
Proteins
|
Actin, cytoplasmic 2
|
[
"Chemistry"
] | 1,022
|
[
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
14,326,079
|
https://en.wikipedia.org/wiki/Language%20expectancy%20theory
|
Language expectancy theory (LET) is a theory of persuasion. The theory assumes language is a rules-based system, in which people develop expected norms of appropriate language usage in given situations. Furthermore, unexpected linguistic usage can affect the receiver's attitude toward a persuasive message and, in turn, the behavior that follows from it.
Background
Created by Michael Burgoon, a retired professor of medicine from the University of Arizona, and Gerald R. Miller, the inspiration for LET was sparked by Brooks' work on expectations of language in 1970. Burgoon, Jones and Stewart furthered the discussion with the idea of linguistic strategies and message intensity in an essay published in 1975. The essay linked linguistic strategies, or how a message is framed, to effective persuasive outcomes. The original work on language expectancy theory was published in 1978. Titled "An empirical test of a model of resistance to persuasion", it outlined the theory through 17 propositions.
Expectations
The theory views language expectancies as enduring patterns of anticipated communication behavior which are grounded in a society's psychological and cultural norms. Such societal forces influence language and enable the identification of non-normative use; violations of linguistic, syntactic and semantic expectations will either facilitate or inhibit an audience's receptivity to persuasion. Burgoon claims applications for his theory in management, media, politics and medicine, and declares that his empirical research has shown a greater effect than expectancy violations theory, the domain of which does not extend to the spoken word.
LET argues that typical language behaviors fall within a normative "bandwidth" of expectations determined by a source's perceived credibility, the individual listener's normative expectations and a group's normative social climate, and generally supports a gender-stereotypical reaction to the use of profanity, for example.
Communication expectancies are said to derive from three factors:
The communicator – individual features, such as ethos or source credibility, personality, appearance, social status and gender.
The relationship between a receiver and a communicator, including factors such as attraction, similarity and status equality.
Context; i.e., privacy and formality constraints on interaction.
Violations
Violating social norms can have a positive or negative effect on persuasion. People usually use language to conform to social norms, but a person's intentional or accidental deviation from expected behavior can elicit either a positive or a negative reaction. Language expectancy theory assumes that language is a rule-governed system and that people develop expectations concerning the language or message strategies employed by others in persuasive attempts (Burgoon, 1995). Expectations are a function of cultural and sociological norms and of preferences arising from cultural values and societal standards or ideals for competent communication.
When observed behavior is preferred over what was expected, or when a listener's initially negative evaluation of a speaker is followed by the speaker conforming more closely to expected behavior, the deviation is seen as positive. When language choice or behavior is perceived as unacceptable or inappropriate, the violation is negatively received and can inhibit receptivity to a persuasive appeal.
Positive violations also occur when negatively evaluated sources conform more closely than expected to cultural values or situational norms. This can result in an overly positive evaluation of the source and in the change promoted by the actor (Burgoon, 1995).
Negative violations, resulting from language choices that lie outside socially acceptable behavior in a negative direction, produce no attitude or behavior change in receivers.
Summary of propositions
Language expectancy theory is based on 17 propositions. Those propositions can be summarized as listed below:
1, 2 and 3: People create expectations for language. Those expectations determine whether messages will be accepted or rejected by an individual. Breaking expectations positively results in a behavior change in favor of the persuasive message, while breaking expectations negatively results in no change or an opposite behavior change.
4, 5 and 6: Individuals with perceived credibility (those who hold power in a society) have the freedom in persuasion to select varied language strategies (wide bandwidth). Those with low credibility and those unsure of their perceived credibility are restricted to low aggression or compliance-gaining messages to be persuasive.
7, 8 and 9: Irrelevant fear and anxiety tactics are better received when delivered through low-intensity, verbally unaggressive compliance-gaining messages. Intense and aggressive language use results in lower levels of persuasion.
10, 11 and 12: A persuader who is experiencing cognitive stress will use lower-intensity messages, and a communicator who violates his or her own norms of communication will experience cognitive stress.
13 and 14: Pretreatments forewarn receivers of persuasive attacks (supportive, refutational or a combination). When persuasive messages do not violate the expectations created by the pretreatments, resistance to persuasion is conferred. When pretreatment expectations of persuasive messages are violated, receivers are less resistant to persuasion.
15, 16 and 17: Low intensity attack strategies are more effective than high intensity attack strategies when overcoming resistance to persuasion created in pretreatment. The first message in a string of arguments methodically affects the acceptance of the second message. When expectations are positively violated in the first message, the second will be persuasive. When expectations are negatively violated in the first message, the second will not be persuasive.
The role of intensity
These propositions give rise to the impact of language intensity—defined by John Waite Bowers as a quality of language that "indicates the degree to which the speaker's attitude toward a concept deviates from neutrality"—on persuasive messages. Theorists have concentrated on two key areas: (1) intensity of language when it comes to gender roles and (2) credibility.
The perceived credibility of a source can greatly affect a message's persuasiveness. Researchers found that credible sources can enhance their appeal by using intense language; however, less credible speakers are more persuasive with low-intensity appeals. Similarly, females are less persuasive than males when they use intense language because it violates the expected behavior, but are more persuasive when they use low-intensity language. Males, however, are seen as weak when they argue in a less intense manner. Theorists argue further that females and speakers perceived as having low credibility have less freedom in selecting message strategies and that the use of aggressive language negatively violates expectations.
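To make the credibility and intensity interaction above easier to follow, the toy Python sketch below encodes the qualitative predictions as a small decision function. The categories and outcomes are invented for illustration; LET is a qualitative theory and does not prescribe numeric scores or a formal algorithm.

```python
# Toy model of the credibility/intensity interaction; illustrative only.

def predicted_effect(source_credibility: str, language_intensity: str) -> str:
    """Rough prediction of the persuasive effect: 'enhanced', 'neutral' or 'reduced'."""
    if source_credibility == "high":
        # Credible sources have a wide bandwidth: intense language tends to
        # fall within (or positively violate) expectations.
        return "enhanced" if language_intensity == "high" else "neutral"
    # Low-credibility sources have a narrow bandwidth: intense language is a
    # negative violation, while low-intensity appeals can positively violate.
    return "reduced" if language_intensity == "high" else "enhanced"

for credibility in ("high", "low"):
    for intensity in ("high", "low"):
        print(credibility, intensity, "->", predicted_effect(credibility, intensity))
```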
Example
To better explain the theory we look at the expectations and societal norms for a man and a woman on their first date. If the man pushed for further physical intimacy after dinner, the societal expectation of a first date would be violated. The example below with Margret and Steve depicts such a scene.
Margret: "I had a really good time tonight, Steve. We should do it again."
Steve: "Let's cut the crap. Do you want to have sex?"
Margret: "Uhhh..."
Margret's language expectations of a first date were violated. Steve chose an aggressive linguistic strategy. If Margret views Steve as a credible and appealing source, she may receive the message positively and, thus, the message would be persuasive. If Margret perceives Steve as an ambiguous or low-credibility source, Steve will not be persuasive. In such a case, Steve should have used a less aggressive message in his attempt to win Margret over to his idea of having sex.
Criticism
Determining whether a positive or negative violation has occurred can be difficult. When there is no attitude or behavior change it may be concluded that a negative violation has occurred (possibly related to a boomerang effect). Conversely, when an attitude or behavior change does occur it may be too easy to conclude a positive violation of expectations has occurred.
The theory has also been critiqued for being too "grand" in its predictive and explanatory goals. Burgoon counters that practical applications of his research conclusions are compelling enough to negate this criticism.
See also
Physician–patient interaction
Social influence
Notes
References
Bowers, J.W. (1963). Language intensity, social introversion, and attitude change. Speech Monographs, 30, 345–352.
Bowers, J.W. (1964). Some correlates of language intensity. Quarterly Journal of Speech, 50, 415–420.
Burgoon, J.K. (1993). Interpersonal expectations, expectancy violations, and emotional communication. Journal of Language and Social Psychology, 12, 13–21.
Burgoon, M. (1994). Advances in Research in Social Influence: Essays in Honor of Gerald R. Miller. Charles R. Berger and Michael Burgoon (Editors), East Lansing, MI: Michigan State University Press, 1993.
Burgoon, M., Dillard, J.P., & Doran, N. (1984). Friendly or unfriendly persuasion: The effects of violations of expectations by males and females. Human Communication Research, 10, 283–294.
Burgoon, M., Jones, S.B., & Stewart, D. (1975). Toward a message-centered theory of persuasion: Three empirical investigations of language intensity. Human Communication Research, 1, 240–256.
Burgoon, M. and Miller, G.R. (1977) Predictors of resistance to persuasion: propensity of persuasive attack, pretreatment language intensity, and expected delay of attack. The Journal of Psychology, 95, 105–110.
Burgoon, M., & Miller, G.R. (1985). An expectancy interpretation of language and persuasion. In H. Giles & R. Clair (Eds.) The social and psychological contexts of language (pp. 199–229). London: Lawrence Erlbaum Associates.
Burgoon, M., Hunsacker, F., & Dawson, E. (1994). Approaches to gaining compliance. Human Communication, (pp. 203–217). Thousand Oaks, CA: Sage.
Dillard, J. P., & Pfau, M. W. (2002). The Persuasion Handbook: Developments in Theory and Practice (1st ed.). Thousand Oaks, CA: SAGE
Behavioral concepts
Scientific theories
|
Language expectancy theory
|
[
"Biology"
] | 2,116
|
[
"Behavior",
"Behavioral concepts",
"Behaviorism"
] |
14,326,527
|
https://en.wikipedia.org/wiki/Flying%20probe
|
Flying probes are test probes used for testing both bare circuit boards and boards loaded with components. Flying probes were introduced in the late 1980s and can be found in many manufacturing and assembly operations, most often in the manufacturing of electronic printed circuit boards. A flying probe tester uses one or more test probes to make contact with the circuit board under test; the probes are moved from place to place on the circuit board to carry out tests of multiple conductors or components. Flying probe testers are a more flexible alternative to bed-of-nails testers, which use multiple contacts to simultaneously contact the board and rely on electrical switching to carry out measurements.
One limitation of flying probe test methods is the speed at which measurements can be taken; the probes must be moved to each new test site on the board before a measurement can be completed. Bed-of-nails testers touch every test point simultaneously, and electronic switching of instruments between test pins is more rapid than movement of probes. Manufacturing a bed-of-nails tester, however, is more costly.
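To make the speed tradeoff concrete, a rough estimate can be computed from the number of measurements and the per-measurement move-and-settle time. All figures in the Python sketch below are assumed values for illustration, not specifications of any particular tester.

```python
# Rough test-time estimate for a flying prober; all numbers are assumptions.
measurements = 2_000          # nets/components to check on the board
move_and_settle_s = 0.8       # probe travel plus settling per measurement
measure_s = 0.05              # the electrical measurement itself

flying_probe_time_s = measurements * (move_and_settle_s + measure_s)
bed_of_nails_time_s = measurements * measure_s / 64   # assume 64 parallel channels

print(f"Flying probe: ~{flying_probe_time_s / 60:.0f} min")  # ~28 min
print(f"Bed of nails: ~{bed_of_nails_time_s:.1f} s")         # ~1.6 s
```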
Bare board
Loaded board in-circuit test
In the testing of printed circuit boards, a flying probe test or fixtureless in-circuit test (FICT) system may be used for testing low to mid volume production, prototypes, and boards that present accessibility problems. A traditional "bed of nails" tester requires a custom fixture to hold the PCBA and the pogo pins that make contact with it. In contrast, FICT uses two or more flying probes that are moved under software control. The flying probes are electromechanically controlled to access components on printed circuit assemblies (PCAs). The probes are moved around the board under test by an automatically operated two-axis system, and one or more test probes contact components or test points on the printed circuit board.
The main advantage of flying probe testing is that the substantial cost of a bed-of-nails fixture, on the order of US$20,000, is avoided. The flying probes also allow the test to be easily modified when the PCBA design changes. FICT may be used on both bare and assembled PCBs. However, since the tester makes measurements serially instead of making many measurements at once, the test cycle can become much longer than with a bed-of-nails fixture: a test cycle that takes 30 seconds on such a system may take an hour with flying probes. Test coverage may also not be as comprehensive as with a bed-of-nails tester (assuming similar net access for each), because fewer points are tested at one time.
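The following Python sketch illustrates, at a very high level, how a fixtureless tester might step two probes through a netlist of continuity checks. The `prober` object and its methods (`move_to`, `measure_resistance`) are hypothetical placeholders, not the API of any real machine.

```python
# Illustrative flying-probe continuity scan; 'prober' and its methods are
# hypothetical stand-ins for a vendor-specific control API.
OPEN_THRESHOLD_OHMS = 10.0

def run_continuity_scan(prober, netlist):
    """netlist: iterable of (net_name, point_a_xy, point_b_xy, expect_connected)."""
    failures = []
    for net, point_a, point_b, expect_connected in netlist:
        prober.move_to(probe=1, xy=point_a)   # serial moves are the speed bottleneck
        prober.move_to(probe=2, xy=point_b)
        resistance = prober.measure_resistance(probe_pair=(1, 2))
        connected = resistance < OPEN_THRESHOLD_OHMS
        if connected != expect_connected:
            failures.append((net, resistance))
    return failures
```

In practice the netlist and probe coordinates would be generated from the board's CAD data, which is part of what makes flying-probe test programs quick to update when a design changes.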
References
Electronic test equipment
Hardware testing
Nondestructive testing
|
Flying probe
|
[
"Materials_science",
"Technology",
"Engineering"
] | 560
|
[
"Nondestructive testing",
"Materials testing",
"Electronic test equipment",
"Measuring instruments"
] |
14,326,547
|
https://en.wikipedia.org/wiki/Power-off%20testing
|
Power-off testing of a printed circuit assembly (PCA) is often necessary when the nature of a failure is uncertain. When the PCA could be further damaged by applying power, power-off test techniques must be used to examine it safely. Power-off testing includes analog signature analysis, ohmmeter and LCR meter measurements, and optical inspection. This type of testing also lends itself well to troubleshooting circuit boards without the aid of supporting documentation such as schematics.
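One common power-off approach is to compare measurements taken on a suspect board against the same points on a known-good reference board. The Python sketch below illustrates that comparison for simple ohmmeter readings; the node names, values and 20% tolerance are chosen purely for illustration.

```python
# Compare power-off resistance readings against a known-good board.
# The data and the 20% tolerance are illustrative assumptions.
TOLERANCE = 0.20

def flag_suspect_nodes(reference_ohms: dict, suspect_ohms: dict):
    suspects = []
    for node, ref in reference_ohms.items():
        measured = suspect_ohms.get(node)
        if measured is None or abs(measured - ref) > TOLERANCE * ref:
            suspects.append((node, ref, measured))
    return suspects

reference = {"U1.pin3": 4_700.0, "Q2.base": 10_000.0, "C7.pos": 1_000_000.0}
suspect   = {"U1.pin3": 4_650.0, "Q2.base": 150.0,    "C7.pos": 1_050_000.0}
for node, ref, measured in flag_suspect_nodes(reference, suspect):
    print(f"{node}: expected ~{ref:.0f} ohms, measured {measured}")
```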
Typical equipment
Analog signature analysis
Huntron Tracker
Automated optical inspection
LCR meter
Machine vision
Ohmmeter
Printed circuit board manufacturing
Nondestructive testing
Hardware testing
Electricity
|
Power-off testing
|
[
"Materials_science",
"Engineering"
] | 135
|
[
"Nondestructive testing",
"Electronic engineering",
"Materials testing",
"Electrical engineering",
"Printed circuit board manufacturing"
] |