Dataset Viewer (auto-converted to Parquet)
Columns: id (int64, 39 to 79M), url (string, length 32 to 168), text (string, length 7 to 145k), source (string, length 2 to 105), categories (sequence, length 1 to 6), token_count (int64, 3 to 32.2k), subcategories (sequence, length 0 to 27)
15,828,681
https://en.wikipedia.org/wiki/Dimroth%20rearrangement
The Dimroth rearrangement is a rearrangement reaction taking place with certain 1,2,3-triazoles in which endocyclic and exocyclic nitrogen atoms switch places. This organic reaction was discovered in 1909 by Otto Dimroth. With R a phenyl group, the reaction takes place in boiling pyridine over 24 hours. This type of triazole has an amino group in the 5 position. After ring-opening to a diazo intermediate, C-C bond rotation is possible with 1,3-migration of a proton. Certain 1-alkyl-2-iminopyrimidines also display this type of rearrangement. The first step is an addition reaction of water, followed by ring-opening of the hemiaminal to the aminoaldehyde and then ring closure. A known drug-related example of the Dimroth rearrangement occurs in the synthesis of bemitradine [88133-11-3]. References Rearrangement reactions Name reactions
Dimroth rearrangement
[ "Chemistry" ]
214
[ "Name reactions", "Rearrangement reactions", "Organic reactions" ]
15,831,300
https://en.wikipedia.org/wiki/Tellegen%27s%20theorem
Tellegen's theorem is one of the most powerful theorems in network theory. Most of the energy distribution theorems and extremum principles in network theory can be derived from it. It was published in 1952 by Bernard Tellegen. Fundamentally, Tellegen's theorem gives a simple relation between magnitudes that satisfy Kirchhoff's laws of electrical circuit theory. The Tellegen theorem is applicable to a multitude of network systems. The basic assumptions for the systems are the conservation of flow of extensive quantities (Kirchhoff's current law, KCL) and the uniqueness of the potentials at the network nodes (Kirchhoff's voltage law, KVL). The Tellegen theorem provides a useful tool to analyze complex network systems including electrical circuits, biological and metabolic networks, pipeline transport networks, and chemical process networks. The theorem Consider an arbitrary lumped network that has b branches and n nodes. In an electrical network, the branches are two-terminal components and the nodes are points of interconnection. Suppose that to each branch we assign arbitrarily a branch potential difference W_k and a branch current F_k for k = 1, 2, ..., b, and suppose that they are measured with respect to arbitrarily picked associated reference directions. If the branch potential differences W_1, W_2, ..., W_b satisfy all the constraints imposed by KVL and if the branch currents F_1, F_2, ..., F_b satisfy all the constraints imposed by KCL, then ∑_{k=1}^{b} W_k F_k = 0. Tellegen's theorem is extremely general; it is valid for any lumped network that contains any elements, linear or nonlinear, passive or active, time-varying or time-invariant. The generality is extended when Λ1 and Λ2 are linear operations on the set of potential differences and on the set of branch currents (respectively), since linear operations don't affect KVL and KCL. For instance, the linear operation may be the average or the Laplace transform. More generally, operators that preserve KVL are called Kirchhoff voltage operators, operators that preserve KCL are called Kirchhoff current operators, and operators that preserve both are simply called Kirchhoff operators. These operators need not necessarily be linear for Tellegen's theorem to hold. The set of currents can also be sampled at a different time from the set of potential differences since KVL and KCL are true at all instants of time. Another extension is when the set of potential differences is from one network and the set of currents is from an entirely different network; so long as the two networks have the same topology (same incidence matrix), Tellegen's theorem remains true. This extension of Tellegen's theorem leads to many theorems relating to two-port networks. Definitions We need to introduce a few necessary network definitions to provide a compact proof. Incidence matrix: The matrix A_a is called the node-to-branch incidence matrix, with matrix elements a_ij equal to 1 if the flow in branch j leaves node i, −1 if the flow in branch j enters node i, and 0 if branch j is not incident to node i. A reference or datum node P_0 is introduced to represent the environment and connected to all dynamic nodes and terminals. The matrix A, obtained from A_a by eliminating the row that contains the elements of the reference node P_0, is called the reduced incidence matrix. The conservation laws (KCL) in vector-matrix form: A F = 0. The uniqueness condition for the potentials (KVL) in vector-matrix form: W = A^T w, where w collects the absolute potentials at the nodes relative to the reference node P_0. Proof Using KVL: W^T F = (A^T w)^T F = w^T (A F) = 0, because A F = 0 by KCL. So ∑_{k=1}^{b} W_k F_k = W^T F = 0. Applications Network analogs have been constructed for a wide variety of physical systems, and have proven extremely useful in analyzing their dynamic behavior. 
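The statement and proof above lend themselves to a direct numerical check. The following sketch is not from the article; the network topology and all values are invented for illustration. It builds a reduced incidence matrix, derives KVL-consistent branch voltages from arbitrary node potentials, draws a KCL-consistent branch current vector from the null space of the incidence matrix, and confirms that the products sum to zero regardless of any element laws:

```python
import numpy as np

rng = np.random.default_rng(0)

# Incidence matrix for a small, invented 3-node, 5-branch network.
# Each column has one +1 (branch leaves node) and one -1 (branch enters node).
A_full = np.array([
    [ 1, -1,  0,  1,  0],   # node 1
    [ 0,  1, -1,  0,  1],   # node 2
    [-1,  0,  1, -1, -1],   # reference node (row dropped below)
])
A = A_full[:-1]             # reduced incidence matrix

# KVL-consistent branch potential differences: W = A^T w for arbitrary node potentials w.
w = rng.normal(size=A.shape[0])
W = A.T @ w

# KCL-consistent branch currents: any F in the null space of A satisfies A F = 0.
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[np.linalg.matrix_rank(A):].T
F = null_basis @ rng.normal(size=null_basis.shape[1])

assert np.allclose(A @ F, 0.0)              # KCL holds
print("sum_k W_k F_k =", float(W @ F))      # ≈ 0, as Tellegen's theorem requires
```

Because W and F are constructed independently (only the topology is shared), the printed sum is zero to numerical precision, illustrating why the theorem is independent of the branch elements.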
The classical application area for network theory and Tellegen's theorem is electrical circuit theory. It is mainly used to design filters in signal processing applications. A more recent application of Tellegen's theorem is in the area of chemical and biological processes. The assumptions for electrical circuits (Kirchhoff laws) are generalized for dynamic systems obeying the laws of irreversible thermodynamics. The topology and structure of reaction networks (reaction mechanisms, metabolic networks) can be analyzed using the Tellegen theorem. Another application of Tellegen's theorem is to determine the stability and optimality of complex process systems such as chemical plants or oil production systems. The Tellegen theorem can be formulated for process systems using process nodes, terminals, flow connections and allowing sinks and sources for production or destruction of extensive quantities. The corresponding formulation of Tellegen's theorem for process systems balances the production terms, the terminal connections, and the dynamic storage terms for the extensive variables. References In-line references General references Basic Circuit Theory by C.A. Desoer and E.S. Kuh, McGraw-Hill, New York, 1969 "Tellegen's Theorem and Thermodynamic Inequalities", G.F. Oster and C.A. Desoer, J. Theor. Biol 32 (1971), 219–241 "Network Methods in Models of Production", Donald Watson, Networks, 10 (1980), 1–15 External links Circuit example for Tellegen's theorem G.F. Oster and C.A. Desoer, Tellegen's Theorem and Thermodynamic Inequalities Network thermodynamics Circuit theorems Eponymous theorems of physics
Tellegen's theorem
[ "Physics" ]
1,063
[ "Circuit theorems", "Eponymous theorems of physics", "Equations of physics", "Physics theorems" ]
15,832,717
https://en.wikipedia.org/wiki/Computational%20statistics
Computational statistics, or statistical computing, is the study of the intersection of statistics and computer science, and refers to the statistical methods that are enabled by using computational methods. It is the area of computational science (or scientific computing) specific to the mathematical science of statistics. This area is fast developing. The view that the broader concept of computing must be taught as part of general statistical education is gaining momentum. As in traditional statistics, the goal is to transform raw data into knowledge, but the focus lies on computer-intensive statistical methods, such as cases with very large sample sizes and non-homogeneous data sets. The terms 'computational statistics' and 'statistical computing' are often used interchangeably, although Carlo Lauro (a former president of the International Association for Statistical Computing) proposed making a distinction, defining 'statistical computing' as "the application of computer science to statistics", and 'computational statistics' as "aiming at the design of algorithm for implementing statistical methods on computers, including the ones unthinkable before the computer age (e.g. bootstrap, simulation), as well as to cope with analytically intractable problems" [sic]. The term 'computational statistics' may also be used to refer to computationally intensive statistical methods including resampling methods, Markov chain Monte Carlo methods, local regression, kernel density estimation, artificial neural networks and generalized additive models. History Though computational statistics is widely used today, it actually has a relatively short history of acceptance in the statistics community. For the most part, the founders of the field of statistics relied on mathematics and asymptotic approximations in the development of computational statistical methodology. In 1908, William Sealy Gosset performed his now well-known Monte Carlo simulation, which led to the discovery of the Student’s t-distribution. With the help of computational methods, he also produced plots of the empirical distributions overlaid on the corresponding theoretical distributions. The computer has revolutionized simulation and has made the replication of Gosset’s experiment little more than an exercise. Later, scientists put forward computational ways of generating pseudo-random deviates, devised methods to convert uniform deviates into other distributional forms using the inverse cumulative distribution function or acceptance-rejection methods, and developed state-space methodology for Markov chain Monte Carlo. One of the first efforts to generate random digits in a fully automated way was undertaken by the RAND Corporation in 1947. The tables produced were published as a book in 1955, and also as a series of punch cards. By the mid-1950s, several articles and patents for devices had been proposed for random number generators. The development of these devices was motivated by the need to use random digits to perform simulations and other fundamental components of statistical analysis. One of the best known of such devices is ERNIE, which produces random numbers that determine the winners of the Premium Bond, a lottery bond issued in the United Kingdom. In 1958, John Tukey’s jackknife was developed as a method to reduce the bias of parameter estimates in samples under nonstandard conditions; it requires computers for practical implementation. 
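As an illustration of the jackknife idea just mentioned, the following sketch (not from the article; the statistic and data are invented for demonstration) forms leave-one-out replicates of a biased plug-in estimator and combines them to estimate and remove its bias:

```python
import numpy as np

def jackknife(data, statistic):
    """Return (bias-corrected estimate, jackknife bias estimate) for `statistic` on `data`."""
    n = len(data)
    theta_hat = statistic(data)
    # Leave-one-out replicates of the statistic.
    loo = np.array([statistic(np.delete(data, i)) for i in range(n)])
    bias = (n - 1) * (loo.mean() - theta_hat)
    return theta_hat - bias, bias

rng = np.random.default_rng(1)
sample = rng.normal(loc=0.0, scale=2.0, size=30)

plug_in_var = lambda x: np.var(x)        # divides by n, so it is biased downward
corrected, bias = jackknife(sample, plug_in_var)
print(f"plug-in variance: {plug_in_var(sample):.3f}")
print(f"jackknife-corrected: {corrected:.3f} (estimated bias {bias:.3f})")
```

For the plug-in variance, the jackknife correction recovers the usual unbiased sample variance, which is the textbook example of the bias-reduction behaviour described above.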
To this point, computers have made many tedious statistical studies feasible. Methods Maximum likelihood estimation Maximum likelihood estimation is used to estimate the parameters of an assumed probability distribution, given some observed data. It is achieved by maximizing a likelihood function so that the observed data is most probable under the assumed statistical model. Monte Carlo method Monte Carlo methods are statistical methods that rely on repeated random sampling to obtain numerical results. The concept is to use randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution. Markov chain Monte Carlo The Markov chain Monte Carlo method creates samples from a continuous random variable, with probability density proportional to a known function. These samples can be used to evaluate an integral over that variable, such as its expected value or variance. The more steps that are included, the more closely the distribution of the sample matches the actual desired distribution. Bootstrapping The bootstrap is a resampling technique used to generate samples from an empirical probability distribution defined by an original sample of the population. It can be used to find a bootstrapped estimator of a population parameter. It can also be used to estimate the standard error of an estimator as well as to generate bootstrapped confidence intervals. The jackknife is a related technique. Applications Computational biology Computational linguistics Computational physics Computational mathematics Computational materials science Machine learning Computational statistics journals Communications in Statistics - Simulation and Computation Computational Statistics Computational Statistics & Data Analysis Journal of Computational and Graphical Statistics Journal of Statistical Computation and Simulation Journal of Statistical Software The R Journal The Stata Journal Statistics and Computing Wiley Interdisciplinary Reviews: Computational Statistics Associations International Association for Statistical Computing See also Algorithms for statistical classification Data science Statistical methods in artificial intelligence Free statistical software List of statistical algorithms List of statistical packages Machine learning References Further reading Articles Books External links Associations International Association for Statistical Computing Statistical Computing section of the American Statistical Association Journals Computational Statistics & Data Analysis Journal of Computational & Graphical Statistics Statistics and Computing Numerical analysis Computational fields of study Mathematics of computing
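A minimal sketch of the bootstrap described above (not from the article; the data, estimator, and number of resamples are illustrative): resample with replacement, recompute the estimator on each resample, and summarize the spread of the replicates as a standard error and a percentile confidence interval.

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.exponential(scale=3.0, size=50)        # stand-in for observed data

B = 5000                                            # number of bootstrap resamples
boot_medians = np.empty(B)
for b in range(B):
    resample = rng.choice(sample, size=sample.size, replace=True)
    boot_medians[b] = np.median(resample)

se = boot_medians.std(ddof=1)                       # bootstrapped standard error
lo, hi = np.percentile(boot_medians, [2.5, 97.5])   # 95% percentile interval
print(f"sample median {np.median(sample):.2f}, bootstrap SE {se:.2f}, "
      f"95% CI ({lo:.2f}, {hi:.2f})")
```

The same loop works for any statistic whose sampling distribution is hard to derive analytically, which is exactly the "analytically intractable" setting the article attributes to computational statistics.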
Computational statistics
[ "Mathematics", "Technology" ]
1,073
[ "Computational fields of study", "Computational mathematics", "Mathematical relations", "Computing and society", "Numerical analysis", "Computational statistics", "Approximations" ]
13,160,311
https://en.wikipedia.org/wiki/Airborne%20Real-time%20Cueing%20Hyperspectral%20Enhanced%20Reconnaissance
Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance, also known by the acronym ARCHER, is an aerial imaging system that produces ground images far more detailed than plain sight or ordinary aerial photography can. It is the most sophisticated unclassified hyperspectral imaging system available, according to U.S. Government officials. ARCHER can automatically scan detailed imaging for a given signature of the object being sought (such as a missing aircraft), for abnormalities in the surrounding area, or for changes from previously recorded spectral signatures. It has direct applications for search and rescue, counterdrug, disaster relief and impact assessment, and homeland security, and has been deployed by the Civil Air Patrol (CAP) in the US on the Australian-built Gippsland GA8 Airvan fixed-wing aircraft. CAP, the civilian auxiliary of the United States Air Force, is a volunteer education and public-service non-profit organization that conducts aircraft search and rescue in the US. Overview ARCHER is a daytime non-invasive technology, which works by analyzing an object's reflected light. It cannot detect objects at night, underwater, under dense cover, underground, under snow or inside buildings. The system uses a special camera facing down through a quartz glass portal in the belly of the aircraft, which is typically flown at a standard mission altitude of 2,500 feet above ground level and 100 knots (about 50 meters per second) ground speed. The system software was developed by Space Computer Corporation of Los Angeles and the system hardware is supplied by NovaSol Corp. of Honolulu, Hawaii specifically for CAP. The ARCHER system is based on hyperspectral technology research and testing previously undertaken by the United States Naval Research Laboratory (NRL) and Air Force Research Laboratory (AFRL). CAP developed ARCHER in cooperation with the NRL, AFRL and the United States Coast Guard Research & Development Center in the largest interagency project CAP has undertaken in its 74-year history. Since 2003, almost US$5 million authorized under the 2002 Defense Appropriations Act has been spent on development and deployment. CAP reported completing the initial deployment of 16 aircraft throughout the U.S. and training over 100 operators, but had only used the system on a few search and rescue missions, and had not credited it with being the first to find any wreckage. In searches in Georgia and Maryland during 2007, ARCHER located the aircraft wreckage, but both accidents had no survivors, according to Col. Drew Alexa, director of advanced technology, and the ARCHER program manager at CAP. An ARCHER-equipped aircraft from the Utah Wing of the Civil Air Patrol was used in the search for adventurer Steve Fossett in September 2007. ARCHER did not locate Mr. Fossett, but was instrumental in uncovering eight previously uncharted crash sites in the high desert area of Nevada, some decades old. Col. Alexa described the system to the press in 2007: "The human eye sees basically three bands of light. The ARCHER sensor sees 50. It can see things that are anomalous in the vegetation such as metal or something from an airplane wreckage." Major Cynthia Ryan of the Nevada Civil Air Patrol, while also describing the system to the press in 2007, stated, "ARCHER is essentially something used by the geosciences. 
It's pretty sophisticated stuff … beyond what the human eye can generally see." She elaborated further, "It might see boulders, it might see trees, it might see mountains, sagebrush, whatever, but it goes 'not that' or 'yes, that'. The amazing part of this is that it can see as little as 10 per cent of the target, and extrapolate from there." In addition to the primary search and rescue mission, CAP has tested additional uses for ARCHER. For example, an ARCHER-equipped CAP GA8 was used in a pilot project in Missouri in August 2005 to assess the suitability of the system for tracking hazardous material releases into the environment, and one was deployed to track oil spills in the aftermath of Hurricane Rita in Texas during September 2005. Since then, the ARCHER system has proved its usefulness; in October 2006 it found the wreckage of a flight originating in Missouri near Antlers, Oklahoma. The National Transportation Safety Board was extremely pleased with the data ARCHER provided, which was later used to locate aircraft debris spread over miles of rough, wooded terrain. In July 2007, the ARCHER system identified a flood-borne oil spill originating in a Kansas oil refinery that extended downstream and had invaded previously unsuspected reservoir areas. The client agencies (EPA, Coast Guard, and other federal and state agencies) found the data essential to quick remediation. In September 2008, a Civil Air Patrol GA-8 from Texas Wing searched for a missing aircraft from Arkansas. It was found in Oklahoma, identified simultaneously by ground searchers and the overflying ARCHER system. Rather than a direct find, this was a validation of the system's accuracy and efficacy. In the subsequent recovery, it was found that ARCHER plotted the debris area with great accuracy. Technical description The major ARCHER subsystem components include: an advanced hyperspectral imaging (HSI) system with a resolution of one square meter per pixel; a panchromatic high-resolution imaging (HRI) camera with a resolution of about 3 inches per pixel; and a global positioning system (GPS) integrated with an inertial navigation system (INS). Hyperspectral imager The passive hyperspectral imaging spectroscopy remote sensor observes a target in multi-spectral bands. The HSI camera separates the image spectra into 52 "bins" from 500 nanometers (nm) wavelength at the blue end of the visible spectrum to 1100 nm in the infrared, giving the camera a spectral resolution of 11.5 nm. Although ARCHER records data in all 52 bands, the computational algorithms only use the first 40 bands, from 500 nm to 960 nm, because the bands above 960 nm are too noisy to be useful. For comparison, the normal human eye will respond to wavelengths from approximately 400 to 700 nm, and is trichromatic, meaning the eye's cone cells only sense light in three spectral bands. As the ARCHER aircraft flies over a search area, reflected sunlight is collected by the HSI camera lens. The collected light passes through a set of lenses that focus the light to form an image of the ground. The imaging system uses a pushbroom approach to image acquisition. With the pushbroom approach, the focusing slit reduces the image height to the equivalent of one vertical pixel, creating a horizontal line image. The horizontal line image is then projected onto a diffraction grating, which is a very finely etched reflecting surface that disperses light into its spectra. 
The diffraction grating is specially constructed and positioned to create a two-dimensional (2D) spectrum image from the horizontal line image. The spectra are projected vertically, i.e., perpendicular to the line image, by the design and arrangement of the diffraction grating. The 2D spectrum image projects onto a charge-coupled device (CCD) two-dimensional image sensor, which is aligned so that the horizontal pixels are parallel to the image's horizontal. As a result, the vertical pixels are coincident with the spectra produced from the diffraction grating. Each column of pixels receives the spectrum of one horizontal pixel from the original image. The arrangement of vertical pixel sensors in the CCD divides the spectrum into distinct and non-overlapping intervals. The CCD output consists of electrical signals for 52 spectral bands for each of 504 horizontal image pixels. The on-board computer records the CCD output signal at a frame rate of sixty times each second. At an aircraft altitude of 2,500 ft AGL and a speed of 100 knots, a 60 Hz frame rate equates to a ground image resolution of approximately one square meter per pixel. Thus, every frame captured from the CCD contains the spectral data for a ground swath that is approximately one meter long and 500 meters wide. High-resolution imager A high-resolution imaging (HRI) black-and-white, or panchromatic, camera is mounted adjacent to the HSI camera to enable both cameras to capture the same reflected light. The HRI camera uses a pushbroom approach just like the HSI camera, with a similar lens and slit arrangement to limit the incoming light to a thin, wide beam. However, the HRI camera does not have a diffraction grating to disperse the incoming reflected light. Instead, the light is directed to a wider CCD to capture more image data. Because it captures a single line of the ground image per frame, it is called a line scan camera. The HRI CCD is 6,144 pixels wide and one pixel high. It operates at a frame rate of 720 Hz. At ARCHER search speed and altitude (100 knots over the ground at 2,500 ft AGL), each pixel in the black-and-white image represents a 3 inch by 3 inch area of the ground. This high resolution adds the capability to identify some objects. Processing A monitor in the cockpit displays detailed images in real time, and the system also logs the image and Global Positioning System data at a rate of 30 gigabytes (GB) per hour for later analysis. The on-board data processing system performs numerous real-time processing functions including data acquisition and recording, raw data correction, target detection, cueing and chipping, precision image geo-registration, and display and dissemination of image products and target cue information. ARCHER has three methods for locating targets: signature matching, where reflected light is matched to spectral signatures; anomaly detection, which uses a statistical model of the pixels in the image to determine the probability that a pixel does not match the profile; and change detection, which executes a pixel-by-pixel comparison of the current image against ground conditions that were obtained in a previous mission over the same area. In change detection, scene changes are identified, and new, moved or departed targets are highlighted for evaluation. In spectral signature matching, the system can be programmed with the parameters of a missing aircraft, such as paint colors, to alert the operators of possible wreckage. 
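The sampling figures quoted above can be checked with simple arithmetic. The short sketch below (not part of the article) recomputes the along-track ground sampling of the HSI and HRI cameras from the stated 100-knot ground speed and frame rates, and the spectral bin width from the 52 bands spanning 500–1100 nm:

```python
KNOT_IN_MS = 0.5144                    # metres per second per knot
ground_speed = 100 * KNOT_IN_MS        # ≈ 51.4 m/s at the stated 100 knots

# HSI camera: along-track ground sampling at a 60 Hz frame rate.
hsi_along_track = ground_speed / 60    # ≈ 0.86 m, i.e. roughly one metre per frame

# HSI camera: spectral bin width for 52 bands spanning 500–1100 nm.
bin_width_nm = (1100 - 500) / 52       # ≈ 11.5 nm, matching the quoted resolution

# HRI camera: along-track sampling at a 720 Hz line rate.
hri_along_track = ground_speed / 720   # ≈ 0.07 m ≈ 2.8 in, i.e. "about 3 inches"

print(f"HSI along-track sampling: {hsi_along_track:.2f} m")
print(f"HSI spectral bin width:   {bin_width_nm:.1f} nm")
print(f"HRI along-track sampling: {hri_along_track / 0.0254:.1f} in")
```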
It can also be used to look for specific materials, such as petroleum products or other chemicals released into the environment, or even ordinary items like commonly available blue polyethylene tarpaulins. In an impact assessment role, information on the location of blue tarps used to temporarily repair buildings damaged in a storm can help direct disaster relief efforts; in a counterdrug role, a blue tarp located in a remote area could be associated with illegal activity. References External links NovaSol Corp Space Computer Corporation Civil Air Patrol Spectroscopy Earth observation remote sensors
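Two of the three detection modes described above can be illustrated compactly. The following sketch is not the ARCHER implementation; it uses a synthetic hyperspectral cube and an invented target signature to show signature matching via the spectral angle and anomaly detection via the Mahalanobis distance from scene statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, B = 64, 64, 40                           # synthetic cube: 64x64 pixels, 40 bands
cube = rng.normal(1.0, 0.05, size=(H, W, B))   # flat "background" spectra plus noise
target_signature = np.linspace(0.2, 1.8, B)    # invented reference signature
cube[10, 20] = target_signature * 1.02         # plant one target-like pixel
pixels = cube.reshape(-1, B)

# Signature matching: spectral angle between every pixel and the reference signature.
cos = (pixels @ target_signature) / (
    np.linalg.norm(pixels, axis=1) * np.linalg.norm(target_signature))
angles = np.arccos(np.clip(cos, -1.0, 1.0)).reshape(H, W)
print("best signature match at", np.unravel_index(angles.argmin(), angles.shape))

# Anomaly detection: Mahalanobis distance of each pixel from the scene statistics.
mean = pixels.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False) + 1e-6 * np.eye(B))
diff = pixels - mean
mdist = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff)).reshape(H, W)
print("strongest anomaly at", np.unravel_index(mdist.argmax(), mdist.shape))
```

Change detection, the third mode, would amount to differencing two such cubes of the same area acquired on different missions and flagging pixels whose spectra changed beyond a threshold.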
Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance
[ "Physics", "Chemistry" ]
2,169
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
13,163,358
https://en.wikipedia.org/wiki/Whole%20number%20rule
In chemistry, the whole number rule states that the masses of the isotopes are whole number multiples of the mass of the hydrogen atom. The rule is a modified version of Prout's hypothesis proposed in 1815, to the effect that atomic weights are multiples of the weight of the hydrogen atom. It is also known as the Aston whole number rule after Francis W. Aston, who was awarded the Nobel Prize in Chemistry in 1922 "for his discovery, by means of his mass spectrograph, of isotopes, in a large number of non-radioactive elements, and for his enunciation of the whole-number rule." Law of definite proportions The law of definite proportions was formulated by Joseph Proust around 1800 and states that all samples of a chemical compound will have the same elemental composition by mass. The atomic theory of John Dalton expanded this concept and explained matter as consisting of discrete atoms with one kind of atom for each element combined in fixed proportions to form compounds. Prout's hypothesis In 1815, William Prout reported on his observation that the atomic weights of the elements were whole multiples of the atomic weight of hydrogen. He then hypothesized that the hydrogen atom was the fundamental object and that the other elements were a combination of different numbers of hydrogen atoms. Aston's discovery of isotopes In 1920, Francis W. Aston demonstrated through the use of a mass spectrometer that apparent deviations from Prout's hypothesis are predominantly due to the existence of isotopes. For example, Aston discovered that neon has two isotopes with masses very close to 20 and 22 as per the whole number rule, and proposed that the non-integer value 20.2 for the atomic weight of neon is due to the fact that natural neon is a mixture of about 90% neon-20 and 10% neon-22. A secondary cause of deviations is the binding energy or mass defect of the individual isotopes. Discovery of the neutron During the 1920s, it was thought that the atomic nucleus was made of protons and electrons, which would account for the disparity between the atomic number of an atom and its atomic mass. In 1932, James Chadwick discovered an uncharged particle of approximately the same mass as the proton, which he called the neutron. The fact that the atomic nucleus is composed of protons and neutrons was rapidly accepted and Chadwick was awarded the Nobel Prize in Physics in 1935 for his discovery. The modern form of the whole number rule is that the atomic mass of a given elemental isotope is approximately the mass number (number of protons plus neutrons) times an atomic mass unit (approximate mass of a proton, neutron, or hydrogen-1 atom). This rule predicts the atomic mass of nuclides and isotopes with an error of at most 1%, with most of the error explained by the mass deficit caused by nuclear binding energy. References Further reading External links 1922 Nobel Prize Presentation Speech Mass spectrometry Periodic table
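A short worked check of the neon example and of the modern form of the rule (not part of the article; the isotope masses below are approximate tabulated values used only for illustration):

```python
# Aston's neon argument: a ~90/10 mixture of Ne-20 and Ne-22 gives the
# non-integer atomic weight of about 20.2 (isotope masses are approximate).
ne20_mass, ne22_mass = 19.992, 21.991            # atomic mass units (u)
neon_weight = 0.90 * ne20_mass + 0.10 * ne22_mass
print(f"mixture atomic weight ≈ {neon_weight:.1f} u")        # ≈ 20.2

# Modern form of the rule: isotope mass ≈ mass number × 1 u, with error under ~1%.
isotopes = {"H-1": (1, 1.00783), "Ne-20": (20, 19.99244), "Fe-56": (56, 55.93494)}
for name, (mass_number, mass_u) in isotopes.items():
    error_pct = abs(mass_u - mass_number) / mass_u * 100
    print(f"{name}: {mass_u} u vs A = {mass_number}  ->  {error_pct:.2f}% deviation")
```

The largest deviation in this small set is for hydrogen-1 itself (under 0.8%), consistent with the article's statement that the rule holds to within about 1%, with the residual explained by nuclear binding energy.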
Whole number rule
[ "Physics", "Chemistry" ]
602
[ "Periodic table", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
13,164,797
https://en.wikipedia.org/wiki/Live%20bottom%20trailer
A live bottom trailer is a semi-trailer used for hauling loose material such as asphalt, grain, potatoes, sand and gravel. A live bottom trailer is an alternative to a dump truck or an end dump trailer. The typical live bottom trailer has a conveyor belt on the bottom of the trailer tub that pushes the material out of the back of the trailer at a controlled pace. Unlike the conventional dump truck, the tub does not have to be raised to deposit the materials. Operation The live bottom trailer is powered by a hydraulic system. When the operator engages the truck hydraulic system, it activates the conveyor belt, moving the load horizontally out of the back of the trailer. Uses Live bottom trailers can haul a variety of products including gravel, potatoes, top soil, grain, carrots, sand, lime, peat moss, asphalt, compost, rip-rap, heavy rocks, biowaste, etc. Those who work in industries such as agriculture and construction benefit from the speed of unloading and the versatility of the trailer and chassis mount. Safety The live bottom trailer eliminates trailer roll over because the tub does not have to be raised in the air to unload the materials. The trailer has a lower centre of gravity, which makes it easy for the trailer to unload in an uneven area, compared to dump trailers that have to be on level ground to unload. Overhead electrical wires are a danger for the conventional dump trailer during unloading, but with a live bottom, wires are not a problem. The trailer can work anywhere it can drive into because the tub does not have to be raised for unloading. In addition, the truck cannot be accidentally driven with the trailer raised, which has been a cause of a number of accidents, often involving collision with bridges, overpasses, or overhead/suspended traffic signs/lights. Advantages The tub empties clean, making it easier for different materials to be transported without having to get inside the tub to clean it out. The conveyor belt allows the material to be dumped at a controlled pace so that the material can be partially unloaded where it is needed. The rounded tub results in a lower centre of gravity, which means a smoother ride and better handling than other trailers. Working under bridges and in confined areas is easier with a live bottom as opposed to a dump trailer because it can fit anywhere it can drive. Wet or dry materials can be hauled in a live bottom trailer. In a dump truck, wet materials stick in the top of the tub during unloading and can cause trailer roll over. Insurance costs are lower for a live bottom trailer because it does not have to be raised in the air and there are few cases of trailer roll over. Disadvantages Some live bottom trailers are not well suited for heavy rock and demolition. However, rip-rap, heavy rock, and asphalt can be hauled if the trailer is built with appropriately strong steels. See also Moving floor, a hydraulically driven conveyance system also used in semi-trailers External links Engineering vehicles
Live bottom trailer
[ "Engineering" ]
606
[ "Engineering vehicles" ]
13,165,796
https://en.wikipedia.org/wiki/Ocean%20heat%20content
Ocean heat content (OHC) or ocean heat uptake (OHU) is the energy absorbed and stored by oceans. To calculate the ocean heat content, it is necessary to measure ocean temperature at many different locations and depths. Integrating the areal density of a change in enthalpic energy over an ocean basin or entire ocean gives the total ocean heat uptake. Between 1971 and 2018, the rise in ocean heat content accounted for over 90% of Earth's excess energy from global heating. The main driver of this increase was human-caused greenhouse gas emissions. By 2020, about one third of the added energy had propagated to depths below 700 meters. In 2023, the world's oceans were again the hottest in the historical record and exceeded the previous 2022 record maximum. The five highest ocean heat observations to a depth of 2000 meters occurred in the period 2019–2023. The North Pacific, North Atlantic, the Mediterranean, and the Southern Ocean all recorded their highest heat observations in more than sixty years of global measurements. Ocean heat content and sea level rise are important indicators of climate change. Ocean water can absorb a lot of solar energy because water has far greater heat capacity than atmospheric gases. As a result, the top few meters of the ocean contain more energy than the entire Earth's atmosphere. Since before 1960, research vessels and stations have sampled sea surface temperatures and temperatures at greater depth all over the world. Since 2000, an expanding network of nearly 4000 Argo robotic floats has measured temperature anomalies, or the change in ocean heat content. With improving observations in recent decades, the heat content of the upper ocean has been found to have increased at an accelerating rate. The net rate of change in the top 2000 meters from 2003 to 2018 corresponded to an annual mean energy gain of 9.3 zettajoules. It is difficult to measure temperatures accurately over long periods while at the same time covering enough areas and depths. This explains the uncertainty in the figures. Changes in ocean temperature greatly affect ecosystems in oceans and on land. For example, there are multiple impacts on coastal ecosystems and communities relying on their ecosystem services. Direct effects include variations in sea level and sea ice, changes to the intensity of the water cycle, and the migration of marine life. Calculations Definition Ocean heat content is a term used in physical oceanography to describe a type of thermodynamic potential energy that is stored in the ocean. It is defined in coordination with the equation of state of seawater. TEOS-10 is an international standard approved in 2010 by the Intergovernmental Oceanographic Commission. Calculation of ocean heat content follows that of enthalpy referenced to the ocean surface, also called potential enthalpy. OHC changes are thus made more readily comparable to seawater heat exchanges with ice, freshwater, and humid air. OHC is always reported as a change or as an "anomaly" relative to a baseline. Positive values then also quantify ocean heat uptake (OHU) and are useful to diagnose where most of planetary energy gains from global heating are going. To calculate the ocean heat content, measurements of ocean temperature from sample parcels of seawater gathered at many different locations and depths are required. Integrating the areal density of ocean heat over an ocean basin, or entire ocean, gives the total ocean heat content. 
Thus, total ocean heat content is a volume integral of the product of temperature, density, and heat capacity over the three-dimensional region of the ocean for which data is available. The bulk of measurements have been performed at depths shallower than about 2000 m (1.25 miles). The areal density of ocean heat content between two depths is computed as a definite integral: H = c_p ∫ from h2 to h1 of ρ(z) Θ(z) dz, where c_p is the specific heat capacity of sea water, h2 is the lower depth, h1 is the upper depth, ρ(z) is the in-situ seawater density profile, and Θ(z) is the conservative temperature profile. Θ is defined relative to a single depth h0, usually chosen as the ocean surface. In SI units, H has units of Joules per square metre (J·m−2). In practice, the integral can be approximated by summation using a smooth and otherwise well-behaved sequence of in-situ data, including temperature (t), pressure (p), salinity (s) and their corresponding density (ρ). Conservative temperature Θ consists of values translated relative to the reference pressure (p0) at h0. A substitute known as potential temperature has been used in earlier calculations. Measurements of temperature versus ocean depth generally show an upper mixed layer (0–200 m), a thermocline (200–1500 m), and a deep ocean layer (>1500 m). These boundary depths are only rough approximations. Sunlight penetrates to a maximum depth of about 200 m, the top 80 m of which is the habitable zone for photosynthetic marine life covering over 70% of Earth's surface. Wave action and other surface turbulence help to equalize temperatures throughout the upper layer. Unlike surface temperatures which decrease with latitude, deep-ocean temperatures are relatively cold and uniform in most regions of the world. About 50% of all ocean volume is at depths below 3000 m (1.85 miles), with the Pacific Ocean being the largest and deepest of five oceanic divisions. The thermocline is the transition between upper and deep layers in terms of temperature, nutrient flows, abundance of life, and other properties. It is semi-permanent in the tropics, variable in temperate regions (often deepest during the summer), and shallow to nonexistent in polar regions. Measurements Ocean heat content measurements come with difficulties, especially before the deployment of the Argo profiling floats. Due to poor spatial coverage and poor quality of data, it has not always been easy to distinguish between long term global warming trends and climate variability. Examples of these complicating factors are the variations caused by El Niño–Southern Oscillation or changes in ocean heat content caused by major volcanic eruptions. Argo is an international program of robotic profiling floats deployed globally since the start of the 21st century. The program's initial 3000 units had expanded to nearly 4000 units by year 2020. At the start of each 10-day measurement cycle, a float descends to a depth of 1000 meters and drifts with the current there for nine days. It then descends to 2000 meters and measures temperature, salinity (conductivity), and depth (pressure) over a final day of ascent to the surface. At the surface the float transmits the depth profile and horizontal position data through satellite relays before repeating the cycle. Starting 1992, the TOPEX/Poseidon and subsequent Jason satellite series altimeters have observed vertically integrated OHC, which is a major component of sea level rise. Since 2002, GRACE and GRACE-FO have remotely monitored ocean changes using gravimetry. 
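A minimal numerical sketch (not from the article) of the areal-density integral defined above: given discrete profiles of conservative temperature and in-situ density, approximate H = c_p ∫ ρ(z) Θ(z) dz between two depths with a trapezoidal sum. The constant is the approximate TEOS-10 specific heat; the profile shapes and values are synthetic and purely illustrative.

```python
import numpy as np

C_P = 3992.0            # J kg^-1 K^-1, approximate TEOS-10 specific heat of seawater

# Synthetic profiles over the top 2000 m (values are illustrative only).
z = np.linspace(0.0, 2000.0, 201)          # depth in metres, positive downward
theta = 2.0 + 18.0 * np.exp(-z / 250.0)    # conservative temperature profile (degC)
rho = 1025.0 + 0.005 * z                   # in-situ density profile (kg m^-3)

def ohc_areal_density(z, rho, theta, h_upper=0.0, h_lower=2000.0):
    """Approximate H = c_p * integral of rho(z)*theta(z) dz between two depths (J m^-2)."""
    mask = (z >= h_upper) & (z <= h_lower)
    integrand = rho[mask] * theta[mask]
    dz = np.diff(z[mask])
    return C_P * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dz)   # trapezoid rule

print(f"areal OHC density, 0-2000 m: {ohc_areal_density(z, rho, theta):.3e} J m^-2")
```

In practice the same summation is applied to Argo profiles, and OHC is reported as the change of this quantity relative to a baseline rather than as an absolute value.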
The partnership between Argo and satellite measurements has thereby yielded ongoing improvements to estimates of OHC and other global ocean properties. Causes for heat uptake Ocean heat uptake accounts for over 90% of total planetary heat uptake, mainly as a consequence of human-caused changes to the composition of Earth's atmosphere. This high percentage is because waters at and below the ocean surface, especially the turbulent upper mixed layer, exhibit a thermal inertia much larger than the planet's exposed continental crust, ice-covered polar regions, or atmospheric components themselves. A body with large thermal inertia stores a large amount of energy because of its heat capacity, and effectively transmits energy according to its heat transfer coefficient. Most extra energy that enters the planet via the atmosphere is thereby taken up and retained by the ocean. Planetary heat uptake or heat content accounts for the entire energy added to or removed from the climate system. It can be computed as an accumulation over time of the observed differences (or imbalances) between total incoming and outgoing radiation. Changes to the imbalance have been estimated from Earth orbit by CERES and other remote instruments, and compared against in-situ surveys of heat inventory changes in oceans, land, ice and the atmosphere. Achieving complete and accurate results from either accounting method is challenging, but in different ways that are viewed by researchers as being mostly independent of each other. Increases in planetary heat content for the well-observed 2005–2019 period are thought to exceed measurement uncertainties. From the ocean perspective, the more abundant equatorial solar irradiance is directly absorbed by Earth's tropical surface waters and drives the overall poleward propagation of heat. The surface also exchanges energy that has been absorbed by the lower troposphere through wind and wave action. Over time, a sustained imbalance in Earth's energy budget enables a net flow of heat either into or out of greater ocean depth via thermal conduction, downwelling, and upwelling. Releases of OHC to the atmosphere occur primarily via evaporation and enable the planetary water cycle. Concentrated releases in association with high sea surface temperatures help drive tropical cyclones, atmospheric rivers, atmospheric heat waves and other extreme weather events that can penetrate far inland. Altogether these processes enable the ocean to be Earth's largest thermal reservoir, which functions to regulate the planet's climate, acting as both a sink and a source of energy. From the perspective of land and ice covered regions, their portion of heat uptake is reduced and delayed by the dominant thermal inertia of the ocean. Although the average rise in land surface temperature has exceeded that of the ocean surface due to the lower inertia (smaller heat-transfer coefficient) of solid land and ice, temperatures would rise more rapidly and by a greater amount without the full ocean. Measurements of how rapidly the heat mixes into the deep ocean have also been underway to better close the ocean and planetary energy budgets. Recent observations and changes Numerous independent studies in recent years have found a multi-decadal rise in OHC of upper ocean regions that has begun to penetrate to deeper regions. The upper ocean (0–700 m) has warmed since 1971, while it is very likely that warming has occurred at intermediate depths (700–2000 m) and likely that deep ocean (below 2000 m) temperatures have increased. 
The heat uptake results from a persistent warming imbalance in Earth's energy budget that is most fundamentally caused by the anthropogenic increase in atmospheric greenhouse gases. There is very high confidence that increased ocean heat content in response to anthropogenic carbon dioxide emissions is essentially irreversible on human time scales. Studies based on Argo measurements indicate that ocean surface winds, especially the subtropical trade winds in the Pacific Ocean, change the vertical distribution of ocean heat. This results in changes among ocean currents, and an increase of the subtropical overturning, which is also related to the El Niño and La Niña phenomena. Depending on stochastic natural variability fluctuations, during La Niña years around 30% more heat from the upper ocean layer is transported into the deeper ocean. Furthermore, studies have shown that approximately one-third of the observed warming in the ocean is taking place in the 700–2000 meter ocean layer. Model studies indicate that ocean currents transport more heat into deeper layers during La Niña years, following changes in wind circulation. Years with increased ocean heat uptake have been associated with negative phases of the interdecadal Pacific oscillation (IPO). This is of particular interest to climate scientists who use the data to estimate the ocean heat uptake. The upper ocean heat content in most North Atlantic regions is dominated by heat transport convergence (a location where ocean currents meet), without large changes to the relation between temperature and salinity. Additionally, a study from 2022 on anthropogenic warming in the ocean indicates that 62% of the warming between 1850 and 2018 in the North Atlantic along 25°N is kept in the water below 700 m, where a major percentage of the ocean's surplus heat is stored. A study in 2015 concluded that ocean heat content increases in the Pacific Ocean were compensated by an abrupt distribution of OHC into the Indian Ocean. Although the upper 2000 m of the oceans have experienced warming on average since the 1970s, the rate of ocean warming varies regionally, with the subpolar North Atlantic warming more slowly and the Southern Ocean taking up a disproportionately large amount of heat due to anthropogenic greenhouse gas emissions. Deep-ocean warming below 2000 m has been largest in the Southern Ocean compared to other ocean basins. Impacts Warming oceans are one reason for coral bleaching and contribute to the migration of marine species. Marine heat waves are regions of life-threatening and persistently elevated water temperatures. Redistribution of the planet's internal energy by atmospheric circulation and ocean currents produces internal climate variability, often in the form of irregular oscillations, and helps to sustain the global thermohaline circulation. The increase in OHC accounts for 30–40% of global sea-level rise from 1900 to 2020 because of thermal expansion. It is also an accelerator of sea ice, iceberg, and tidewater glacier melting. The ice loss reduces polar albedo, amplifying both the regional and global energy imbalances. The resulting ice retreat has been rapid and widespread for Arctic sea ice, and within northern fjords such as those of Greenland and Canada. Impacts to Antarctic sea ice and the vast Antarctic ice shelves which terminate into the Southern Ocean have varied by region and are also increasing due to warming waters. 
Breakup of the Thwaites Ice Shelf and its West Antarctica neighbors contributed about 10% of sea-level rise in 2020. The ocean also functions as a sink and source of carbon, with a role comparable to that of land regions in Earth's carbon cycle. In accordance with the temperature dependence of Henry's law, warming surface waters are less able to absorb atmospheric gases including oxygen and the growing emissions of carbon dioxide and other greenhouse gases from human activity. Nevertheless, the rate at which the ocean absorbs anthropogenic carbon dioxide has approximately tripled from the early 1960s to the late 2010s, a scaling proportional to the increase in atmospheric carbon dioxide. Warming of the deep ocean has the further potential to melt and release some of the vast store of frozen methane hydrate deposits that have naturally accumulated there. See also References External links NOAA Global Ocean Heat and Salt Content Meteorological concepts Climate change Climatology Earth Earth sciences Environmental science Oceanography Articles containing video clips
Ocean heat content
[ "Physics", "Environmental_science" ]
2,925
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics", "nan" ]
13,167,602
https://en.wikipedia.org/wiki/Submersion%20%28coastal%20management%29
Submersion is the sustainable cyclic portion of coastal erosion in which coastal sediments move from the visible portion of a beach to the submerged nearshore region, and later return to the original visible portion of the beach. The recovery portion of the sustainable cycle of sediment behaviour is named accretion. Submersion vs erosion The sediment that is submerged during rough weather forms landforms including storm bars. In calmer weather, waves return sediment to the visible part of the beach. Due to longshore drift, some sediment can end up further along the beach from where it started. Many coastal areas have developed sustainable coastal positions in which the sediment moving off beaches is part of sustainable submersion. On many inhabited coastlines, anthropogenic interference in coastal processes has meant that erosion is often more permanent than submersion. Community perception The term erosion is often associated with undesirable impacts on the environment, whereas submersion is a sustainable part of healthy foreshores. Communities making decisions about coastal management need to understand the components of beach recession and be able to separate temporary, sustainable submersion from the more serious, irreversible erosion caused by anthropogenic interference or climate change. References Coastal geography Geological processes Physical oceanography
Submersion (coastal management)
[ "Physics" ]
248
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
13,168,288
https://en.wikipedia.org/wiki/Jackup%20rig
A jackup rig or a self-elevating unit is a type of mobile platform that consists of a buoyant hull fitted with a number of movable legs, capable of raising its hull over the surface of the sea. The buoyant hull enables transportation of the unit and all attached machinery to a desired location. Once on location the hull is raised to the required elevation above the sea surface supported by the sea bed. The legs of such units may be designed to penetrate the sea bed, may be fitted with enlarged sections or footings, or may be attached to a bottom mat. Generally jackup rigs are not self-propelled and rely on tugs or heavy lift ships for transportation. Jackup platforms are almost exclusively used as exploratory oil and gas drilling platforms and as offshore and wind farm service platforms. Jackup rigs can either be triangular in shape with three legs or square in shape with four legs. Jackup platforms have been the most popular and numerous of various mobile types in existence. The total number of jackup drilling rigs in operation numbered about 540 at the end of 2013. The tallest jackup rig built to date is the Noble Lloyd Noble, completed in 2016 with legs 214 metres (702 feet) tall. Name Jackup rigs are so named because they are self-elevating with three, four, six and even eight movable legs that can be extended (“jacked”) above or below the hull. Jackups are towed or moved under self propulsion to the site with the hull lowered to the water level, and the legs extended above the hull. The hull is actually a water-tight barge that floats on the water’s surface. When the rig reaches the work site, the crew jacks the legs downward through the water and into the sea floor (or onto the sea floor with mat supported jackups). This anchors the rig and holds the hull well above the waves. History An early design was the DeLong platform, designed by Leon B. DeLong. In 1949 he started his own company, DeLong Engineering & Construction Company. In 1950 he constructed the DeLong Rig No. 1 for Magnolia Petroleum, consisting of a barge with six legs. In 1953 DeLong entered into a joint venture with McDermott, which built the DeLong-McDermott No.1 in 1954 for Humble Oil. This was the first mobile offshore drilling platform. This barge had ten legs which had spud cans to prevent them from digging into the seabed too deep. When DeLong-McDermott was taken over by the Southern Natural Gas Company, which formed The Offshore Company, the platform was called Offshore No. 51. In 1954, Zapata Offshore, owned by George H. W. Bush, ordered the Scorpion. It was designed by R. G. LeTourneau and featured three electro-mechanically-operated lattice type legs. Built on the shores of the Mississippi River by the LeTourneau Company, it was launched in December 1955. The Scorpion was put into operation in May 1956 off Port Aransas, Texas. The second, also designed by LeTourneau, was called Vinegaroon. Operation A jackup rig is a barge fitted with long support legs that can be raised or lowered. The jackup is maneuvered (self-propelled or by towing) into location with its legs up and the hull floating on the water. Upon arrival at the work location, the legs are jacked down onto the seafloor. Then "preloading" takes place, where the weight of the barge and additional ballast water are used to drive the legs securely into the sea bottom so they will not penetrate further while operations are carried out. 
After preloading, the jacking system is used to raise the entire barge above the water to a predetermined height or "air gap", so that wave, tidal and current loading acts only on the relatively slender legs and not on the barge hull. Modern jacking systems use a rack and pinion gear arrangement where the pinion gears are driven by hydraulic or electric motors and the rack is affixed to the legs. Jackup rigs can only be placed in relatively shallow waters. However, a specialized class of jackup rigs known as premium or ultra-premium jackups are known to have operational capability in water depths ranging from 150 to 190 meters (500 to 625 feet). Types Mobile Offshore Drilling Units (MODU) This type of rig is commonly used in connection with oil and/or natural gas drilling. There are more jackup rigs in the worldwide offshore rig fleet than any other type of mobile offshore drilling rig. Other types of offshore rigs include semi-submersibles (which float on pontoon-like structures) and drillships, which are ship-shaped vessels with rigs mounted in their center. These rigs drill through holes in the drillship hulls, known as moon pools. Turbine Installation Vessel (TIV) This type of rig is commonly used in connection with offshore wind turbine installation. Barges Jackup rigs can also refer to specialized barges that are similar to an oil and gas platform but are used as a base for servicing other structures such as offshore wind turbines, long bridges, and drilling platforms. See also Crane vessel Offshore geotechnical engineering Oil platform Rack phase difference TIV Resolution References Oil platforms Ship types
Jackup rig
[ "Chemistry", "Engineering" ]
1,091
[ "Oil platforms", "Petroleum technology", "Natural gas technology", "Structural engineering" ]
14,325,911
https://en.wikipedia.org/wiki/BCAR1
Breast cancer anti-estrogen resistance protein 1 is a protein that in humans is encoded by the BCAR1 gene. Gene BCAR1 is localized on the q region of chromosome 16, on the negative strand, and consists of seven exons. Eight different gene isoforms have been identified that share the same sequence starting from the second exon onwards but are characterized by different starting sites. The longest isoform is called BCAR1-iso1 (RefSeq NM_001170714.1) and is 916 amino acids long; the other, shorter isoforms start with an alternative first exon. Function BCAR1 is a ubiquitously expressed adaptor molecule originally identified as the major substrate of v-Src and v-Crk. p130Cas/BCAR1 belongs to the Cas family of adaptor proteins and can act as a docking protein for several signalling partners. Due to its ability to associate with multiple signaling partners, p130Cas/BCAR1 contributes to the regulation of a variety of signaling pathways governing cell adhesion, migration, invasion and apoptosis, as well as responses to hypoxia and mechanical forces. p130Cas/BCAR1 plays a role in cell transformation and cancer progression, and alterations of p130Cas/BCAR1 expression and the resulting activation of selective signalling are determinants for the occurrence of different types of human tumors. Due to the capacity of p130Cas/BCAR1, as an adaptor protein, to interact with multiple partners and to be regulated by phosphorylation and dephosphorylation, its expression and phosphorylation can lead to a wide range of functional consequences. Among the regulators of p130Cas/BCAR1 tyrosine phosphorylation, receptor tyrosine kinases (RTKs) and integrins play a prominent role. RTK-dependent p130Cas/BCAR1 tyrosine phosphorylation and the subsequent binding of specific downstream signaling molecules modulate cell processes such as actin cytoskeleton remodeling, cell adhesion, proliferation, migration, invasion and survival. Integrin-mediated p130Cas/BCAR1 phosphorylation upon adhesion to the extracellular matrix (ECM) induces downstream signaling that is required for allowing cells to spread and migrate on the ECM. Both RTK and integrin activation affect p130Cas/BCAR1 tyrosine phosphorylation and represent an efficient means by which cells utilize signals coming from growth factors and integrin activation to coordinate cell responses. Additionally, p130Cas/BCAR1 tyrosine phosphorylation on its substrate domain can be induced by cell stretching subsequent to changes in the rigidity of the extracellular matrix, allowing cells to respond to mechanical force changes in the cell environment. Cas-Family p130Cas/BCAR1 is a member of the Cas family (Crk-associated substrate) of adaptor proteins, which is characterized by the presence of multiple conserved motifs for protein–protein interactions, and by extensive tyrosine and serine phosphorylations. The Cas family comprises three other members: NEDD9 (Neural precursor cell expressed, developmentally down-regulated 9, also called Human enhancer of filamentation 1, HEF-1 or Cas-L), EFS (Embryonal Fyn-associated substrate), and CASS4 (Cas scaffolding protein family member 4). These Cas proteins have a high structural homology, characterized by the presence of multiple protein interaction domains and phosphorylation motifs through which Cas family members can recruit effector proteins. However, despite the high degree of similarity, their temporal expression, tissue distribution and functional roles are distinct and not overlapping. 
Notably, the knock-out of p130Cas/BCAR1 in mice is embryonic lethal, suggesting that other family members do not show an overlapping role in development. Structure p130Cas/BCAR1 is a scaffold protein characterized by several structural domains. It possesses an amino-terminal Src-homology 3 (SH3) domain, followed by a proline-rich domain (PRR) and a substrate domain (SD). The substrate domain consists of 15 repeats of the YxxP consensus phosphorylation motif for Src family kinases (SFKs). Following the substrate domain is the serine-rich domain, which forms a four-helix bundle. This acts as a protein-interaction motif, similar to those found in other adhesion-related proteins such as focal adhesion kinase (FAK) and vinculin. The remaining carboxy-terminal sequence contains a bipartite Src-binding domain (residues 681–713) able to bind both the SH2 and SH3 domains of Src. p130Cas/BCAR1 can undergo extensive changes in tyrosine phosphorylation that occur predominantly in the 15 YxxP repeats within the substrate domain and represent the major post-translational modification of p130Cas/BCAR1. p130Cas/BCAR1 tyrosine phosphorylation can result from a diverse range of extracellular stimuli, including growth factors, integrin activation, vasoactive hormones and peptide ligands for G-protein coupled receptors. These stimuli trigger p130Cas/BCAR1 tyrosine phosphorylation and its translocation from the cytosol to the cell membrane. Clinical significance Given the ability of the p130Cas/BCAR1 scaffold protein to convey and integrate different types of signals and subsequently to regulate key cellular functions such as adhesion, migration, invasion, proliferation and survival, the existence of a strong correlation between deregulated p130Cas/BCAR1 expression and cancer was inferred. Deregulated expression of p130Cas/BCAR1 has been identified in several cancer types. Altered levels of p130Cas/BCAR1 expression in cancers can result from gene amplification, transcription upregulation or changes in protein stability. Overexpression of p130Cas/BCAR1 has been detected in human breast cancer, prostate cancer, ovarian cancer, lung cancer, colorectal cancer, hepatocellular carcinoma, glioma, melanoma, anaplastic large cell lymphoma and chronic myelogenous leukaemia. The presence of aberrant levels of hyperphosphorylated p130Cas/BCAR1 strongly promotes cell proliferation, migration, invasion, survival, angiogenesis and drug resistance. It has been demonstrated that high levels of p130Cas/BCAR1 expression in breast cancer correlate with worse prognosis, an increased probability of developing metastasis, and resistance to therapy. Conversely, lowering the amount of p130Cas/BCAR1 expression in ovarian, breast and prostate cancer is sufficient to block tumor growth and progression of cancer cells. p130Cas/BCAR1 has potential uses as a diagnostic and prognostic marker for some human cancers. Since lowering p130Cas/BCAR1 in tumor cells is sufficient to halt their transformation and progression, it is conceivable that p130Cas/BCAR1 may represent a therapeutic target. However, the non-catalytic nature of p130Cas/BCAR1 makes it difficult to develop specific inhibitors. Notes References Further reading External links Bcar1 Info with links in the Cell Migration Gateway Proteins
BCAR1
[ "Chemistry" ]
1,576
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
14,326,078
https://en.wikipedia.org/wiki/Actin%2C%20cytoplasmic%202
Actin, cytoplasmic 2, or gamma-actin is a protein that in humans is encoded by the ACTG1 gene. Gamma-actin is widely expressed in cellular cytoskeletons of many tissues; in adult striated muscle cells, gamma-actin is localized to Z-discs and costamere structures, which are responsible for force transduction and transmission in muscle cells. Mutations in ACTG1 have been associated with nonsyndromic hearing loss and Baraitser-Winter syndrome, as well as susceptibility of adolescent patients to vincristine toxicity. Structure Human gamma-actin is 41.8 kDa in molecular weight and 375 amino acids in length. Actins are highly conserved proteins that are involved in various types of cell motility, and maintenance of the cytoskeleton. In vertebrates, three main groups of actin paralogs, alpha, beta, and gamma, have been identified. The alpha actins are found in muscle tissues and are a major constituent of the sarcomere contractile apparatus. The beta and gamma actins co-exist in most cell types as components of the cytoskeleton, and as mediators of internal cell motility. Actin, gamma 1, encoded by this gene, is found in non-muscle cells in the cytoplasm, and in muscle cells at costamere structures, or transverse points of cell-cell adhesion that run perpendicular to the long axis of myocytes. Function In myocytes, sarcomeres adhere to the sarcolemma via costameres, which align at Z-discs and M-lines. The two primary cytoskeletal components of costameres are desmin intermediate filaments and gamma-actin microfilaments. It has been shown that gamma-actin interacting with another costameric protein dystrophin is critical for costameres forming mechanically strong links between the cytoskeleton and the sarcolemmal membrane. Additional studies have shown that gamma-actin colocalizes with alpha-actinin and GFP-labeled gamma actin localized to Z-discs, whereas GFP-alpha-actin localized to pointed ends of thin filaments, indicating that gamma actin specifically localizes to Z-discs in striated muscle cells. During development of myocytes, gamma actin is thought to play a role in the organization and assembly of developing sarcomeres, evidenced in part by its early colocalization with alpha-actinin. Gamma-actin is eventually replaced by sarcomeric alpha-actin isoforms, with low levels of gamma-actin persisting in adult myocytes which associate with Z-disc and costamere domains. Insights into the function of gamma-actin in muscle have come from studies employing transgenesis. In a skeletal muscle-specific knockout of gamma-actin in mice, these animals showed no detectable abnormalities in development; however, knockout mice showed muscle weakness and fiber necrosis, along with decreased isometric twitch force, disrupted intrafibrillar and interfibrillar connections among myocytes, and myopathy. Clinical significance An autosomal dominant mutation in ACTG1 in the DFNA20/26 locus at 17q25-qter was identified in patients with hearing loss. A Thr278Ile mutation was identified in helix 9 of gamma-actin protein, which is predicted to alter protein structure. This study identified the first disease causing mutation in gamma-actin and underlies the importance of gamma-actin as structural elements of the inner ear hair cells. Since then, other ACTG1 mutations have been linked to nonsyndromic hearing loss, including Met305Thr. 
A missense mutation in ACTG1 at Ser155Phe has also been identified in patients with Baraitser-Winter syndrome, which is a developmental disorder characterized by congenital ptosis, excessively-arched eyebrows, hypertelorism, ocular colobomata, lissencephaly, short stature, seizures and hearing loss. Differential expression of ACTG1 mRNA was also identified in patients with Sporadic Amyotrophic Lateral Sclerosis, a devastating disease with unknown causality, using a sophisticated bioinformatics approach employing Affymetrix long-oligonucleotide BaFL methods. Single nucleotide polymorphisms in ACTG1 have been associated with vincristine toxicity, which is part of the standard treatment regimen for childhood acute lymphoblastic leukemia. Neurotoxicity was more frequent in patients that were ACTG1 Gly310Ala mutation carriers, suggesting that this may play a role in patient outcomes from vincristine treatment. Interactions ACTG1 has been shown to interact with: CAP1, DMD, TMSB4X, and Plectin. See also Actin References External links Further reading Proteins
Actin, cytoplasmic 2
[ "Chemistry" ]
1,022
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
14,326,527
https://en.wikipedia.org/wiki/Flying%20probe
Flying probes are test probes used for testing both bare circuit boards and boards loaded with components. Flying probes were introduced in the late 1980s and can be found in many manufacturing and assembly operations, most often in the manufacturing of electronic printed circuit boards. A flying probe tester uses one or more test probes to make contact with the circuit board under test; the probes are moved from place to place on the circuit board to carry out tests of multiple conductors or components. Flying probe testers are a more flexible alternative to bed-of-nails testers, which use multiple contacts to simultaneously contact the board and which rely on electrical switching to carry out measurements. One limitation of flying probe test methods is the speed at which measurements can be taken; the probes must be moved to each new test site on the board, and then a measurement must be completed. Bed-of-nails testers touch each test point simultaneously, and electronic switching of instruments between test pins is more rapid than movement of probes. Manufacturing a bed-of-nails tester, however, is more costly. Bare board Loaded board in-circuit test In the testing of printed circuit boards, a flying probe test or fixtureless in-circuit test (FICT) system may be used for testing low to mid volume production, prototypes, and boards that present accessibility problems. A traditional "bed of nails" tester for testing a PCB requires a custom fixture to hold the PCBA and the pogo pins which make contact with the PCBA. In contrast, FICT uses two or more flying probes, which may be moved based on software instruction. The flying probes are electro-mechanically controlled to access components on printed circuit assemblies (PCAs). The probes are moved around the board under test using an automatically operated two-axis system, and one or more test probes contact components of the board or test points on the printed circuit board. The main advantage of flying probe testing is that the substantial cost of a bed-of-nails fixture, on the order of US $20,000, is not required. The flying probes also allow easy modification of the test setup when the PCBA design changes. FICT may be used on both bare and assembled PCBs. However, since the tester makes measurements serially, instead of making many measurements at once, the test cycle may become much longer than for a bed-of-nails fixture. A test cycle that may take 30 seconds on such a system may take an hour with flying probes. Test coverage may not be as comprehensive as with a bed-of-nails tester (assuming similar net access for each), because fewer points are tested at one time. References electronic test equipment hardware testing nondestructive testing
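The serial nature of flying-probe measurement dominates total test time. As a rough, hedged illustration of the 30-second versus one-hour comparison above, the Python sketch below models each flying-probe test as a probe move plus a measurement, and each bed-of-nails test as an electronic switch plus a measurement; the per-test timings and the test count are illustrative assumptions, not figures from any particular machine.

def flying_probe_time(num_tests, move_s=2.0, measure_s=0.4):
    # Serial testing: each measurement first requires repositioning the probes.
    return num_tests * (move_s + measure_s)

def bed_of_nails_time(num_tests, switch_s=0.001, measure_s=0.02):
    # All points are contacted at once; each test only needs electronic switching.
    return num_tests * (switch_s + measure_s)

n = 1500  # hypothetical number of nets/components to test
print(f"flying probe: ~{flying_probe_time(n) / 60:.0f} minutes")
print(f"bed of nails: ~{bed_of_nails_time(n):.0f} seconds")

With these assumed parameters the sketch reproduces the order of magnitude quoted above (about an hour versus about half a minute); real figures depend on probe speed, board size and test coverage.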
Flying probe
[ "Materials_science", "Technology", "Engineering" ]
560
[ "Nondestructive testing", "Materials testing", "Electronic test equipment", "Measuring instruments" ]
14,326,894
https://en.wikipedia.org/wiki/Iris%20Bay%20%28Dubai%29
The Iris Bay is a 32-floor commercial tower in the Business Bay in Dubai, United Arab Emirates, known for "its oval, crescent moon type shape." The tower has a total structural height of 170 m (558 ft). Construction of the Iris Bay was expected to be completed in 2008, but progress stopped in 2011. The building was eventually completed in 2015. The tower is designed in the shape of an ovoid and comprises two identical double-curved pixelated shells which are rotated and cantilevered over the podium. The rear elevation is a continuous vertical curve punctuated by balconies, while the front elevation is made up of seven zones of rotated glass. The podium comprises four stories with a double-height ground level and houses retail and commercial space totaling 36,000 m2. See also List of buildings in Dubai Notes External links Buildings and structures under construction in Dubai High-tech architecture Postmodern architecture Skyscraper office buildings in Dubai
Iris Bay (Dubai)
[ "Engineering" ]
189
[ "Postmodern architecture", "Architecture" ]
14,331,278
https://en.wikipedia.org/wiki/Hildebrand%20solubility%20parameter
The Hildebrand solubility parameter (δ) provides a numerical estimate of the degree of interaction between materials and can be a good indication of solubility, particularly for nonpolar materials such as many polymers. Materials with similar values of δ are likely to be miscible. Definition The Hildebrand solubility parameter is the square root of the cohesive energy density: δ = √c = ((ΔHv − RT) / Vm)^(1/2), where c is the cohesive energy density, ΔHv the heat of vaporization, R the gas constant, T the temperature and Vm the molar volume. The cohesive energy density is the amount of energy needed to completely remove a unit volume of molecules from their neighbours to infinite separation (an ideal gas). This is equal to the heat of vaporization of the compound divided by its molar volume in the condensed phase. In order for a material to dissolve, these same interactions need to be overcome, as the molecules are separated from each other and surrounded by the solvent. In 1936 Joel Henry Hildebrand suggested the square root of the cohesive energy density as a numerical value indicating solvency behavior. This later became known as the "Hildebrand solubility parameter". Materials with similar solubility parameters will be able to interact with each other, resulting in solvation, miscibility or swelling. Uses and limitations Its principal utility is that it provides simple predictions of phase equilibrium based on a single parameter that is readily obtained for most materials. These predictions are often useful for nonpolar and slightly polar (dipole moment < 2 debyes) systems without hydrogen bonding. It has found particular use in predicting solubility and swelling of polymers by solvents. More complicated three-dimensional solubility parameters, such as Hansen solubility parameters, have been proposed for polar molecules. The principal limitation of the solubility parameter approach is that it applies only to associated solutions ("like dissolves like" or, technically speaking, positive deviations from Raoult's law); it cannot account for negative deviations from Raoult's law that result from effects such as solvation or the formation of electron donor–acceptor complexes. Like any simple predictive theory, it can inspire overconfidence; it is best used for screening, with data used to verify the predictions. Units The conventional units for the solubility parameter are (calories per cm³)^(1/2), or cal^(1/2) cm^(−3/2). The SI units are J^(1/2) m^(−3/2), equivalent to Pa^(1/2). 1 calorie is equal to 4.184 J, so 1 cal^(1/2) cm^(−3/2) = (4.184 J)^(1/2) (0.01 m)^(−3/2) = 2.045483 × 10³ J^(1/2) m^(−3/2) = 2.045483 (10⁶ J/m³)^(1/2) = 2.045483 MPa^(1/2). Given the non-exact nature of the use of δ, it is often sufficient to say that the number in MPa^(1/2) is about twice the number in cal^(1/2) cm^(−3/2). Where the units are not given, for example in older books, it is usually safe to assume the non-SI unit. Examples Poly(ethylene) has a solubility parameter of 7.9 cal^(1/2) cm^(−3/2), so good solvents are likely to be diethyl ether and hexane. (However, PE only dissolves at temperatures well above 100 °C.) Poly(styrene) has a solubility parameter of 9.1 cal^(1/2) cm^(−3/2), and thus ethyl acetate is likely to be a good solvent. Nylon 6,6 has a solubility parameter of 13.7 cal^(1/2) cm^(−3/2), and ethanol is likely to be the best solvent of those considered. However, the latter is polar, and thus we should be very cautious about using just the Hildebrand solubility parameter to make predictions. See also Solvent Hansen solubility parameters References Notes Bibliography External links Abboud J.-L. M., Notario R. (1999) Critical compilation of scales of solvent parameters. part I. 
pure, non-hydrogen bond donor solvents – technical report. Pure Appl. Chem. 71(4), 645–718 (IUPAC document with large table (1b) of Hildebrand solubility parameter (δH)) Polymer chemistry 1936 introductions
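A brief worked sketch in Python of the arithmetic above: the unit conversion from cal^(1/2) cm^(−3/2) to MPa^(1/2), and the kind of "closest δ" solvent matching described in the Examples section. The polymer values are those quoted in the article; the solvent values are approximate and included only for illustration.

import math

CAL_TO_J = 4.184  # one thermochemical calorie in joules

def cal_to_mpa(delta_cal):
    # 1 cal/cm^3 = 4.184e6 J/m^3 = 4.184 MPa, so the square-root unit
    # scales by sqrt(4.184) ~= 2.045, matching the conversion in the text.
    return delta_cal * math.sqrt(CAL_TO_J)

# Polymer values quoted in the article (cal^(1/2) cm^(-3/2));
# solvent values below are approximate and purely illustrative.
polymers = {"poly(ethylene)": 7.9, "poly(styrene)": 9.1, "nylon 6,6": 13.7}
solvents = {"hexane": 7.3, "diethyl ether": 7.6, "ethyl acetate": 9.1, "ethanol": 12.9}

for name, delta in polymers.items():
    closest = min(solvents, key=lambda s: abs(solvents[s] - delta))
    print(f"{name}: δ ≈ {cal_to_mpa(delta):.1f} MPa^(1/2); closest solvent: {closest}")

Run as-is, this reproduces the pairings suggested in the Examples section (ether/hexane for polyethylene, ethyl acetate for polystyrene, ethanol for nylon 6,6), with the usual caveat that a single-parameter match ignores polarity and hydrogen bonding.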
Hildebrand solubility parameter
[ "Chemistry", "Materials_science", "Engineering" ]
933
[ "Materials science", "Polymer chemistry" ]
14,331,851
https://en.wikipedia.org/wiki/Boundedly%20generated%20group
In mathematics, a group is called boundedly generated if it can be expressed as a finite product of cyclic subgroups. The property of bounded generation is also closely related to the congruence subgroup problem. Definitions A group G is called boundedly generated if there exists a finite subset S of G and a positive integer m such that every element g of G can be represented as a product of at most m powers of the elements of S: g = s1^k1 s2^k2 ··· sm^km, where s1, …, sm are in S and k1, …, km are integers. The finite set S generates G, so a boundedly generated group is finitely generated. An equivalent definition can be given in terms of cyclic subgroups. A group G is called boundedly generated if there is a finite family C1, …, CM of not necessarily distinct cyclic subgroups such that G = C1…CM as a set. Properties Bounded generation is unaffected by passing to a subgroup of finite index: if H is a finite index subgroup of G then G is boundedly generated if and only if H is boundedly generated. Bounded generation passes to extensions: if a group G has a normal subgroup N such that both N and G/N are boundedly generated, then so is G itself. Any quotient group of a boundedly generated group is also boundedly generated. A finitely generated torsion group must be finite if it is boundedly generated; equivalently, an infinite finitely generated torsion group is not boundedly generated. A pseudocharacter on a discrete group G is defined to be a real-valued function f on G such that f(gh) − f(g) − f(h) is uniformly bounded and f(g^n) = n·f(g). The vector space of pseudocharacters of a boundedly generated group G is finite-dimensional. Examples If n ≥ 3, the group SLn(Z) is boundedly generated by its elementary subgroups, formed by matrices differing from the identity matrix only in one off-diagonal entry. In 1984, Carter and Keller gave an elementary proof of this result, motivated by a question in algebraic K-theory. A free group on at least two generators is not boundedly generated (see below). The group SL2(Z) is not boundedly generated, since it contains a free subgroup with two generators of index 12. A Gromov-hyperbolic group is boundedly generated if and only if it is virtually cyclic (or elementary), i.e. contains a cyclic subgroup of finite index. Free groups are not boundedly generated Several authors have stated in the mathematical literature that it is obvious that finitely generated free groups are not boundedly generated. This section contains various obvious and less obvious ways of proving this. Some of the methods, which touch on bounded cohomology, are important because they are geometric rather than algebraic, so can be applied to a wider class of groups, for example Gromov-hyperbolic groups. Since for any n ≥ 2, the free group on 2 generators F2 contains the free group on n generators Fn as a subgroup of finite index (in fact n − 1), once one non-cyclic free group on finitely many generators is known to be not boundedly generated, this will be true for all of them. Similarly, since SL2(Z) contains F2 as a subgroup of index 12, it is enough to consider SL2(Z). In other words, to show that no Fn with n ≥ 2 has bounded generation, it is sufficient to prove this for one of them or even just for SL2(Z). Burnside counterexamples Since bounded generation is preserved under taking homomorphic images, if a single finitely generated group with at least two generators is known to be not boundedly generated, this will be true for the free group on the same number of generators, and hence for all free groups. 
To show that no (non-cyclic) free group has bounded generation, it is therefore enough to produce one example of a finitely generated group which is not boundedly generated, and any finitely generated infinite torsion group will work. The existence of such groups constitutes Golod and Shafarevich's negative solution of the generalized Burnside problem in 1964; later, other explicit examples of infinite finitely generated torsion groups were constructed by Aleshin, Olshanskii, and Grigorchuk, using automata. Consequently, free groups of rank at least two are not boundedly generated. Symmetric groups The symmetric group Sn can be generated by two elements, a 2-cycle and an n-cycle, so that it is a quotient group of F2. On the other hand, it is easy to show that the maximal order M(n) of an element in Sn satisfies log M(n) ≤ n/e, where e is Euler's number (Edmund Landau proved the more precise asymptotic estimate log M(n) ~ (n log n)^(1/2)). In fact, if the cycles in a cycle decomposition of a permutation have lengths N1, ..., Nk with N1 + ··· + Nk = n, then the order of the permutation divides the product N1 ··· Nk, which in turn is bounded by (n/k)^k, using the inequality of arithmetic and geometric means. On the other hand, (n/x)^x is maximized when x = n/e. If F2 could be written as a product of m cyclic subgroups, then necessarily n! would have to be less than or equal to M(n)^m for all n, contradicting Stirling's asymptotic formula (a small numerical illustration of this comparison appears below). Hyperbolic geometry There is also a simple geometric proof that G = SL2(Z) is not boundedly generated. It acts by Möbius transformations on the upper half-plane H, with the Poincaré metric. Any compactly supported 1-form α on a fundamental domain of G extends uniquely to a G-invariant 1-form on H. If z is in H and γ is the geodesic from z to g(z), the function defined by F(g) = ∫γ α satisfies the first condition for a pseudocharacter, since by the Stokes theorem F(gh) − F(g) − F(h) = ∫Δ dα, where Δ is the geodesic triangle with vertices z, g(z) and h−1(z), and geodesic triangles have area bounded by π. The homogenized function fα(g) = lim F(g^n)/n (as n tends to ∞) defines a pseudocharacter, depending only on α. As is well known from the theory of dynamical systems, any orbit (g^k(z)) of a hyperbolic element g has limit set consisting of two fixed points on the extended real axis; it follows that the geodesic segment from z to g(z) cuts through only finitely many translates of the fundamental domain. It is therefore easy to choose α so that fα equals one on a given hyperbolic element and vanishes on a finite set of other hyperbolic elements with distinct fixed points. Since G therefore has an infinite-dimensional space of pseudocharacters, it cannot be boundedly generated. Dynamical properties of hyperbolic elements can similarly be used to prove that any non-elementary Gromov-hyperbolic group is not boundedly generated. Brooks pseudocharacters Robert Brooks gave a combinatorial scheme to produce pseudocharacters of any free group Fn; this scheme was later shown to yield an infinite-dimensional family of pseudocharacters. Epstein and Fujiwara later extended these results to all non-elementary Gromov-hyperbolic groups. Gromov boundary This simple folklore proof uses dynamical properties of the action of hyperbolic elements on the Gromov boundary of a Gromov-hyperbolic group. For the special case of the free group Fn, the boundary (or space of ends) can be identified with the space X of semi-infinite reduced words g1 g2 ··· in the generators and their inverses. 
It gives a natural compactification of the tree, given by the Cayley graph with respect to the generators. A sequence of semi-infinite words converges to another such word provided that the initial segments agree after a certain stage, so that X is compact (and metrizable). The free group acts by left multiplication on the semi-infinite words. Moreover, any element g in Fn has exactly two fixed points g^±∞, namely the reduced infinite words given by the limits of g^n as n tends to ±∞. Furthermore, g^n·w tends to g^±∞ as n tends to ±∞ for any semi-infinite word w; and more generally, if wn tends to w ≠ g^±∞, then g^n·wn tends to g^+∞ as n tends to ∞. If Fn were boundedly generated, it could be written as a product of cyclic groups Ci generated by elements hi. Let X0 be the countable subset given by the finitely many Fn-orbits of the fixed points hi^±∞, the fixed points of the hi and all their conjugates. Since X is uncountable, there is an element g with fixed points outside X0 and a point w outside X0 different from these fixed points. Then for some subsequence (gm) of (g^n), gm = h1^n(m,1) ··· hk^n(m,k), with each n(m,i) constant or strictly monotone. On the one hand, by successive use of the rules for computing limits of the form h^n·wn, the limit of the right-hand side applied to w is necessarily a fixed point of one of the conjugates of the hi's. On the other hand, this limit also must be g^+∞, which is not one of these points, a contradiction. References (see pages 222–229, also available on the Cornell archive). Group theory Geometric group theory
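The symmetric-groups argument above compares n! with M(n)^m. As a hedged numerical sketch (not from the article), the short Python script below uses the stated bound log M(n) ≤ n/e and finds, for a few fixed values of m, the first n at which log(n!) exceeds m·n/e, i.e. the point past which n! can no longer be bounded by M(n)^m.

import math

def log_factorial(n):
    # log(n!) computed via the log-gamma function
    return math.lgamma(n + 1)

def crossover(m):
    # Smallest n with log(n!) > m * n / e, the stated upper bound for log(M(n)^m).
    n = 1
    while log_factorial(n) <= m * n / math.e:
        n += 1
    return n

for m in (2, 3, 5, 10):
    print(f"m = {m}: first n with log(n!) > m*n/e is {crossover(m)}")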
Boundedly generated group
[ "Physics", "Mathematics" ]
2,076
[ "Geometric group theory", "Group actions", "Group theory", "Fields of abstract algebra", "Symmetry" ]
14,334,415
https://en.wikipedia.org/wiki/Grzegorczyk%20hierarchy
The Grzegorczyk hierarchy, named after the Polish logician Andrzej Grzegorczyk, is a hierarchy of functions used in computability theory. Every function in the Grzegorczyk hierarchy is a primitive recursive function, and every primitive recursive function appears in the hierarchy at some level. The hierarchy deals with the rate at which the values of the functions grow; intuitively, functions in lower levels of the hierarchy grow more slowly than functions in the higher levels. Definition First we introduce an infinite set of functions, denoted Ei for some natural number i. We define E0(x, y) = x + y, the addition function, and E1(x) = x^2 + 2, a unary function which squares its argument and adds two. Then, for each n greater than 1, En(x) = En−1^x(2), i.e. the x-th iterate of En−1 evaluated at 2 (a short computational sketch of these functions appears below). From these functions we define the Grzegorczyk hierarchy. ℰ^n, the n-th set in the hierarchy, contains the following functions: Ek for k < n; the zero function (Z(x) = 0); the successor function (S(x) = x + 1); the projection functions (pi(t1, …, tm) = ti); the (generalized) compositions of functions in the set (if h, g1, g2, ... and gm are in ℰ^n, then f(ū) = h(g1(ū), g2(ū), ..., gm(ū)) is as well); and the results of limited (primitive) recursion applied to functions in the set (if g, h and j are in ℰ^n and f(t, ū) ≤ j(t, ū) for all t and ū, and further f(0, ū) = g(ū) and f(t + 1, ū) = h(t, ū, f(t, ū)), then f is in ℰ^n as well). In other words, ℰ^n is the closure of the set of basic functions just listed with respect to function composition and limited recursion (as defined above). Properties These sets clearly form the hierarchy ℰ^0 ⊆ ℰ^1 ⊆ ℰ^2 ⊆ ··· because they are closures over nested sets of basic functions. They are strict subsets; in other words ℰ^n ⊊ ℰ^(n+1), because the hyperoperation Hn+1 is in ℰ^(n+1) but not in ℰ^n. ℰ^0 includes functions such as x+1, x+2, ... Every unary function f(x) in ℰ^0 is upper bounded by some x+n. However, ℰ^0 also includes more complicated functions like x∸1, x∸y (where ∸ is the monus sign defined as x∸y = max(x−y, 0)), etc. ℰ^1 provides all addition functions, such as x+y, 4x, ... ℰ^2 provides all multiplication functions, such as xy, x^4. ℰ^3 provides all exponentiation functions, such as x^y, 2^2^2^x, and is exactly the elementary recursive functions. ℰ^4 provides all tetration functions, and so on. Notably, both the function U and the characteristic function of the predicate T from the Kleene normal form theorem are definable in a way such that they lie at level ℰ^0 of the Grzegorczyk hierarchy. This implies in particular that every recursively enumerable set is enumerable by some ℰ^0-function. Relation to primitive recursive functions The definition of ℰ^n is the same as that of the primitive recursive functions, PR, except that recursion is limited (f(t, ū) ≤ j(t, ū) for some j in ℰ^n) and the functions Ek for k < n are explicitly included in ℰ^n. Thus the Grzegorczyk hierarchy can be seen as a way to limit the power of primitive recursion to different levels. It is clear from this fact that all functions in any level of the Grzegorczyk hierarchy are primitive recursive functions (i.e. ℰ^n ⊆ PR) and thus ⋃n ℰ^n ⊆ PR. It can also be shown that all primitive recursive functions are in some level of the hierarchy, thus ⋃n ℰ^n = PR, and the sets ℰ^0, ℰ^1 ∖ ℰ^0, ℰ^2 ∖ ℰ^1, ... partition the set of primitive recursive functions, PR. Meyer and Ritchie introduced another hierarchy subdividing the primitive recursive functions, based on the nesting depth of loops needed to write a LOOP program that computes the function. For a natural number i, let Li denote the set of functions computable by a LOOP program with LOOP and END commands nested no deeper than i levels. Fachini and Maggiolo-Schettini showed that Li coincides with ℰ^(i+1) for all integers i ≥ 2. Extensions The Grzegorczyk hierarchy can be extended to transfinite ordinals. Such extensions define a fast-growing hierarchy. 
To do this, the generating functions must be recursively defined for limit ordinals (note they have already been recursively defined for successor ordinals by the relation Eα+1(x) = Eα^x(2)). If there is a standard way of defining a fundamental sequence λ[0], λ[1], λ[2], ..., whose limit ordinal is λ, then the generating functions can be defined by Eλ(x) = Eλ[x](x). However, this definition depends upon a standard way of defining the fundamental sequence; such a standard assignment is available for all ordinals α < ε0. The original extension was due to Martin Löb and Stan S. Wainer and is sometimes called the Löb–Wainer hierarchy. See also ELEMENTARY Fast-growing hierarchy Ordinal analysis Notes References Bibliography Computability theory Hierarchy of functions
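A minimal Python sketch of the generating functions En as defined above, to make the growth rates concrete. Only very small arguments are feasible: E2 already grows doubly exponentially, and E3(2) is far too large to compute. The function below is a direct transcription of the recursive definition, not an optimized implementation.

def E(n, x, y=None):
    # E0 is binary addition; E1 squares its argument and adds two;
    # for n > 1, En(x) is the x-th iterate of E(n-1) evaluated at 2.
    if n == 0:
        return x + y
    if n == 1:
        return x * x + 2
    value = 2
    for _ in range(x):
        value = E(n - 1, value)
    return value

print(E(0, 3, 4))  # 7
print(E(1, 3))     # 11
print(E(2, 3))     # 1446 (= E1(E1(E1(2))))
print(E(3, 1))     # 38   (= E2(2))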
Grzegorczyk hierarchy
[ "Mathematics" ]
979
[ "Computability theory", "Mathematical logic" ]
4,047,274
https://en.wikipedia.org/wiki/Gold-containing%20drugs
Gold-containing drugs are pharmaceuticals that contain gold. Sometimes these species are referred to as "gold salts". "Chrysotherapy" and "aurotherapy" are the applications of gold compounds to medicine. Research on the medicinal effects of gold began in 1935, primarily to reduce inflammation and to slow disease progression in patients with rheumatoid arthritis. The use of gold compounds has decreased since the 1980s because of numerous side effects and monitoring requirements, limited efficacy, and very slow onset of action. Most chemical compounds of gold, including some of the drugs discussed below, are not salts, but are examples of metal thiolate complexes. Use in rheumatoid arthritis Investigation of medical applications of gold began at the end of the 19th century, when gold cyanide demonstrated efficacy in treating Mycobacterium tuberculosis in vitro. Indications The use of injected gold compound is indicated for rheumatoid arthritis. Its uses have diminished with the advent of newer compounds such as methotrexate and because of numerous side effects. The efficacy of orally administered gold is more limited than injecting the gold compounds. Mechanism in arthritis The mechanism by which gold drugs affect arthritis is unknown. Administration Gold-containing drugs for rheumatoid arthritis are administered by intramuscular injection but can also be administered orally (although the efficacy is low). Regular urine tests to check for protein, indicating kidney damage, and blood tests are required. Efficacy A 1997 review (Suarez-Almazor ME, et al) reports that treatment with intramuscular gold (parenteral gold) reduces disease activity and joint inflammation. Gold-containing drugs taken by mouth are less effective than by injection. Three to six months are often required before gold treatment noticeably improves symptoms. Side effects Chrysiasis A noticeable side-effect of gold-based therapy is skin discoloration, in shades of mauve to a purplish dark grey when exposed to sunlight. Skin discoloration occurs when gold salts are taken on a regular basis over a long period of time. Excessive intake of gold salts while undergoing chrysotherapy results – through complex redox processes – in the saturation by relatively stable gold compounds of skin tissue and organs (as well as teeth and ocular tissue in extreme cases) in a condition known as chrysiasis. This condition is similar to argyria, which is caused by exposure to silver salts and colloidal silver. Chrysiasis can ultimately lead to acute kidney injury (such as tubular necrosis, nephrosis, glomerulitis), severe heart conditions, and hematologic complications (leukopenia, anemia). While some effects can be healed with moderate success, the skin discoloration is considered permanent. Other side effects Other side effects of gold-containing drugs include kidney damage, itching rash, and ulcerations of the mouth, tongue, and pharynx. Approximately 35% of patients discontinue the use of gold salts because of these side effects. Kidney function must be monitored continuously while taking gold compounds. Types Disodium aurothiomalate Sodium aurothiosulfate (Gold sodium thiosulfate) Sodium aurothiomalate (Gold sodium thiomalate) (UK) Auranofin (UK & US) Aurothioglucose (Gold thioglucose) (US) References External links "Gold salts for juvenile rheumatoid arthritis". BCHealthGuide.org "Gold salts information". 
DiseasesDatabase.com "HMS researchers find how gold fights arthritis: Sheds light on how medicinal metal function against rheumatoid arthritis and other autoimmune diseases." Harvard University Gazette (2006) "Aurothioglucose is a gold salt used in treating inflammatory arthritis". MedicineNet.com "About gold treatment: What is it? Gold treatment includes different forms of gold salts used to treat arthritis." Washington.edu University of Washington (December 30, 2004) Gold compounds Hepatotoxins Antirheumatic products Coordination complexes Nephrotoxins
Gold-containing drugs
[ "Chemistry" ]
854
[ "Coordination chemistry", "Coordination complexes" ]
4,047,871
https://en.wikipedia.org/wiki/Hexachlorobenzene
Hexachlorobenzene, or perchlorobenzene, is an aryl chloride and a six-substituted chlorobenzene with the molecular formula C6Cl6. It is a fungicide formerly used as a seed treatment, especially on wheat to control the fungal disease bunt. Its use has been banned globally under the Stockholm Convention on Persistent Organic Pollutants. Physical and chemical properties Hexachlorobenzene is a stable, white, crystalline chlorinated hydrocarbon. It is sparingly soluble in organic solvents such as benzene, diethyl ether and alcohol, but practically insoluble in water, with no reaction. It has a flash point of 468 °F (242 °C) and is stable under normal temperatures and pressures. It is combustible, but it does not ignite readily. When heated to decomposition, hexachlorobenzene emits highly toxic fumes of hydrochloric acid, other chlorinated compounds (such as phosgene), carbon monoxide, and carbon dioxide. History Hexachlorobenzene was first known as "Julin's chloride of carbon", as it was discovered as a strange and unexpected product of impurities reacting in Julin's nitric acid factory. In 1864, Hugo Müller synthesised the compound by the reaction of benzene and antimony pentachloride; he then suggested that his compound was the same as Julin's chloride of carbon. Müller, who had previously also believed it to be the same compound as Michael Faraday's "perchloride of carbon" (hexachloroethane), obtained a small sample of Julin's chloride of carbon to send to Richard Phillips and Faraday for investigation. In 1867, Henry Bassett proved that the compound produced from benzene and antimony was the same as Julin's carbon chloride and named it "hexachlorobenzene". Leopold Gmelin named it "dichloride of carbon" and claimed that the carbon was derived from cast iron and the chlorine from crude saltpetre. Victor Regnault obtained hexachlorobenzene from the decomposition of chloroform and tetrachloroethylene vapours passed through a red-hot tube. Synthesis Large-scale manufacture for use as a fungicide was developed by using the residue remaining after purification of the mixture of isomers of hexachlorocyclohexane, from which the insecticide lindane (the γ-isomer) had been removed, leaving the unwanted α- and β-isomers. This mixture is produced when benzene is reacted with chlorine in the presence of ultraviolet light (e.g. from sunlight). However, manufacture is no longer practiced following the compound's ban. Hexachlorobenzene has been made on a laboratory scale since the 1890s, by the electrophilic aromatic substitution reaction of chlorine with benzene or chlorobenzenes. A typical catalyst is ferric chloride. Much milder reagents than chlorine (e.g. dichlorine monoxide, iodine in chlorosulfonic acid) also suffice, and the various hexachlorocyclohexanes can substitute for benzene as well. Usage Hexachlorobenzene was used in agriculture to control the fungus Tilletia caries (common bunt of wheat). It is also effective against Tilletia controversa (dwarf bunt). The compound was introduced in 1947, normally formulated as a seed dressing, but is now banned in many countries. A minor industrial phloroglucinol synthesis nucleophilically substitutes hexachlorobenzene with alkoxides, followed by acidic workup. Environmental considerations In the 1970s, HCB was produced at a level of 100,000 tons/y. Since then usage has declined steadily, production being 23–90 tons/y by the mid-1990s. The half-life in the soil is estimated to be 9 years. The mechanism of its toxicity and other adverse effects remains under study. 
Safety Hexachlorobenzene can react violently with dimethylformamide, particularly in the presence of catalytic transition-metal salts. Toxicology Oral LD50 (rat): 10,000 mg/kg Oral LD50 (mice): 4,000 mg/kg Inhalation LC50 (rat): 3600 mg/m3 The material has relatively low acute toxicity but is hazardous because it is persistent and accumulates in lipid-rich body tissues. Hexachlorobenzene is an animal carcinogen and is considered to be a probable human carcinogen. After its introduction as a fungicide in 1945, for crop seeds, this toxic chemical was found in all food types. Hexachlorobenzene was banned from use in the United States in 1966. This material has been classified by the International Agency for Research on Cancer (IARC) as a Group 2B carcinogen (possibly carcinogenic to humans). Animal carcinogenicity data for hexachlorobenzene show increased incidences of liver, kidney (renal tubular tumours) and thyroid cancers. Chronic oral exposure in humans has been shown to give rise to a liver disease (porphyria cutanea tarda), skin lesions with discoloration, ulceration, photosensitivity, thyroid effects, bone effects and loss of hair. Neurological changes have been reported in rodents exposed to hexachlorobenzene. Hexachlorobenzene may cause embryolethality and teratogenic effects. Human and animal studies have demonstrated that hexachlorobenzene crosses the placenta to accumulate in foetal tissues and is transferred in breast milk. HCB is very toxic to aquatic organisms. It may cause long term adverse effects in the aquatic environment. Therefore, release into waterways should be avoided. It is persistent in the environment. Ecological investigations have found that biomagnification up the food chain does occur. Hexachlorobenzene has a half life in the soil of between 3 and 6 years. Risk of bioaccumulation in an aquatic species is high. Anatolian porphyria In Anatolia, Turkey, between 1955 and 1959, during a period when bread wheat was unavailable, 500 people were fatally poisoned and more than 4,000 people fell ill after eating bread made with HCB-treated seed that was intended for agricultural use. Most of the sick were affected with a liver condition called porphyria cutanea tarda, which disturbs the metabolism of hemoglobin and results in skin lesions. Almost all breastfeeding children under the age of two, whose mothers had eaten tainted bread, died from a condition called "pembe yara" or "pink sore", most likely from high doses of HCB in the breast milk. In one mother's breast milk the HCB level was found to be 20 parts per million in lipid, approximately 2,000 times the average levels of contamination found in breast-milk samples around the world. Follow-up studies 20 to 30 years after the poisoning found average HCB levels in breast milk were still more than seven times the average for unexposed women in that part of the world (in 56 specimens of human milk obtained from mothers with porphyria, the average value was 0.51 ppm in HCB-exposed patients compared to 0.07 ppm in unexposed controls), and 150 times the level allowed in cow's milk. In the same follow-up study of 252 patients (162 males and 90 females, avg. current age of 35.7 years), 20–30 years' postexposure, many subjects had dermatologic, neurologic, and orthopedic symptoms and signs. 
The observed clinical findings include scarring of the face and hands (83.7%), hyperpigmentation (65%), hypertrichosis (44.8%), pinched faces (40.1%), painless arthritis (70.2%), small hands (66.6%), sensory shading (60.6%), myotonia (37.9%), cogwheeling (41.9%), enlarged thyroid (34.9%), and enlarged liver (4.8%). Urine and stool porphyrin levels were determined in all patients, and 17 have at least one of the porphyrins elevated. Offspring of mothers with three decades of HCB-induced porphyria appear normal. See also Chlorobenzenes—different numbers of chlorine substituents Pentachlorobenzenethiol References Cited works Additional references International Agency for Research on Cancer. In: IARC Monographs on the Evaluation of Carcinogenic Risk to Humans. World Health Organisation, Vol 79, 2001pp 493–567 Registry of Toxic Effects of Chemical Substances. Ed. D. Sweet, US Dept. of Health & Human Services: Cincinnati, 2005. Environmental Health Criteria No 195; International Programme on Chemical Safety, World health Organization, Geneva, 1997. Toxicological Profile for Hexachlorobenzene (Update), US Dept of Health & Human Services, Sept 2002. Merck Index, 11th Edition, 4600 External links Obsolete pesticides Chlorobenzenes Endocrine disruptors Fungicides Hazardous air pollutants IARC Group 2B carcinogens Persistent organic pollutants under the Stockholm Convention Suspected teratogens Suspected embryotoxicants Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution Perchlorocarbons
Hexachlorobenzene
[ "Chemistry", "Biology" ]
2,013
[ "Fungicides", "Persistent organic pollutants under the Stockholm Convention", "Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution", "Endocrine disruptors", "Biocides" ]
4,048,455
https://en.wikipedia.org/wiki/Mechanically%20interlocked%20molecular%20architectures
In chemistry, mechanically interlocked molecular architectures (MIMAs) are molecules that are connected as a consequence of their topology. This connection of molecules is analogous to keys on a keychain loop. The keys are not directly connected to the keychain loop, but they cannot be separated without breaking the loop. On the molecular level, the interlocked molecules cannot be separated without the breaking of the covalent bonds that comprise the conjoined molecules; this is referred to as a mechanical bond. Examples of mechanically interlocked molecular architectures include catenanes, rotaxanes, molecular knots, and molecular Borromean rings. Work in this area was recognized with the 2016 Nobel Prize in Chemistry to Bernard L. Feringa, Jean-Pierre Sauvage, and J. Fraser Stoddart. The synthesis of such entangled architectures has been made efficient by combining supramolecular chemistry with traditional covalent synthesis; however, mechanically interlocked molecular architectures have properties that differ from both "supramolecular assemblies" and "covalently bonded molecules". The terminology "mechanical bond" has been coined to describe the connection between the components of mechanically interlocked molecular architectures. Although research into mechanically interlocked molecular architectures is primarily focused on artificial compounds, many examples have been found in biological systems, including cystine knots, cyclotides and lasso peptides such as microcin J25, which are proteins, and a variety of other peptides. Residual topology Residual topology is a descriptive stereochemical term to classify a number of intertwined and interlocked molecules which cannot be disentangled in an experiment without breaking covalent bonds, while the strict rules of mathematical topology allow such a disentanglement. Examples of such molecules are rotaxanes, catenanes with covalently linked rings (so-called pretzelanes), and open knots (pseudoknots), which are abundant in proteins. The term "residual topology" was suggested on account of a striking similarity of these compounds to the well-established topologically nontrivial species, such as catenanes and knotanes (molecular knots). The idea of residual topological isomerism introduces a handy scheme of modifying the molecular graphs and generalizes former efforts at systemization of mechanically bound and bridged molecules. History Experimentally, the first examples of mechanically interlocked molecular architectures appeared in the 1960s, with catenanes being synthesized by Wasserman and Schill and rotaxanes by Harrison and Harrison. The chemistry of MIMAs came of age when Sauvage pioneered their synthesis using templating methods. In the early 1990s the usefulness and even the existence of MIMAs were challenged. The latter concern was addressed by X-ray crystallographer and structural chemist David Williams. Two postdoctoral researchers who took on the challenge of producing [5]catenane (olympiadane) pushed the boundaries of the complexity of MIMAs that could be synthesized; their success was confirmed in 1996 by a solid-state structure analysis conducted by David Williams. Mechanical bonding and chemical reactivity The introduction of a mechanical bond alters the chemistry of the subcomponents of rotaxanes and catenanes. Steric hindrance of reactive functionalities is increased and the strength of non-covalent interactions between the components is altered. 
Mechanical bonding effects on non-covalent interactions The strength of non-covalent interactions in a mechanically interlocked molecular architecture increases as compared to the non-mechanically bonded analogues. This increased strength is demonstrated by the necessity of harsher conditions to remove a metal template ion from catenanes as opposed to their non-mechanically bonded analogues. This effect is referred to as the "catenand effect". The augmented non-covalent interactions in interlocked systems compared to non-interlocked systems have found utility in the strong and selective binding of a range of charged species, enabling the development of interlocked systems for the extraction of a range of salts. This increase in strength of non-covalent interactions is attributed to the loss of degrees of freedom upon the formation of a mechanical bond. The increase in strength of non-covalent interactions is more pronounced in smaller interlocked systems, where more degrees of freedom are lost, as compared to larger mechanically interlocked systems where the change in degrees of freedom is lower. Therefore, if the ring in a rotaxane is made smaller, the strength of non-covalent interactions increases; the same effect is observed if the thread is made smaller as well. Mechanical bonding effects on chemical reactivity The mechanical bond can reduce the kinetic reactivity of the products; this is ascribed to the increased steric hindrance. Because of this effect, hydrogenation of an alkene on the thread of a rotaxane is significantly slower than for the equivalent non-interlocked thread. This effect has allowed for the isolation of otherwise reactive intermediates. The ability to alter reactivity without altering covalent structure has led to MIMAs being investigated for a number of technological applications. Applications of mechanical bonding in controlling chemical reactivity The ability of a mechanical bond to reduce reactivity and hence prevent unwanted reactions has been exploited in a number of areas. One of the earliest applications was in the protection of organic dyes from environmental degradation. Examples Olympiadane References Further reading Supramolecular chemistry Molecular topology
Mechanically interlocked molecular architectures
[ "Chemistry", "Materials_science", "Mathematics" ]
1,104
[ "Molecular topology", "Topology", "nan", "Nanotechnology", "Supramolecular chemistry" ]
4,049,168
https://en.wikipedia.org/wiki/Glass%20cloth
Glass cloth is a textile material woven from glass fiber yarn. Home and garden Glass cloth was originally developed to be used in greenhouse paneling, allowing sunlight's ultraviolet rays to be filtered out, while still allowing visible light through to plants. Glass cloth is also a term for a type of tea towel suited for polishing glass. The cloth is usually woven with the plain weave, and may be patterned in various ways, though checked cloths are the most common. The original cloth was made from linen, but a large quantity is made with cotton warp and tow weft, and in some cases they are composed entirely of cotton. Short fibres of the cheaper kind are easily detached from the cloth. In the Southern Plains during the Dust Bowl, states' health officials recommended attaching translucent glass cloth to the inside frames of windows to help in keeping the dust out of buildings, although people also used paperboard, canvas or blankets. Eyewitness accounts indicate they were not completely successful. Use in technology Given the properties of glass - in particular, its heat resistance and inability to ignite - glass is often used to create fire barriers in hazardous environments, such as those inside racecars. Due to its poor flexibility and ability to cause skin irritation, glass fibers are typically inadequate for use in apparel. However, the bi-directional strength of glass cloth has found utility in some fiberglass reinforced plastics. The Rutan VariEze homebuilt aircraft uses a moldless glass-cloth/epoxy composite, which acts as a protective skin. Glass cloth is also commonly used as a reinforcing lattice for pre-pregs. See also G-10 (material) Glass fiber References Woven fabrics Linens Fiberglass Composite materials Fibre-reinforced polymers Glass applications
Glass cloth
[ "Physics", "Chemistry", "Materials_science" ]
355
[ "Composite materials", "Fiberglass", "Materials", "Polymer chemistry", "Matter" ]
4,049,625
https://en.wikipedia.org/wiki/Noise%20barrier
A noise barrier (also called a soundwall, noise wall, sound berm, sound barrier, or acoustical barrier) is an exterior structure designed to protect inhabitants of sensitive land use areas from noise pollution. Noise barriers are the most effective method of mitigating roadway, railway, and industrial noise sources – other than cessation of the source activity or use of source controls. In the case of surface transportation noise, other methods of reducing the source noise intensity include encouraging the use of hybrid and electric vehicles, improving automobile aerodynamics and tire design, and choosing low-noise paving material. Extensive use of noise barriers began in the United States after noise regulations were introduced in the early 1970s. History Noise barriers have been built in the United States since the mid-twentieth century, when vehicular traffic burgeoned. The first noise barrier was built along I-680 in Milpitas, California. In the late 1960s, analytic acoustical technology emerged to mathematically evaluate the efficacy of a noise barrier design adjacent to a specific roadway. By the 1990s, noise barriers that included the use of transparent materials were being designed in Denmark and other western European countries. The best of these early computer models considered the effects of roadway geometry, topography, vehicle volumes, vehicle speeds, truck mix, road surface type, and micro-meteorology. Several U.S. research groups developed variations of the computer modeling techniques: Caltrans Headquarters in Sacramento, California; the ESL Inc. group in Sunnyvale, California; the Bolt, Beranek and Newman group in Cambridge, Massachusetts; and a research team at the University of Florida. Possibly the earliest published work that scientifically designed a specific noise barrier was the study for the Foothill Expressway in Los Altos, California. Numerous case studies across the U.S. soon addressed dozens of different existing and planned highways. Most were commissioned by state highway departments and conducted by one of the four research groups mentioned above. The U.S. National Environmental Policy Act, enacted in 1970, effectively mandated the quantitative analysis of noise pollution from every Federal-Aid Highway Act project in the country, propelling noise barrier model development and application. With passage of the Noise Control Act of 1972, demand for noise barrier design soared as a host of spinoff noise regulations followed. By the late 1970s, more than a dozen research groups in the U.S. were applying similar computer modeling technology and addressing at least 200 different locations for noise barriers each year. This technology is now considered a standard in the evaluation of noise pollution from highways, and the nature and accuracy of the computer models used are nearly identical to the original 1970s versions of the technology. Small, purposeful gaps exist in most noise barriers to allow firefighters to access nearby fire hydrants and pull through fire hoses; these gaps are usually denoted by a sign indicating the nearest cross street and a pictogram of a fire hydrant, though some hydrant gaps channel the hoses through small culvert channels beneath the wall. Design The acoustical science of noise barrier design is based upon treating a roadway or railway as a line source. The theory is based upon blockage of sound ray travel toward a particular receptor; however, diffraction of sound must be addressed. 
Sound waves bend (downward) when they pass an edge, such as the apex of a noise barrier. Barriers that block the line of sight of a highway or other source will therefore block more sound. Further complicating matters is the phenomenon of refraction, the bending of sound rays in the presence of an inhomogeneous atmosphere. Wind shear and thermoclines produce such inhomogeneities. The sound sources modeled must include engine noise, tire noise, and aerodynamic noise, all of which vary by vehicle type and speed. The noise barrier may be constructed on private land, on a public right-of-way, or on other public land. Because sound levels are measured using a logarithmic scale, a reduction of nine decibels is equivalent to elimination of approximately 87 percent of the unwanted sound power. Materials Several different materials may be used for sound barriers, including masonry, earthwork (such as an earth berm), steel, concrete, wood, plastics, insulating wool, or composites. Walls that are made of absorptive material mitigate sound differently than hard surfaces. It is also possible to make noise barriers with active materials such as solar photovoltaic panels to generate electricity while also reducing traffic noise. A wall with a porous surface material and sound-dampening content material can be absorptive, where little or no noise is reflected back towards the source or elsewhere. Hard surfaces such as masonry or concrete are considered to be reflective, where most of the noise is reflected back towards the noise source and beyond. Noise barriers can be effective tools for noise pollution abatement, but certain locations and topographies are not suitable for use of noise barriers. Cost and aesthetics also play a role in the choice of noise barriers. In some cases, a roadway is surrounded by a noise abatement structure or dug into a tunnel using the cut-and-cover method. Disadvantages Potential disadvantages of noise barriers include: blocked vision for motorists and rail passengers (glass elements in noise screens can reduce visual obstruction, but require regular cleaning); aesthetic impact on land- and townscape; an expanded target for graffiti, unsanctioned guerilla advertising, and vandalism; creation of spaces hidden from view and social control (e.g. at railway stations); and the possibility of bird–window collisions for large and clear barriers. Effects on air pollution Roadside noise barriers have been shown to reduce near-road air pollution concentration levels. Within 15–50 m from the roadside, air pollution concentration levels at the lee side of the noise barriers may be reduced by up to 50% compared to open road values. Noise barriers force the pollution plumes coming from the road to move up and over the barrier, creating the effect of an elevated source and enhancing vertical dispersion of the plume. The deceleration and the deflection of the initial flow by the noise barrier force the plume to disperse horizontally. A highly turbulent shear zone characterized by slow velocities and a re-circulation cavity is created in the lee of the barrier, which further enhances the dispersion; this mixes ambient air with the pollutants downwind behind the barrier. See also Health effects from noise Noise control Safety barrier Soundproofing References External links Environmental engineering Noise pollution Noise control Road infrastructure Acoustics Sound 1970s introductions
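A quick check of the decibel arithmetic used above: on the decibel scale, a reduction of X dB leaves a fraction 10^(−X/10) of the original sound power, so 9 dB removes roughly 87% and 10 dB removes 90%. The short Python snippet below is only an illustration of that formula.

def fraction_eliminated(reduction_db):
    # A reduction of X dB leaves 10**(-X/10) of the sound power.
    return 1 - 10 ** (-reduction_db / 10)

for db in (3, 6, 9, 10):
    print(f"{db} dB reduction eliminates {fraction_eliminated(db):.1%} of the sound power")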
Noise barrier
[ "Physics", "Chemistry", "Engineering" ]
1,322
[ "Chemical engineering", "Classical mechanics", "Acoustics", "Civil engineering", "Environmental engineering" ]
4,051,468
https://en.wikipedia.org/wiki/Plutonium-238
Plutonium-238 ( or Pu-238) is a radioactive isotope of plutonium that has a half-life of 87.7 years. Plutonium-238 is a very powerful alpha emitter; as alpha particles are easily blocked, this makes the plutonium-238 isotope suitable for usage in radioisotope thermoelectric generators (RTGs) and radioisotope heater units. The density of plutonium-238 at room temperature is about 19.8 g/cc. The material will generate about 0.57 watts per gram of 238Pu. The bare sphere critical mass of metallic plutonium-238 is not precisely known, but its calculated range is between 9.04 and 10.07 kilograms. History Initial production Plutonium-238 was the first isotope of plutonium to be discovered. It was synthesized by Glenn Seaborg and associates in December 1940 by bombarding uranium-238 with deuterons, creating neptunium-238. + → + 2 The neptunium isotope then undergoes β− decay to plutonium-238, with a half-life of 2.12 days: → + + Plutonium-238 naturally decays to uranium-234 and then further along the radium series to lead-206. Historically, most plutonium-238 has been produced by Savannah River in their weapons reactor, by irradiating neptunium-237 (half life ) with neutrons. + → Neptunium-237 is a by-product of the production of plutonium-239 weapons-grade material, and when the site was shut down in 1988, 238Pu was mixed with about 16% 239Pu. Manhattan Project Plutonium was first synthesized in 1940 and isolated in 1941 by chemists at the University of California, Berkeley. The Manhattan Project began shortly after the discovery, with most early research (pre-1944) carried out using small samples manufactured using the large cyclotrons at the Berkeley Rad Lab and Washington University in St. Louis. Much of the difficulty encountered during the Manhattan Project regarded the production and testing of nuclear fuel. Both uranium and plutonium were eventually determined to be fissile, but in each case they had to be purified to select for the isotopes suitable for an atomic bomb. With World War II underway, the research teams were pressed for time. Micrograms of plutonium were made by cyclotrons in 1942 and 1943. In the fall of 1943 Robert Oppenheimer is quoted as saying "there's only a twentieth of a milligram in existence." By his request, the Rad Lab at Berkeley made available 1.2 mg of plutonium by the end of October 1943, most of which was taken to Los Alamos for theoretical work there. The world's second reactor, the X-10 Graphite Reactor built at a secret site at Oak Ridge, would be fully operational in 1944. In November 1943, shortly after its initial start-up, it was able to produce a minuscule 500 mg. However, this plutonium was mixed with large amounts of uranium fuel and destined for the nearby chemical processing pilot plant for isotopic separation (enrichment). Gram amounts of plutonium would not be available until spring of 1944. Industrial-scale production of plutonium only began in March 1945 when the B Reactor at the Hanford Site began operation. Plutonium-238 and human experimentation While samples of plutonium were available in small quantities and being handled by researchers, no one knew what health effects this might have. Plutonium handling mishaps occurred in 1944, causing alarm in the Manhattan Project leadership as contamination inside and outside the laboratories was becoming an issue. In August 1944, chemist Donald Mastick was sprayed in the face with a solution of plutonium chloride, causing him to accidentally swallow some. 
Nose swipes taken of plutonium researchers indicated that plutonium was being breathed in. Lead Manhattan Project chemist Glenn Seaborg, discoverer of many transuranium elements including plutonium, urged that a safety program be developed for plutonium research. In a memo to Robert Stone at the Chicago Met Lab, Seaborg wrote "that a program to trace the course of plutonium in the body be initiated as soon as possible ... [with] the very highest priority." This memo was dated January 5, 1944, prior to many of the contamination events of 1944 in Building D where Mastick worked. Seaborg later claimed that he did not at all intend to imply human experimentation in this memo, nor did he learn of its use in humans until far later due to the compartmentalization of classified information. With bomb-grade enriched plutonium-239 destined for critical research and for atomic weapon production, plutonium-238 was used in early medical experiments as it is unusable as atomic weapon fuel. However, 238Pu is far more dangerous than 239Pu due to its short half-life and being a strong alpha-emitter. It was soon found that plutonium was being excreted at a very slow rate, accumulating in test subjects involved in early human experimentation. This led to severe health consequences for the patients involved. From April 10, 1945, to July 18, 1947, eighteen people were injected with plutonium as part of the Manhattan Project. Doses administered ranged from 0.095 to 5.9 microcuries (μCi). Albert Stevens, after a (mistaken) terminal cancer diagnosis which seemed to include many organs, was injected in 1945 with plutonium without his informed consent. He was referred to as patient CAL-1 and the plutonium consisted of 3.5 μCi 238Pu, and 0.046 μCi 239Pu, giving him an initial body burden of 3.546 μCi (131 kBq) total activity. The fact that he had the highly radioactive plutonium-238 (produced in the 60-inch cyclotron at the Crocker Laboratory by deuteron bombardment of natural uranium) contributed heavily to his long-term dose. Had all of the plutonium given to Stevens been the long-lived 239Pu as used in similar experiments of the time, Stevens's lifetime dose would have been significantly smaller. The short half-life of 87.7 years of 238Pu means that a large amount of it decayed during its time inside his body, especially when compared to the 24,100 year half-life of 239Pu. After his initial "cancer" surgery removed many non-cancerous "tumors", Stevens survived for about 20 years after his experimental dose of plutonium before succumbing to heart disease; he had received the highest known accumulated radiation dose of any human patient. Modern calculations of his lifetime absorbed dose give a significant 64 Sv (6400 rem) total. Weapons The first application of 238Pu was its use in nuclear weapon components made at Mound Laboratories for Lawrence Radiation Laboratory (now Lawrence Livermore National Laboratory). Mound was chosen for this work because of its experience in producing the polonium-210-fueled Urchin initiator and its work with several heavy elements in a Reactor Fuels program. Two Mound scientists spent 1959 at Lawrence in joint development while the Special Metallurgical Building was constructed at Mound to house the project. Meanwhile, the first sample of 238Pu came to Mound in 1959. The weapons project called for the production of about 1 kg/year of 238Pu over a 3-year period. 
However, the 238Pu component could not be produced to the specifications despite a 2-year effort beginning at Mound in mid-1961. A maximum effort was undertaken with 3 shifts a day, 6 days a week, and ramp-up of Savannah River's 238Pu production over the next three years to about 20 kg/year. A loosening of the specifications resulted in productivity of about 3%, and production finally began in 1964. Use in radioisotope thermoelectric generators Beginning on January 1, 1957, Mound Laboratories RTG inventors Jordan & Birden were working on an Army Signal Corps contract (R-65-8-998 11-SC-03-91) to conduct research on radioactive materials and thermocouples suitable for the direct conversion of heat to electrical energy using polonium-210 as the heat source. In 1961, Capt. R. T. Carpenter had chosen 238Pu as the fuel for the first RTG (radioisotope thermoelectric generator) to be launched into space as auxiliary power for the Transit IV Navy navigational satellite. By January 21, 1963, the decision had yet to be made as to which isotope would be used to fuel the large RTGs for NASA programs. Early in 1964, Mound Laboratories scientists developed a different method of fabricating the weapon component that resulted in a production efficiency of around 98%. This made the excess Savannah River 238Pu production available for Space Electric Power use just in time to meet the needs of the SNAP-27 RTG on the Moon, the Pioneer spacecraft, the Viking Mars landers, more Transit Navy navigation satellites (precursor to today's GPS) and two Voyager spacecraft, for which all of the 238Pu heat sources were fabricated at Mound Laboratories. Radioisotope heater units were used in space exploration from the Apollo Lunar Radioisotope Heaters (ALRH) warming the Seismic Experiment placed on the Moon by the Apollo 11 mission, through several Moon and Mars rovers, to the 129 LWRHUs warming the experiments on the Galileo spacecraft. An addition to the Special Metallurgical building weapon component production facility was completed at the end of 1964 for 238Pu heat source fuel fabrication. A temporary fuel production facility was also installed in the Research Building in 1969 for Transit fuel fabrication. With completion of the weapons component project, the Special Metallurgical Building, nicknamed "Snake Mountain" because of the difficulties encountered in handling large quantities of 238Pu, ceased operations on June 30, 1968, with 238Pu operations taken over by the new Plutonium Processing Building, especially designed and constructed for handling large quantities of 238Pu. Plutonium-238 is given the highest relative hazard number (152) of all 256 radionuclides evaluated by Karl Z. Morgan et al. in 1963. Nuclear powered pacemakers In the United States, when plutonium-238 became available for non-military uses, numerous applications were proposed and tested, including the cardiac pacemaker program that began on June 1, 1966, in conjunction with NUMEC. The last of these units was implanted in 1988; lithium-powered pacemakers, which had an expected lifespan of 10 or more years without the disadvantages of radiation concerns and regulatory hurdles, had made these units obsolete. At the time of reporting, there were nine living people with nuclear-powered pacemakers in the United States, out of an original 139 recipients. When these individuals die, the pacemaker is supposed to be removed and shipped to Los Alamos, where the plutonium will be recovered.
A letter to the New England Journal of Medicine described a woman who received a Numec NU-5 pacemaker decades ago and whose unit is still operating; despite an original $5,000 price tag, equivalent to $23,000 in 2007 dollars, the follow-up costs have been about $19,000, compared with $55,000 for a battery-powered pacemaker. Another nuclear-powered pacemaker was the Medtronic "Laurens-Alcatel Model 9000". Approximately 1600 nuclear-powered cardiac pacemakers and/or battery assemblies have been located across the United States, and are eligible for recovery by the Off-Site Source Recovery Project (OSRP) Team at Los Alamos National Laboratory (LANL). Production Reactor-grade plutonium from spent nuclear fuel contains various isotopes of plutonium. 238Pu makes up only one or two percent, but it may be responsible for much of the short-term decay heat because of its short half-life relative to other plutonium isotopes. Reactor-grade plutonium is not useful for producing 238Pu for RTGs because difficult isotopic separation would be needed. Pure plutonium-238 is prepared by neutron irradiation of neptunium-237, one of the minor actinides that can be recovered from spent nuclear fuel during reprocessing, or by the neutron irradiation of americium in a reactor. The targets are purified chemically, including dissolution in nitric acid to extract the plutonium-238. A 100 kg sample of light water reactor fuel that has been irradiated for three years contains only about 700 grams (0.7% by weight) of neptunium-237, which must be extracted and purified. Significant amounts of pure 238Pu could also be produced in a thorium fuel cycle. In the US, the Department of Energy's Space and Defense Power Systems Initiative of the Office of Nuclear Energy processes 238Pu, maintains its storage, and develops, produces, transports and manages safety of radioisotope power and heating units for both space exploration and national security spacecraft. As of March 2015, a stockpile of 238Pu was available for civil space uses, though only part of the inventory remained in a condition meeting NASA specifications for power delivery. Some of this pool of 238Pu was used in a multi-mission radioisotope thermoelectric generator (MMRTG) for the 2020 Mars Rover mission and two additional MMRTGs for a notional 2024 NASA mission; a smaller amount would remain after that, part of it only just barely meeting the NASA specification. Since isotope content in the material is lost over time to radioactive decay while in storage, this stock could be brought up to NASA specifications by blending it with a smaller amount of freshly produced 238Pu with a higher content of the isotope, and therefore a higher energy density. U.S. production ceases and resumes The United States stopped producing bulk 238Pu with the closure of the Savannah River Site reactors in 1988. Since 1993, all of the 238Pu used in American spacecraft has been purchased from Russia. From 1992 to 1994, 10 kilograms were purchased by the US Department of Energy from Russia's Mayak Production Association. Under the agreement with Minatom, the US must use the plutonium for uncrewed NASA missions, and Russia must use the proceeds for environmental and social investment in the Chelyabinsk region, which has been affected by long-term radioactive contamination from events such as the Kyshtym disaster. Further purchases followed, but Russia is no longer producing 238Pu, and its own supply is reportedly running low.
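To make the storage-decay point above concrete, the following rough sketch uses the 0.57 W/g specific power and 87.7-year half-life quoted at the start of this article; the fuel mass and storage intervals in the example are illustrative assumptions, not figures from the article.

```python
# Rough estimate of 238Pu thermal output after time in storage.
# Uses the figures quoted in this article (0.57 W/g for fresh 238Pu,
# 87.7-year half-life); the fuel mass and storage intervals below are
# illustrative assumptions only.

HALF_LIFE_YEARS = 87.7         # 238Pu half-life
SPECIFIC_POWER_W_PER_G = 0.57  # thermal watts per gram of fresh 238Pu

def thermal_power_watts(mass_grams: float, years_in_storage: float) -> float:
    """Thermal power of a 238Pu heat source after exponential decay in storage."""
    remaining_fraction = 0.5 ** (years_in_storage / HALF_LIFE_YEARS)
    return mass_grams * SPECIFIC_POWER_W_PER_G * remaining_fraction

if __name__ == "__main__":
    fuel_grams = 4000.0  # hypothetical heat-source load of pure 238Pu
    for years in (0, 10, 20, 40):
        print(f"after {years:2d} years: {thermal_power_watts(fuel_grams, years):6.0f} W thermal")
```

Because the decay is exponential, material that has sat in storage for a few decades can drop below a power-density specification even though most of it is still 238Pu, which is why blending with freshly produced, higher-assay material can bring a stock back within specification. Only a few percent of this thermal output becomes electricity once a thermoelectric converter's efficiency (discussed under Applications below) is applied.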
In February 2013, a small amount of 238Pu was successfully produced by Oak Ridge's High Flux Isotope Reactor, and on December 22, 2015, the laboratory reported the production of a fresh batch of 238Pu. In March 2017, Ontario Power Generation (OPG) and its venture arm, Canadian Nuclear Partners, announced plans to produce 238Pu as a second source for NASA. Rods containing neptunium-237 will be fabricated by Pacific Northwest National Laboratory (PNNL) in Washington State and shipped to OPG's Darlington Nuclear Generating Station in Clarington, Ontario, Canada, where they will be irradiated with neutrons inside the reactor's core to produce 238Pu. In January 2019, it was reported that some automated aspects of its production had been implemented at Oak Ridge National Laboratory in Tennessee, which are expected to triple the number of plutonium pellets produced each week. The production rate is now expected to increase from 80 pellets per week to about 275 pellets per week, for a total production of about 400 grams per year. The goal now is to optimize and scale up the processes in order to produce an average of about 1.5 kg per year by 2025. Applications The main application of 238Pu is as the heat source in radioisotope thermoelectric generators (RTGs). The RTG was invented in 1954 by Mound scientists Ken Jordan and John Birden, who were inducted into the National Inventors Hall of Fame in 2013. They immediately produced a working prototype using a 210Po heat source, and on January 1, 1957, entered into an Army Signal Corps contract (R-65-8-998 11-SC-03-91) to conduct research on radioactive materials and thermocouples suitable for the direct conversion of heat to electrical energy using polonium-210 as the heat source. In 1966, a study reported by SAE International described the potential for the use of plutonium-238 in radioisotope power subsystems for applications in space. This study focused on power conversion through the Rankine cycle, Brayton cycle, thermoelectric conversion and thermionic conversion with plutonium-238 as the primary heating element. The heat supplied by the plutonium-238 heating element was consistent across the 400 °C to 1000 °C regime, but future technology could reach an upper limit of 2000 °C, further increasing the efficiency of the power systems. The Rankine cycle study reported an efficiency between 15 and 19% at the stated turbine inlet temperatures, whereas the Brayton cycle offered an efficiency greater than 20% at a higher inlet temperature. Thermoelectric converters offered low efficiency (3-5%) but high reliability. Thermionic conversion could provide efficiencies similar to the Brayton cycle if the proper conditions were reached. Plutonium-fueled RTG technology for cardiac pacemakers was developed by Los Alamos National Laboratory during the 1960s and 1970s. Of the 250 plutonium-powered pacemakers Medtronic manufactured, twenty-two were still in service more than twenty-five years later, a feat that no battery-powered pacemaker could achieve. This same RTG power technology has been used in spacecraft such as Pioneer 10 and 11, Voyager 1 and 2, Cassini–Huygens and New Horizons, and in other devices, such as the Mars Science Laboratory and Mars 2020 Perseverance Rover, for long-term nuclear power generation. See also Atomic battery Plutonium-239 Polonium-210 References External links Story of Seaborg's discovery of Pu-238, especially pages 34–35.
NLM Hazardous Substances Databank – Plutonium, Radioactive Fertile materials Isotopes of plutonium Radioisotope fuels Fissile materials
Plutonium-238
[ "Chemistry" ]
3,715
[ "Explosive chemicals", "Fissile materials", "Isotopes of plutonium", "Isotopes" ]
4,051,670
https://en.wikipedia.org/wiki/Secular%20resonance
A secular resonance is a type of orbital resonance between two bodies with synchronized precessional frequencies. In celestial mechanics, secular refers to the long-term motion of a system, and resonance means that two periods or frequencies are in a simple numerical ratio of small integers. Typically, the synchronized precessions in secular resonances are between the rates of change of the arguments of periapsis or the rates of change of the longitudes of the ascending nodes of two bodies in the system. Secular resonances can be used to study the long-term orbital evolution of asteroids and their families within the asteroid belt. Description Secular resonances occur when the precession of two orbits is synchronised (a precession of the perihelion, with frequency g, or of the ascending node, with frequency s, or both). A small body (such as a small Solar System body) in secular resonance with a much larger one (such as a planet) will precess at the same rate as the large body. Over relatively short time periods (a million years or so), a secular resonance will change the eccentricity and the inclination of the small body. One can distinguish between: linear secular resonances between a body (no subscript) and a single other large perturbing body (e.g. a planet, with subscripts numbered in order from the Sun), such as the ν6 = g − g6 secular resonance between asteroids and Saturn; and nonlinear secular resonances, which are higher-order resonances, usually combinations of linear resonances such as the z1 = (g − g6) + (s − s6) or the ν6 + ν5 = 2g − g6 − g5 resonances. ν6 resonance A prominent example of a linear resonance is the ν6 secular resonance between asteroids and Saturn. Asteroids whose perihelion precession rate approaches Saturn's (that is, asteroids entering the ν6 resonance) have their eccentricity slowly increased until they become Mars-crossers, at which point they are usually ejected from the asteroid belt by a close encounter with Mars. The resonance forms the inner and "side" boundaries of the asteroid belt around 2 AU and at inclinations of about 20°. See also Orbital resonance Asteroid belt References Orbital perturbations Orbital resonance
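As a purely illustrative sketch of the resonance condition described above (a combination frequency such as g − g6 going to zero), the following snippet checks whether a test body's proper precession frequencies put it near a linear or nonlinear resonance. The eigenfrequency values and the tolerance are assumed, approximate numbers used only for illustration, not data from this article.

```python
# Crude numerical check of secular-resonance conditions.
# The planetary eigenfrequencies below (arcsec/yr) are approximate,
# assumed values used only for illustration.

G6 = 28.25    # perihelion-precession eigenfrequency associated with Saturn
S6 = -26.35   # nodal eigenfrequency associated with Saturn

def nu6(g: float) -> float:
    """Linear combination frequency g - g6 for a small body with proper frequency g."""
    return g - G6

def z1(g: float, s: float) -> float:
    """Nonlinear combination (g - g6) + (s - s6)."""
    return (g - G6) + (s - S6)

def near_resonance(frequency: float, tolerance: float = 0.5) -> bool:
    """A combination frequency near zero means the precessions are nearly synchronised."""
    return abs(frequency) < tolerance

# Example: a body whose perihelion precesses at ~28.4 arcsec/yr sits close to nu6.
print(near_resonance(nu6(28.4)))          # True
print(near_resonance(nu6(35.0)))          # False
print(near_resonance(z1(28.4, -26.5)))    # True: nonlinear combination near zero
```

In practice, resonance membership is established by checking whether the corresponding resonant argument librates over long numerical integrations rather than by a simple frequency cut, so a check like this is only a first-pass filter.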
Secular resonance
[ "Physics", "Chemistry", "Astronomy" ]
449
[ "Scattering stubs", "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Scattering" ]
7,057,416
https://en.wikipedia.org/wiki/S-Adenosyl-L-homocysteine
S-Adenosyl-L-homocysteine (SAH) is the biosynthetic precursor to homocysteine. SAH is formed by the demethylation of S-adenosyl-L-methionine. Adenosylhomocysteinase converts SAH into homocysteine and adenosine. Biological role DNA methyltransferases are inhibited by SAH. Two S-adenosyl-L-homocysteine cofactor products can bind the active site of DNA methyltransferase 3B and prevent the DNA duplex from binding to the active site, which inhibits DNA methylation. References External links BioCYC E.Coli K-12 Compound: S-adenosyl-L-homocysteine Nucleosides Purines Alpha-Amino acids Amino acid derivatives
S-Adenosyl-L-homocysteine
[ "Chemistry", "Biology" ]
202
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
7,057,811
https://en.wikipedia.org/wiki/Microcontact%20printing
Microcontact printing (or μCP) is a form of soft lithography that uses the relief patterns on a master polydimethylsiloxane (PDMS) stamp or urethane rubber micro stamp to form patterns of self-assembled monolayers (SAMs) of ink on the surface of a substrate through conformal contact, as in the case of nanotransfer printing (nTP). Its applications are wide-ranging, including microelectronics, surface chemistry and cell biology. History Both lithography and stamp printing have been around for centuries. However, the combination of the two gave rise to the method of microcontact printing. The method was first introduced by George M. Whitesides and Amit Kumar at Harvard University. Since its inception, many methods of soft lithography have been explored. Procedure Preparing the master Creation of the master, or template, is done using traditional photolithography techniques. The master is typically created on silicon, but can be made on any solid surface that can be patterned. Photoresist is applied to the surface and patterned by a photomask and UV light. The master is then baked, developed and cleaned before use. In typical processes the photoresist is kept on the wafer and used as a topographic template for the stamp. Alternatively, the unprotected silicon regions can be etched and the photoresist stripped, leaving behind a patterned wafer for creating the stamp. This method is more complex but creates a more stable template. Creating the PDMS stamp After fabrication the master is placed in a walled container, typically a petri dish, and the liquid stamp material is poured over it. The PDMS stamp, in most applications, is a 10:1 mixture of silicone elastomer base and silicone elastomer curing agent. The curing agent supplies a short hydrosilane crosslinker together with a platinum-complex catalyst. After pouring, the PDMS is cured at elevated temperatures to create a solid polymer with elastomeric properties. The stamp is then peeled off and cut to the proper size. The stamp is a negative replica of the master: elevated regions of the stamp correspond to indented regions of the master. Some commercial services for procuring PDMS stamps and micropatterned samples exist, such as Research Micro Stamps. Inking the stamp Inking of the stamp occurs through the application of a thiol solution, either by immersion or by coating the stamp with a Q-tip. The highly hydrophobic PDMS material allows the ink to diffuse into the bulk of the stamp, which means the thiols reside not only on the surface but also in the bulk of the stamp material. This diffusion into the bulk creates an ink reservoir sufficient for multiple prints. The stamp is left to dry until no liquid is visible. Applying the stamp to the substrate Direct contact Applying the stamp to the substrate is straightforward, which is one of the main advantages of this process. The stamp is brought into physical contact with the substrate and the thiol ink is transferred to it. The thiol is transferred to the surface area-selectively, according to the features of the stamp. During the transfer, the carbon chains of the thiol molecules align with each other to create a hydrophobic self-assembled monolayer (SAM). Other application techniques Although used less often, printing can also be carried out by rolling a stamp over a planar substrate or by applying a planar stamp to a curved substrate.
Advantages Microcontact printing has several advantages, including: the simplicity and ease of creating patterns with micro-scale features; it can be done in a traditional laboratory without the constant use of a cleanroom (a cleanroom is needed only to create the master); multiple stamps can be created from a single master; individual stamps can be used several times with minimal degradation of performance; it is a cheaper fabrication technique that uses less energy than conventional techniques; and some materials have no other micropatterning method available. Disadvantages After this technique became popular, various limitations and problems arose, all of which affected patterning and reproducibility. Stamp deformation During direct contact, care must be taken because the stamp can easily be physically deformed, producing printed features that differ from those of the original stamp. Horizontally stretching or compressing the stamp will distort the raised and recessed features. Applying too much vertical pressure on the stamp during printing can also cause the raised relief features to flatten against the substrate. These deformations can yield unintended submicron features even though the original stamp has a lower resolution. Deformation of the stamp can occur during removal from the master and during the substrate contacting process. When the aspect ratio of the stamp features is high, buckling of the stamp can occur; when the aspect ratio is low, roof collapse can occur. Substrate contamination During the curing process some fragments of the PDMS can be left uncured and contaminate the process. When this occurs, the quality of the printed SAM is decreased. When the ink molecules contain certain polar groups, the transfer of these impurities is increased. Shrinking/swelling of the stamp During the curing process the stamp can shrink, leaving a mismatch between the desired and the printed pattern dimensions. Swelling of the stamp may also occur: most organic solvents induce swelling of the PDMS stamp. Ethanol in particular has a very small swelling effect, but many other solvents cannot be used for wet inking because of the high swelling they cause. Because of this, the process is limited to apolar inks that are soluble in ethanol. Ink mobility Ink diffuses from the PDMS bulk to the surface during the formation of the patterned SAM on the substrate. This mobility of the ink can cause lateral spreading into unwanted regions, and such spreading can distort the intended pattern upon transfer. Applications Depending on the type of ink and the substrate used, the microcontact printing technique has many different applications. Micromachining Microcontact printing has great utility in micromachining. For this application the ink is commonly a solution of an alkanethiol. This method uses metal substrates, with the most common metal being gold, although silver, copper, and palladium have been shown to work as well. Once the ink has been applied to the substrate, the SAM layer acts as a resist to common wet etching techniques, allowing the creation of high-resolution patterns. The patterned SAM layer is one step in a series of steps used to create complex microstructures. For example, applying the SAM layer on top of gold and then etching creates gold microstructures. The etched areas of gold then expose the underlying substrate, which can be etched further using traditional anisotropic etch techniques.
Because of the microcontact printing technique, no traditional photolithography is needed to accomplish these steps. Patterning proteins The patterning of proteins has helped advance biosensors, cell biology research, and tissue engineering. Various proteins have been shown to be suitable inks and have been applied to various substrates using the microcontact printing technique. Polylysine, immunoglobulin antibodies, and different enzymes have been successfully placed onto surfaces including glass, polystyrene, and hydrophobic silicon. Patterning cells Microcontact printing has been used to advance the understanding of how cells interact with substrates. The technique has enabled studies of cell patterning that were not possible with traditional cell culture techniques. Patterning DNA DNA has also been successfully patterned using this technique. The reductions in time and in the amount of DNA required are the critical advantages of using this technique. The stamps could be used multiple times and produced patterns that were more homogeneous and sensitive than those of other techniques. Making microchambers To learn about microorganisms, scientists need adaptable ways to capture and record the behavior of motile single-celled organisms across a diverse range of species. PDMS stamps can mold growth material into microchambers that then capture single-celled organisms for imaging. Technique improvements To help overcome the limitations of the original technique, several alternatives have been developed. High-speed printing: successful contact printing was done on a gold substrate with a contact time in the range of milliseconds. This printing time is three orders of magnitude shorter than in the normal technique, yet the pattern was still successfully transferred. The contact process was automated through a piezoelectric actuator to achieve these speeds. At these short contact times the surface spreading of thiol did not occur, greatly improving the pattern uniformity. Submerged printing: by submerging the stamp in a liquid medium, stability was greatly increased. By printing hydrophobic long-chain thiols underwater, the common problem of vapor transport of the ink is greatly reduced. PDMS aspect ratios of 15:1 were achieved using this method, which had not been accomplished before. Lift-off nanocontact printing: by first using silicon lift-off stamps, and later low-cost polymer lift-off stamps, and contacting these with an inked flat PDMS stamp, nanopatterns of multiple proteins or of complex digital nanodot gradients with dot spacings ranging from 0 nm to 15 μm were achieved for immunoassays and cell assays. Implementation of this approach led to the patterning of an array of 100 digital nanodot gradients, composed of more than 57 million protein dots 200 nm in diameter printed in 10 minutes over a 35 mm² area. Contact inking: as opposed to wet inking, in this technique the ink does not permeate the PDMS bulk. The ink molecules contact only the protruding areas of the stamp that are going to be used for the patterning. The absence of ink on the rest of the stamp reduces the amount of ink transferred through the vapor phase, which can otherwise affect the pattern. This is done by direct contact between a featured stamp and a flat PDMS pad that carries the ink. New stamp materials: to create uniform transfer of the ink, the stamp needs to be both mechanically stable and able to make good conformal contact.
These two requirements conflict, because high stability requires a high Young's modulus while efficient conformal contact requires greater elasticity. A composite, thin PDMS stamp with a rigid back support has been used for patterning to help solve this problem. Magnetic-field-assisted microcontact printing: a magnetic force is used to apply a homogeneous pressure during the printing step. For that, the stamp is made sensitive to a magnetic field by incorporating iron powder into a second layer of PDMS. The force can be adjusted for nano- and micro-patterns. Multiplexing: the macrostamp: the main drawback of microcontact printing for biomedical applications is that it is not possible to print different molecules with one stamp. To print different (bio)molecules in one step, a new concept has been proposed: the macrostamp. It is a stamp composed of dots whose spacing corresponds to the spacing between the wells of a microplate, making it possible to ink, dry and print different molecules in a single step. General references www.microcontactprinting.net: a website dealing with microcontact printing (articles, patents, theses, tips, education, ...) www.researchmicrostamps.com: a service that provides micro stamps via simple online sales. Footnotes Lithography (microfabrication)
Microcontact printing
[ "Materials_science" ]
2,339
[ "Nanotechnology", "Microtechnology", "Lithography (microfabrication)" ]
7,060,924
https://en.wikipedia.org/wiki/Hankinson%27s%20equation
Hankinson's equation (also called Hankinson's formula or Hankinson's criterion) is a mathematical relationship for predicting the off-axis uniaxial compressive strength of wood. The formula can also be used to compute the fiber stress or the stress wave velocity at the elastic limit as a function of grain angle in wood. For a wood that has uniaxial compressive strengths of σ_0 parallel to the grain and σ_90 perpendicular to the grain, Hankinson's equation predicts that the uniaxial compressive strength σ_α of the wood in a direction at an angle α to the grain is given by σ_α = (σ_0 σ_90) / (σ_0 sin^2 α + σ_90 cos^2 α). Even though the original relation was based on studies of spruce, Hankinson's equation has been found to be remarkably accurate for many other types of wood. A generalized form of the Hankinson formula has also been used for predicting the uniaxial tensile strength of wood at an angle to the grain. This formula has the form σ_α = (σ_0 σ_90) / (σ_0 sin^n α + σ_90 cos^n α), where the exponent n can take values between 1.5 and 2. The stress wave velocity V(α) at angle α to the grain at the elastic limit can similarly be obtained from the Hankinson formula V(α) = (V_0 V_90) / (V_0 sin^2 α + V_90 cos^2 α), where V_0 is the velocity parallel to the grain, V_90 is the velocity perpendicular to the grain and α is the grain angle. See also Material failure theory Linear elasticity Hooke's law Orthotropic material Transverse isotropy References Materials science Solid mechanics Equations
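A short script can make the reconstructed relation above concrete; the strength values used in the example are made-up illustrative numbers, not data from the article.

```python
# Minimal implementation of Hankinson's formula for off-axis strength.
import math

def hankinson(strength_parallel: float,
              strength_perpendicular: float,
              angle_degrees: float,
              n: float = 2.0) -> float:
    """Predicted strength at a given grain angle.

    strength_parallel and strength_perpendicular are the uniaxial strengths
    along and across the grain (any consistent unit); n is the exponent
    (2 in the original relation, 1.5-2 in the generalized form).
    """
    theta = math.radians(angle_degrees)
    numerator = strength_parallel * strength_perpendicular
    denominator = (strength_parallel * math.sin(theta) ** n
                   + strength_perpendicular * math.cos(theta) ** n)
    return numerator / denominator

# Illustrative example: 40 MPa parallel to the grain, 5 MPa perpendicular.
for angle in (0, 15, 30, 45, 90):
    print(f"{angle:2d} deg: {hankinson(40.0, 5.0, angle):5.1f} MPa")
```

The output falls smoothly from the parallel-to-grain value at 0° to the perpendicular value at 90°, which is the behaviour the formula is meant to capture.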
Hankinson's equation
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
286
[ "Solid mechanics", "Applied and interdisciplinary physics", "Mathematical objects", "Materials science", "Equations", "Mechanics", "nan" ]
7,062,643
https://en.wikipedia.org/wiki/Oil%20burner%20%28engine%29
An oil burner engine is a steam engine that uses oil as its fuel. The term is usually applied to a locomotive or ship engine that burns oil to heat water, producing the steam which drives the pistons or turbines from which the power is derived. This is mechanically very different from diesel engines, which use internal combustion, although they are sometimes colloquially referred to as oil burners. History A variety of experimental oil-powered steam boilers were patented in the 1860s. Most of the early patents used steam to spray atomized oil into the steam boiler's furnace. Attempts to burn oil from a free surface were unsuccessful due to the inherently low rates of combustion from the available surface area. On 21 April 1868, the steam yacht Henrietta made a voyage down the river Clyde powered by an oil-fired boiler designed and patented by a Mr Donald of George Miller & Co. Donald's design used a jet of dry steam to spray oil into a furnace lined with fireproof bricks. Prior to the Henrietta's oil burner conversion, George Miller & Co was recorded as having used oil to power their works in Glasgow for a "considerable time". During the late 19th century, numerous burner designs were patented using combinations of steam, compressed air and injection pumps to spray oil into boiler furnaces. Most of the early oil burner designs were commercial failures due to the high cost of oil (relative to coal) rather than any technical issues with the burners themselves. During the early 20th century, marine and large oil-burning steam engines generally used electric-motor-driven or steam-driven injection pumps. Oil would be drawn from a storage tank through suction strainers and across viscosity-reducing oil heaters. The oil would then be pumped through discharge strainers before entering the burners as a whirling mist. Combustion air was introduced through special furnace-fronts, which were fitted with dampers to regulate the supply. Steam ships In the 1870s, Caspian steamships began using mazut, a residual fuel oil which at that time was produced as a waste stream by the many oil refineries located in the Absheron peninsula. During the late 19th century, mazut remained cheap and plentiful in the Caspian region. In 1870, either the SS Iran or the SS Constantine (depending on the source) became the first ship to convert to burning fuel oil; both were Caspian-based merchant steamships. During the 1870s, the Imperial Russian Navy converted the ships of the Caspian fleet to oil burners, starting with the Khivenets in 1874. In 1894, the oil tanker SS Baku Standard became the first oil-burning vessel to cross the Atlantic Ocean. In 1903, the Red Star liner SS Kensington became the first passenger liner to make the Atlantic crossing with boilers fired by fuel oil. Fuel oil has a higher energy density than coal, and oil-powered ships did not need to employ stokers; however, coal remained the dominant power source for marine boilers throughout the 19th century, primarily due to the relatively high cost of fuel oil. Oil was used in marine boilers to a greater extent during the early 20th century. By 1939, about half the world's ships burned fuel oil; of these, about half had steam engines and the other half used diesel engines. Steam locomotives Oil burners designed by Thomas Urquhart were fitted to the locomotives of the Gryazi-Tsaritsyn railway in southern Russia.
Thomas Urquhart, who was employed as a Locomotive Superintendent by the Gryazi-Tsaritsyn Railway Company, began his experiments in 1874. By 1885, all the locomotives of the Gryazi-Tsaritsyn Railway had been converted to run on fuel oil. In Great Britain, an early pioneer of oil-burning railway locomotives was James Holden, of the Great Eastern Railway. In James Holden's system, steam was raised by burning coal before the oil fuel was turned on. Holden's first oil-burning locomotive, Petrolea, was a class T19 2-4-0. Built in 1893, Petrolea burned waste oil that the railway had previously been discharging into the River Lea. Due to the relatively low cost of coal, oil was rarely used on Britain's steam trains, and in most cases only where there was a shortage of coal. In the United States, the first oil-burning steam locomotive was in service on the Southern Pacific railroad by 1900. By 1915, there were 4,259 oil-burning steam locomotives in the United States, which represented 6.5% of all the locomotives then in service. Most oil burners were operated in areas west of the Mississippi, where oil was abundant. American usage of oil-burning steam locomotives peaked in 1945, when they accounted for around 20% of all the fuel consumed (measured by energy content) during rail freight operations. After WW2, both oil- and coal-burning steam locomotives were replaced by more efficient diesel engines and had been almost entirely phased out of service by 1960. Notable early oil-fired steamships Passenger liners SS Kensington NMS Regele Carol I (one oil fired + one coal fired boiler) SS Tenyo Maru SS George Washington Warships Re Umberto-class - Italian ironclad battleships equipped to burn a mix of coal and oil Rostislav - Russian battleship HMS Spiteful - British Royal Navy destroyer Paulding-class destroyers - US Navy Notable oil-fired steam locomotives General Most cab forward locomotives Some Fairlie locomotives Some steam locomotives used on heritage railways Advanced steam technology locomotives Australia NSWGR D55 Class NSWGR D59 Class VR J Class VR R Class WAGR U Class WAGR Pr Class India Darjeeling Himalayan Railway Nilgiri Mountain Railway Great Britain GER Class T19 GER Class P43 WD Austerity 2-10-0 (3672 converted in preservation). GWR oil burning steam locomotives (4965 Rood Ashton Hall to be converted during overhaul in preservation). New Zealand NZR JA class (North British-built locomotives only) NZR JB class NZR K class (1932) - converted from coal 1947-53 NZR KA class - converted from coal 1947-53 North America ('*' symbol indicates locomotive was converted or is being converted from coal-burning to oil-burning in either revenue service or excursion service) Sierra Railway 3 - Part of Railtown 1897 State Historic Park (Jamestown, CA) Sierra Railway 28 - Part of Railtown 1897 State Historic Park (Jamestown, CA) McCloud Railway 25 - Oregon Coast Scenic Railroad (Garibaldi, OR) Polson Logging Co.
2 - Albany & Eastern Railroad (Albany, OR) California Western 45 - California Western Railroad (Fort Bragg, CA) US Army Transportation Corps 1702* - Great Smoky Mountains Railroad (Bryson City, NC) Southern Railway 722* - Great Smoky Mountains Railroad (Bryson City, NC) Union Pacific 844* - UP Heritage Fleet (Cheyenne, WY) Union Pacific 4014* - UP Heritage Fleet (Cheyenne, WY) Union Pacific 3985* - Railroading Heritage of Midwest America (Silvis, IL) Union Pacific 5511 - Railroading Heritage of Midwest America (Silvis, IL) Union Pacific 737* - Double-T Agricultural Museum (Stevinson, CA) White Pass & Yukon Route 73 - White Pass & Yukon Route (Skagway, AK) Alaska Railroad 557* - Engine 557 Restoration Company (Anchorage, AK) Santa Fe 5000 - Amarillo, TX Santa Fe 3759* - Locomotive Park (Kingman, AZ) Santa Fe 3751* - San Bernardino Railroad Historical Society (San Bernardino, CA) Santa Fe 3450* - RailGiants Train Museum (Pomona, CA) Santa Fe 3415* - Abilene & Smoky Valley Railroad (Abilene, KS) Santa Fe 2926 - New Mexico Steam Locomotive & Railroad Historical Society (Albuquerque, NM) Santa Fe 1316* - Texas State Railroad (Palestine, TX) Texas & Pacific 610 - Texas State Railroad (Palestine, TX) Southern Pine Lumber Co. 28 - Texas State Railroad (Palestine, TX) Tremont & Gulf 30/Magma Arizona 7 - Texas State Railroad (Palestine, TX) Lake Superior & Ishpeming 18* - Colebrookdale Railroad (Boyertown, PA) Florida East Coast 148 - US Sugar Corporation (Clewiston, FL) Atlantic Coast Line 1504* - US Sugar Corporation (Clewiston, FL) Grand Canyon Railway 29* - Grand Canyon Railway (Williams, AZ) Grand Canyon Railway 4960* - Grand Canyon Railway (Williams, AZ) Oregon Railroad & Navigation 197 - Oregon Rail Heritage Center (Portland, OR) Spokane, Portland & Seattle 700 - Oregon Rail Heritage Center (Portland, OR) Southern Pacific 4449 - Oregon Rail Heritage Center (Portland, OR) Southern Pacific 4460 - National Museum of Transportation (Kirkwood, MO) Southern Pacific 4294 - California State Railroad Museum (Sacramento, CA) Southern Pacific 2479 - Niles Canyon Railway (Sunol, CA) Southern Pacific 2472 - Golden Gate Railroad Museum Southern Pacific 2467 - Pacific Locomotive Association Southern Pacific 2353 - Pacific Southwest Railway Museum (Campo, CA) Southern Pacific 1744 - Niles Canyon Railway (Sunol, CA) Southern Pacific 786 - Austin Steam Train Association, Inc. (Cedar Park, TX) Southern Pacific 745 - Louisiana Steam Train Association, Inc. (Jefferson, LA) Southern Pacific 18 - Eastern California Museum (Independence, CA) Cotton Belt 819 - Arkansas Railroad Museum (Pine Bluff, AR) Frisco 1522 - National Museum of Transportation (Kirkwood, MO) Southern Railway 401* - Monticello Railway Museum (Monticello, IL) Reading 2100* - American Steam Railroad Preservation Association (Cleveland, OH) See also Oil refinery Steam power during the Industrial Revolution Timeline of steam power References External links Fuel energy & steam traction Engine technology Energy conversion Combustion engineering Steam engine technology
Oil burner (engine)
[ "Technology", "Engineering" ]
2,022
[ "Engine technology", "Industrial engineering", "Combustion engineering", "Engines" ]
6,899,646
https://en.wikipedia.org/wiki/Lead-bismuth%20eutectic
Lead-Bismuth Eutectic or LBE is a eutectic alloy of lead (44.5 at%) and bismuth (55.5 at%) used as a coolant in some nuclear reactors, and is a proposed coolant for the lead-cooled fast reactor, part of the Generation IV reactor initiative. It has a melting point of 123.5 °C/254.3 °F (pure lead melts at 327 °C/621 °F, pure bismuth at 271 °C/520 °F) and a boiling point of 1,670 °C/3,038 °F. Lead-bismuth alloys with between 30% and 75% bismuth all have melting points below 200 °C/392 °F. Alloys with between 48% and 63% bismuth have melting points below 150 °C/302 °F. While lead expands slightly on melting and bismuth contracts slightly, LBE shows a negligible change in volume on melting. History The Soviet Alfa-class submarines used LBE as a coolant for their nuclear reactors throughout the Cold War. OKB Gidropress (the Russian developer of the VVER-type light-water reactors) has expertise in LBE reactors. The SVBR-75/100, a modern design of this type, is one example of the extensive Russian experience with this technology. Gen4 Energy (formerly Hyperion Power Generation), a United States firm connected with Los Alamos National Laboratory, announced plans in 2008 to design and deploy a uranium nitride-fueled small modular reactor cooled by lead-bismuth eutectic for commercial power generation, district heating, and desalinization. The proposed reactor, called the Gen4 Module, was planned as a 70 MWth reactor of the sealed modular type, factory assembled and transported to site for installation, and transported back to the factory for refuelling. Gen4 Energy ceased operations in 2018. Advantages Compared to sodium-based liquid metal coolants such as liquid sodium or NaK, lead-based coolants have significantly higher boiling points, meaning a reactor can be operated at much higher temperatures without risk of the coolant boiling. This improves thermal efficiency and could potentially allow hydrogen production through thermochemical processes. Lead and LBE also do not react readily with water or air, in contrast to sodium and NaK, which ignite spontaneously in air and react explosively with water. This means that lead- or LBE-cooled reactors, unlike sodium-cooled designs, would not need an intermediate coolant loop, which reduces the capital investment required for a plant. Both lead and bismuth are also excellent radiation shields, absorbing gamma radiation while being virtually transparent to neutrons. In contrast, sodium forms the potent gamma emitter sodium-24 (half-life 15 hours) under intense neutron irradiation, requiring a large radiation shield for the primary cooling loop. As heavy nuclei, lead and bismuth can be used as spallation targets for non-fission neutron production, as in accelerator transmutation of waste (see energy amplifier). Both lead-based and sodium-based coolants have the advantage of relatively high boiling points compared to water, meaning it is not necessary to pressurise the reactor even at high temperatures. This improves safety, as it reduces the probability of a loss-of-coolant accident (LOCA), and allows for passively safe designs. The thermodynamic cycle is also more efficient with a larger temperature difference between the hot and cold sides (the Carnot limit). However, a disadvantage of higher temperatures is the higher corrosion rate of metallic structural components in LBE, because their solubility in liquid LBE increases with temperature (dissolution of the metal into the coolant), and because of liquid metal embrittlement.
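To illustrate the Carnot-efficiency point made above, the following sketch compares ideal efficiency limits for two coolant outlet temperatures; the outlet and sink temperatures chosen are illustrative assumptions, not design values from the article.

```python
# Illustrative Carnot-limit comparison for different coolant outlet temperatures.
# The outlet and sink temperatures below are assumed example values.

def carnot_efficiency(t_hot_celsius: float, t_cold_celsius: float = 30.0) -> float:
    """Ideal Carnot efficiency for hot/cold reservoirs given in degrees Celsius."""
    t_hot_k = t_hot_celsius + 273.15
    t_cold_k = t_cold_celsius + 273.15
    return 1.0 - t_cold_k / t_hot_k

for label, outlet_c in (("pressurised-water outlet ~300 C", 300.0),
                        ("lead/LBE outlet ~550 C", 550.0)):
    print(f"{label}: Carnot limit ~ {carnot_efficiency(outlet_c):.0%}")
```

Real plant efficiencies are well below these ideal limits, but the comparison shows why a coolant that tolerates higher outlet temperatures raises the achievable efficiency ceiling.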
Limitations Lead and LBE coolants are more corrosive to steel than sodium, and this puts an upper limit on the velocity of coolant flow through the reactor for safety reasons. Furthermore, the higher melting points of lead and LBE (327 °C and 123.5 °C respectively) mean that solidification of the coolant can be a greater problem when the reactor is operated at lower temperatures. Finally, under neutron irradiation bismuth-209, the main isotope of bismuth present in LBE coolant, undergoes neutron capture and subsequent beta decay, forming polonium-210, a potent alpha emitter. The presence of radioactive polonium in the coolant would require special precautions to control alpha contamination during refueling of the reactor and when handling components that have been in contact with LBE. See also Subcritical reactor (accelerator-driven system) References External links NEA 2015 LBE Handbook Fusible alloys Nuclear reactor coolants Nuclear materials Bismuth
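For reference, the bismuth activation path described in the Limitations section can be written out as the chain below; the intermediate half-lives shown (about 5 days for 210Bi and about 138 days for 210Po) are standard values added here for context rather than figures taken from the article.

```latex
% Activation of bismuth to polonium-210 in an LBE-cooled core
\[
^{209}\mathrm{Bi} + n \;\longrightarrow\; ^{210}\mathrm{Bi}
\;\xrightarrow{\;\beta^-,\ t_{1/2}\,\approx\,5\ \mathrm{d}\;}\; ^{210}\mathrm{Po}
\;\xrightarrow{\;\alpha,\ t_{1/2}\,\approx\,138\ \mathrm{d}\;}\; ^{206}\mathrm{Pb}
\]
```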
Lead-bismuth eutectic
[ "Physics", "Chemistry", "Materials_science" ]
958
[ "Lead alloys", "Metallurgy", "Fusible alloys", "Materials", "Nuclear materials", "Alloys", "Matter" ]