text: string
id: string
dump: string
url: string
file_path: string
language: string
language_score: float64
token_count: int64
score: float64
int_score: int64
tags: list
matched_keywords: dict
match_summary: dict
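The field list above can be sketched as a typed record. This is a minimal sketch: the class name `CrawlRecord` is an illustrative assumption, while the field names and types are taken directly from the schema listed above (float64/int64 map to Python's `float`/`int`).

```python
from typing import TypedDict


class CrawlRecord(TypedDict):
    # Hypothetical name for the schema above; fields mirror the dump.
    text: str
    id: str
    dump: str
    url: str
    file_path: str
    language: str
    language_score: float
    token_count: int
    score: float
    int_score: int
    tags: list[str]
    matched_keywords: dict[str, list[str]]
    match_summary: dict[str, object]


# Example populated from the first record's metadata (text elided):
record: CrawlRecord = {
    "text": "The Solar and Heliospheric Observatory (SOHO) spacecraft ...",
    "id": "<urn:uuid:78cbe1bd-1849-4138-b59a-5521e93122a3>",
    "dump": "CC-MAIN-2013-20",
    "url": "http://phys.org/news4969.html",
    "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/...",
    "language": "en",
    "language_score": 0.943417,
    "token_count": 663,
    "score": 4.0,
    "int_score": 4,
    "tags": ["climate"],
    "matched_keywords": {"climate": ["climate change"], "nature": []},
    "match_summary": {"strong": 1, "weak": 0, "total": 1,
                      "decision": "accepted_strong"},
}
```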
The Solar and Heliospheric Observatory (SOHO) spacecraft is expected to discover its 1,000th comet this summer. The SOHO spacecraft is a joint effort between NASA and the European Space Agency. It has accounted for approximately one-half of all comet discoveries with computed orbits in the history of astronomy.

"Before SOHO was launched, only 16 sungrazing comets had been discovered by space observatories. Based on that experience, who could have predicted SOHO would discover more than 60 times that number, and in only nine years?" said Dr. Chris St. Cyr, senior project scientist for NASA's Living With a Star program at the agency's Goddard Space Flight Center, Greenbelt, Md. "This is truly a remarkable achievement!"

About 85 percent of the comets SOHO has discovered belong to the Kreutz group of sungrazing comets, so named because their orbits take them very close to the sun. The Kreutz sungrazers pass within 500,000 miles of the star's visible surface. Mercury, the planet closest to the sun, is about 36 million miles from the solar surface.

SOHO has also been used to discover three other well-populated comet groups: the Meyer, with at least 55 members; the Marsden, with at least 21 members; and the Kracht, with 24 members. These groups are named after the astronomers who suggested the comets are related because they have similar orbits.

Many comet discoveries have been made by amateurs using SOHO images on the Internet. SOHO comet hunters come from all over the world; the United States, United Kingdom, China, Japan, Taiwan, Russia, Ukraine, France, Germany, and Lithuania are among the many countries whose citizens have used SOHO to chase comets.

Almost all of SOHO's comets are discovered using images from its Large Angle and Spectrometric Coronagraph (LASCO) instrument. LASCO is used to observe the faint, multimillion-degree outer atmosphere of the sun, called the corona.
A disk in the instrument is used to make an artificial eclipse, blocking direct light from the sun so that the much fainter corona can be seen. Sungrazing comets are discovered when they enter LASCO's field of view as they pass close by the star.

"Building coronagraphs like LASCO is still more art than science, because the light we are trying to detect is very faint," said Dr. Joe Gurman, U.S. project scientist for SOHO at Goddard. "Any imperfections in the optics or dust in the instrument will scatter the light, making the images too noisy to be useful. Discovering almost 1,000 comets since SOHO's launch on December 2, 1995 is a testament to the skill of the LASCO team."

SOHO successfully completed its primary mission in April 1998. It has enough fuel to remain on station and keep hunting comets for decades, if LASCO continues to function.
<urn:uuid:78cbe1bd-1849-4138-b59a-5521e93122a3>
CC-MAIN-2013-20
http://phys.org/news4969.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.943417
663
4
4
[ "climate" ]
{ "climate": [ "climate change" ], "nature": [] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
- Yes, this is a good time to plant native grass seed in the ground. You may have to supplement with irrigation if the rains stop before the seeds have germinated and made good root growth.
- Which grasses should I plant? The wonderful thing about California is that we have so many different ecosystems; the challenging thing about California is that we have so many different ecosystems. It’s impossible for us to know definitively which particular bunchgrasses used to grow or may still grow at your particular site, but to make the best guesses possible, we recommend the following:
- Best-case scenario is to have bunchgrasses already on the site that you can augment through proper mowing or grazing techniques.
- Next best is to have a nearby site with native bunchgrasses and similar elevation, aspect, and soils that you can use as a model.
- After that, go to sources such as our pamphlet Distribution of Native Grasses of California, by Alan Beetle, $7.50.
- Also reference local floras of your area, available through the California Native Plant Society.

Container growing: We grow seedlings in pots throughout the season, but ideal planning for growing your own plants in pots is to sow six months before you want to put them in the ground. Though restorationists frequently use plugs and liners (long, narrow containers), and they may be required for large areas, we prefer growing them the horticultural way: first in flats, then transplanting into 4" pots, and, when they are sturdy little plants, into the ground. Our thinking is that since they are not tap-rooted but fibrous-rooted (one of their main advantages as far as deep erosion control is concerned), square 4" pots suit them, and so far our experience has borne this out.

In future newsletters, we will be reporting on the experiences and opinions of Marin ranchers Peggy Rathmann and John Wick, who are working with UC Berkeley researcher Wendy Silver on a study of carbon sequestration and bunchgrasses.
So far, it’s very promising. But more on that later. For now, I’ll end with a quote from Peggy, who grows, eats, nurtures, lives, and sleeps bunchgrasses, for the health of their land and the benefit of their cows. “It takes a while. But it’s so worth it.”
<urn:uuid:c183066d-32a9-42eb-91b6-191fdb0980c2>
CC-MAIN-2013-20
http://judithlarnerlowry.blogspot.com/2009/02/simplifying-california-native.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.956731
495
2.515625
3
[ "climate", "nature" ]
{ "climate": [ "carbon sequestration" ], "nature": [ "ecosystems" ] }
{ "strong": 2, "weak": 0, "total": 2, "decision": "accepted_strong" }
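Across the records in this dump, the `match_summary` field follows a consistent pattern: `total` equals `strong + weak`, and the `decision` is `accepted_strong` whenever at least one strong keyword match is present. A minimal sketch of that aggregation, assuming this inferred rule; the function name and the `accepted_weak`/`rejected` labels are guesses, not attested anywhere in the dump:

```python
def summarize_matches(strong: int, weak: int) -> dict:
    """Aggregate keyword-match counts into a match_summary-style dict.

    The acceptance rule is inferred from the records above: any strong
    match yields "accepted_strong". The other two labels are assumed.
    """
    if strong >= 1:
        decision = "accepted_strong"
    elif weak >= 1:
        decision = "accepted_weak"  # assumed label, not seen in the dump
    else:
        decision = "rejected"       # assumed label, not seen in the dump
    return {"strong": strong, "weak": weak,
            "total": strong + weak, "decision": decision}


# Matches the record above (2 strong, 0 weak):
print(summarize_matches(2, 0))
# → {'strong': 2, 'weak': 0, 'total': 2, 'decision': 'accepted_strong'}
```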
When he shot President Lincoln, John Wilkes Booth was 26 years old, and one of the nation’s most famous actors. (Charles DeForest Fredericks/National Portrait Gallery) John Wilkes Booth, a Maryland native, spent the war performing in theatrical productions. But the conflict was never far from his mind. In a letter to his mother, he expressed chagrin that he hadn’t joined the Confederate army, writing, “I have … begun to deem myself a coward, and to despise my own existence.” He was outraged by the reelection of Lincoln, whom he viewed as the instigator of all the country’s woes. The month after the inauguration, Booth learned that Lincoln would be attending a performance at Ford’s Theatre on April 14. That night, he crept into Lincoln’s theater box and shot him in the back of the head. It was the first time a president had been murdered. “Wanted” posters were issued for Booth, and on April 26, he was cornered in a tobacco barn and shot by a federal sergeant, acting against orders to bring him in alive. Several months later, Charles Creighton Hazewell, a frequent contributor, sought to make sense of the assassination—speculating that the plot may have been hatched in Canada (where a number of secessionist schemes had originated) and hinting at evidence that the plan had been endorsed at the highest levels of the Confederate government.—Sage Stossel The assassination of President Lincoln threw a whole nation into mourning … Of all our Presidents since Washington, Mr. Lincoln had excited the smallest amount of that feeling which places its object in personal danger. 
He was a man who made a singularly favorable impression on those who approached him, resembling in that respect President Jackson, who often made warm friends of bitter foes, when circumstances had forced them to seek his presence; and it is probable, that, if he and the honest chiefs of the Rebels could have been brought face to face, there never would have been civil war,—at least, any contest of grand proportions; for he would not have failed to convince them that all that they had any right to claim, and therefore all that they could expect their fellow-citizens to fight for, would be more secure under his government than it had been under the governments of such men as Pierce and Buchanan, who made use of sectionalism and slavery to promote the selfish interests of themselves and their party … Ignorance was the parent of the civil war, as it has been the parent of many other evils,—ignorance of the character and purpose of the man who was chosen President in 1860–61, and who entered upon official life with less animosity toward his opponents than ever before or since had been felt by a man elected to a great place after a bitter and exciting contest … That one of the most insignificant of [the secessionists’] number should have murdered the man whose election they declared to be cause for war is nothing strange, being in perfect keeping with their whole course. The wretch who shot the chief magistrate of the Republic is of hardly more account than was the weapon which he used. The real murderers of Mr. Lincoln are the men whose action brought about the civil war. Booth’s deed was a logical proceeding, following strictly from the principles avowed by the Rebels, and in harmony with their course during the last five years. 
The fall of a public man by the hand of an assassin always affects the mind more strongly than it is affected by the fall of thousands of men in battle; but in strictness, Booth, vile as his deed was, can be held to have been no worse, morally, than was that old gentleman who insisted upon being allowed the privilege of firing the first shot at Fort Sumter. Ruffin’s act is not so disgusting as Booth’s; but of the two men, Booth exhibited the greater courage,—courage of the basest kind, indeed, but sure to be attended with the heaviest risks, as the hand of every man would be directed against its exhibitor. Had the Rebels succeeded, Ruffin would have been honored by his fellows; but even a successful Southern Confederacy would have been too hot a country for the abode of a wilful murderer. Such a man would have been no more pleasantly situated even in South Carolina than was Benedict Arnold in England. And as he chose to become an assassin after the event of the war had been decided, and when his victim was bent upon sparing Southern feeling so far as it could be spared without injustice being done to the country, Booth must have expected to find his act condemned by every rational Southern man as a worse than useless crime, as a blunder of the very first magnitude. Had he succeeded in getting abroad, Secession exiles would have shunned him, and have treated him as one who had brought an ineffaceable stain on their cause, and also had rendered their restoration to their homes impossible. The pistol-shot of Sergeant Corbett saved him from the gallows, and it saved him also from the denunciations of the men whom he thought to serve. 
He exhibited, therefore, a species of courage that is by no means common; for he not only risked his life, and rendered it impossible for honorable men to sympathize with him, but he ran the hazard of being denounced and cast off by his own party … All Secessionists who retain any self-respect must rejoice that one whose doings brought additional ignominy on a cause that could not well bear it has passed away and gone to his account. It would have been more satisfactory to loyal men, if he had been reserved for the gallows; but even they must admit that it is a terrible trial to any people who get possession of an odious criminal, because they may be led so to act as to disgrace themselves, and to turn sympathy in the direction of the evil-doer … Therefore the shot of Sergeant Corbett is not to be regretted, save that it gave too honorable a form of death to one who had earned all that there is of disgraceful in that mode of dying to which a peculiar stigma is attached by the common consent of mankind. Whether Booth was the agent of a band of conspirators, or was one of a few vile men who sought an odious immortality, it is impossible to say. We have the authority of a high Government official for the statement that “the President’s murder was organized in Canada and approved at Richmond”; but the evidence in support of this extraordinary announcement is, doubtless for the best of reasons, withheld at the time we write. There is nothing improbable in the supposition that the assassination plot was formed in Canada, as some of the vilest miscreants of the Secession side have been allowed to live in that country … But it is not probable that British subjects had anything to do with any conspiracy of this kind. 
The Canadian error was in allowing the scum of Secession to abuse the “right of hospitality” through the pursuit of hostile action against us from the territory of a neutral … That a plan to murder President Lincoln should have been approved at Richmond is nothing strange; and though such approval would have been supremely foolish, what but supreme folly is the chief characteristic of the whole Southern movement? If the seal of Richmond’s approval was placed on a plan formed in Canada, something more than the murder of Mr. Lincoln was intended. It must have been meant to kill every man who could legally take his place, either as President or as President pro tempore. The only persons who had any title to step into the Presidency on Mr. Lincoln’s death were Mr. Johnson, who became President on the 15th of April, and Mr. Foster, one of the Connecticut Senators, who is President of the Senate … It does not appear that any attempt was made on the life of Mr. Foster, though Mr. Johnson was on the list of those doomed by the assassins; and the savage attack made on Mr. Seward shows what those assassins were capable of. But had all the members of the Administration been struck down at the same time, it is not at all probable that “anarchy” would have been the effect, though to produce that must have been the object aimed at by the conspirators. Anarchy is not so easily brought about as persons of an anarchical turn of mind suppose. The training we have gone through since the close of 1860 has fitted us to bear many rude assaults on order without our becoming disorderly. Our conviction is, that, if every man who held high office at Washington had been killed on the 14th of April, things would have gone pretty much as we have seen them go, and that thus the American people would have vindicated their right to be considered a self-governing race. 
It would not be a very flattering thought, that the peace of the country is at the command of any dozen of hardened ruffians who should have the capacity to form an assassination plot, the discretion to keep silent respecting their purpose, and the boldness and the skill requisite to carry it out to its most minute details: for the neglect of one of those details might be fatal to the whole project. Society does not exist in such peril as that.
<urn:uuid:b48891ec-4670-49b3-85a7-ec1a2ad95bf5>
CC-MAIN-2013-20
http://www.theatlantic.com/magazine/print/2012/02/assassination/308804/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.986341
2,194
3.8125
4
[ "nature" ]
{ "climate": [], "nature": [ "restoration" ] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
by Piter Kehoma Boll

Let’s expand the universe of Friday Fellow by presenting a plant for the first time! And what could be a better choice to start with than the famous Grandidier’s Baobab? Belonging to the species Adansonia grandidieri, this tree is one of the trademarks of Madagascar, being the biggest species of this genus found on the island. Reaching up to 30 m in height, with a massive trunk branched only at the very top, it has a unique look and is found only in southwestern Madagascar.

However, despite being so attractive and famous, it is classified as an endangered species on the IUCN Red List, with a declining population threatened by agricultural expansion. This tree is also heavily exploited: its vitamin C-rich fruits can be consumed fresh, and its seeds are used to extract oil. Its bark can also be used to make ropes, and many trees are found with scars due to the extraction of part of the bark. Having a fibrous trunk, baobabs are able to deal with drought, apparently by storing water inside it. There are no seed dispersers, which may be due to the extinction of the original disperser by human activities.

Originally occurring close to temporary water bodies in the dry deciduous forest, today many large trees are found in permanently dry terrain. This is probably due to human impact that changed the local ecosystem, letting it become drier than it was. Those areas have little or no ability to regenerate and will probably never go back to what they were; once the old trees die, there will be no more baobabs there.

- – -

Baum, D. A. (1995). A Systematic Revision of Adansonia (Bombacaceae). Annals of the Missouri Botanical Garden, 82, 440-470. DOI: 10.2307/2399893

Wikipedia. Adansonia grandidieri. Available online at <http://en.wikipedia.org/wiki/Adansonia_grandidieri>. Accessed on October 2, 2012.

World Conservation Monitoring Centre 1998. Adansonia grandidieri. In: IUCN 2012. IUCN Red List of Threatened Species. Version 2012.1. <www.iucnredlist.org>. Accessed on October 2, 2012.
<urn:uuid:10459212-d96b-47fa-9ead-4447c5ba731f>
CC-MAIN-2013-20
http://earthlingnature.wordpress.com/2012/10/05/friday-fellow-grandidiers-baobab/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.923648
488
3.703125
4
[ "climate", "nature" ]
{ "climate": [ "drought" ], "nature": [ "conservation", "ecosystem", "endangered species" ] }
{ "strong": 4, "weak": 0, "total": 4, "decision": "accepted_strong" }
Ki Tisa (Mitzvot) For more teachings on this portion, see the archives to this blog, below at March 2006. This week’s parasha is best known for the dramatic and richly meaningful story of the Golden Calf and the Divine anger, of Moses’ pleading on behalf of Israel, and the eventual reconciliation in the mysterious meeting of Moses with God in the Cleft of the Rock—subjects about which I’ve written at length, from various aspects, in previous years. Yet the first third of the reading (Exod 30:11-31:17) is concerned with various practical mitzvot, mostly focused on the ritual worship conducted in the Temple, which tend to be skimmed over in light of the intense interest of the Calf story. As this year we are concerned specifically with the mitzvot in each parasha, I shall focus on this section. These include: the giving by each Israelite [male] of a half-shekel to the Temple; the making of the laver, from which the priests wash their hands and feet before engaging in Divine service; the compounding of the incense and of the anointing oil; and the Shabbat. I shall focus here upon the washing of the hands. Hand-washing is a familiar Jewish ritual: it is, in fact, the first act performed by pious Jews upon awakening in the morning (some people even keep a cup of water next to their beds, so that they may wash their hands before taking even a single step); one performs a ritual washing of the hands before eating bread; before each of the daily prayers; etc. The section here dealing with the laver in the Temple (Exod 30:17-21) is also one of the four portions from the Torah recited by many each morning, as part of the section of the liturgy known as korbanot, chapters of Written and Oral Torah reminiscent of the ancient sacrificial system, that precede Pesukei de-Zimra. Sefer ha-Hinukh, at §106, explains the washing of hands as an offshoot of the honor due to the Temple and its service—one of many laws intended to honor, magnify, and glorify the Temple. 
Even if the priest was pure and clean, he must wash (literally, “sanctify”) his hands before engaging in avodah. This simple gesture of purification served as a kind of separation between the Divine service and everyday life. It added a feeling of solemnity, of seriousness, a sense that one was engaged in something higher, in some way separate from the mundane activities of regular life. (One hand-washing by kohanim, in the morning, was sufficient, unless they left the Temple grounds or otherwise lost the continuity of their sacred activity.) Our own netilat yadaim, whether before prayer or breaking bread, may be seen as a kind of halakhic carryover from the Temple service, albeit on the level of Rabbinic injunction. What is the symbolism of purifying one’s hands? Water, as a flowing element, as a solvent that washes away many of the things with which it comes in contact, is at once a natural symbol of both purity, and of the renewal of life. Mayim Hayyim—living waters—is an age old association. Torah is compared to water; water, constantly flowing, is constantly returning to its source. At the End of Days, “the land will be filled with knowledge of the Lord, like waters going down to the sea.” A small part of this is hinted in this simple, everyday gesture. “See that this nation is Your people” But I cannot pass over Ki Tisa without some comment on the incident of the Golden Calf and its ramifications. This week, reading through the words of the parasha in preparation for a shiur (what Ruth Calderon, founder of Alma, a secularist-oriented center for the study of Judaism in Tel Aviv, called “barefoot reading”—that is, naïve, without preconceptions), I discovered something utterly simple that I had never noticed before in quite the same way. At the beginning of the Calf incident, God tells Moses, who has been up on the mountain with Him, “Go down, for your people have spoiled” (32:7). A few verses later, when God asks leave of Moses (!) 
to destroy them, Moses begs for mercy on behalf of the people with the words “Why should Your anger burn so fiercely against Your people…” (v. 11). That is, God calls them Moses’ people, while Moses refers to them as God’s people. Subsequent to this exchange, each of them refers to them repeatedly in the third person, as “the people” or “this people” (העם; העם הזה). Neither of them refers to them, as God did in the initial revelation to Moses at the burning bush (Exodus 3:7 and passim), as “my people,” or with the dignified title “the children of Israel”—as if both felt a certain alienation, a distance from this tumultuous, capricious bunch. Only towards the end, after God agrees not to destroy them but still states “I will not go up with them,” promising instead to send an angel, does Moses say “See, that this nation is Your people” (וראה כי עמך הגוי הזה; 33:13). What does all this signify? Reading the peshat carefully, there is one inevitable conclusion: God wished to nullify His covenant with the people Israel. It is in this that there lies the true gravity, and uniqueness, of the Golden Calf incident. We are not speaking here, as we read elsewhere in the Bible—for example, in the two great Imprecations (tokhahot) in Lev 26 and Deut 28, or in the words of the prophets during the First Temple—merely of threats of punishment, however harsh, such as drought, famine, pestilence, enemy attacks, or even exile and slavery. There, the implicit message is that, after a period of punishment, a kind of moral purgation through suffering, things will be restored as they were. Here, the very covenant itself, the very existence of an intimate connection with God, hangs in the balance. God tells Moses, “I shall make of you a people,” i.e., instead of them. This, it seems to me, is the point of the second phase of this story.
Moses breaks the tablets; he and his fellow Levites go through the camp killing all those most directly implicated in worshipping the Calf; God recants and agrees not to destroy the people. However, “My angel will go before them” but “I will not go up in your midst” (33:2, 3). This should have been of some comfort; yet this tiding is called “this bad thing,” the people mourn, and remove the ornaments they had been wearing until then. Evidently, they understood the absence of God’s presence or “face” as a grave step; His being with them was everything. That is the true importance of the Sanctuary in the desert and the Tent of Meeting, where Moses speaks with God in the pillar of cloud (33:10). God was present with them there in a tangible way, in a certain way continuing the epiphany at Sinai. All that was threatened by this new declaration. Moses’ second round of appeals to God, in Exod 33:12-23, focuses on bringing God, as it were, to a full reconciliation with the people. This is the significance of the Thirteen Qualities of Mercy, of what I have called the Covenant in the Cleft of the Rock, the “faith of Yom Kippur” as opposed to that of Shavuot (see HY I: Ki Tisa; and note Prof. Jacob Milgrom’s observation that this chapter stands in the exact center, in a literary sense, of the unit known as the Hexateuch—Torah plus the Book of Joshua). But I would add two important points. One, that this is the first place in the Torah where we read about sin followed by reconciliation. After Adam and Eve ate of the fruit of the Garden, they were punished without hope of reprieve; indeed, their “punishment” reads very much like a description of some basic aspects of the human condition itself. Cain, after murdering Abel, was banished, made to wander the face of the earth. The sin of the brothers in selling Joseph, and their own sense of guilt, is a central factor in their family dynamic from then on, but there is nary a word of God’s response or intervention.
It would appear that God’s initial expectation in the covenant at Sinai was one of total loyalty and fidelity. The act of idolatry was an unforgivable breach of the covenant—much as adultery is generally perceived as a fundamental violation of the marital bond. Moses, in persuading God to recant of His jealousy and anger, to give the faithless people another chance, is thus introducing a new concept: a covenant that includes the possibility of even the most serious transgressions being forgiven; the knowledge that human beings are fallible, and that teshuvah and forgiveness are essential components of any economy of men living before a demanding God. The second, truly astonishing point is the role played by Moses in all this. Moshe Rabbenu, “the man of God,” is not only the great teacher of Israel, the channel through which they learn the Divine Torah, but also, as it were, one who teaches God Himself. It is God who “reveals His Qualities of Mercy” at the Cleft of the Rock; but without Moses cajoling, arguing, persuading (and note the numerous midrashim around this theme), “were it not for my servant Moses who stood in the breach,” all this would not have happened. It was Moses who elicited this response and who, so to speak, pushed God Himself to this new stage in His relation with Israel—to give up His expectations of perfection from His covenanted people, and to understand that living within a covenant means not rigid adherence to a set of laws, but a living relationship with real people, taking the bad with the good. (Again, the parallel to human relationships is obvious.)
<urn:uuid:c4c19472-691a-44c6-a55b-21fbb183475b>
CC-MAIN-2013-20
http://hitzeiyehonatan.blogspot.com/2008_02_01_archive.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.966594
2,269
2.671875
3
[ "climate" ]
{ "climate": [ "drought" ], "nature": [] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
“A remote Indian village is responding to global warming-induced water shortages by creating large masses of ice, or “artificial glaciers,” to get through the dry spring months. Located on the western edge of the Tibetan plateau, the village of Skara in the Ladakh region of India is not a common tourist destination. “It’s beautiful, but really remote and difficult to get to,” said Amy Higgins, a graduate student at the Yale School of Forestry & Environmental Studies who worked on the artificial glacier project. “A lot of people, when I met them in Delhi and I said I was going to Ladakh, they looked at me like I was going to the moon,” said Higgins, who is also a National Geographic grantee. People in Skara and surrounding villages survive by growing crops such as barley for their own consumption and for sale in neighboring towns. In the past, water for the crops came from meltwater originating in glaciers high in the Himalaya.” Read more: National Geographic
<urn:uuid:5050ac83-4770-4e9c-9b44-38ba46d2466e>
CC-MAIN-2013-20
http://peakwater.org/2012/02/artificial-glaciers-water-crops-in-indian-highlands/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.973301
226
3.78125
4
[ "climate" ]
{ "climate": [ "global warming" ], "nature": [] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
America's oil and natural gas industry is committed to protecting the environment and to continuously improving its hurricane preparation and response plans. After any hurricane or tropical storm, the goal is to return to full operations as quickly and as safely as possible. For the 2012 hurricane season, the industry continues to build upon critical lessons learned from 2008's major hurricanes, Gustav and Ike, as well as other powerful storms, such as 2005's Katrina and Rita and 2004's Ivan.

API plays two primary roles for the industry in preparing for hurricanes. First, it helps the industry gain a better understanding of the environmental conditions in and around the Gulf of Mexico during hurricane or tropical storm activity, and then assists industry in using that knowledge to make offshore and onshore facilities less vulnerable. Second, API collaborates with member companies, other industries, and federal, state and local governments to prepare for hurricanes and return operations as quickly and as safely as possible.

API member companies also independently work to improve preparedness for hurricanes and other natural or manmade disasters. They have, for example, reviewed and updated emergency response plans, established redundant communication paths and made pre-arrangements with suppliers to help ensure they have adequate resources during an emergency. The API Subcommittee on Offshore Structures, the International Association of Drilling Contractors, and the Offshore Operators Committee serve as a liaison to regulatory agencies, coordinate industry review of critical design standards and provide a forum for sharing lessons learned from previous hurricanes.
These combined efforts are critical since the Gulf of Mexico accounted for about 23 percent of the oil and 8 percent of total natural gas produced in the United States (approximately 82 percent of the oil supply comes from deepwater facilities), and the Gulf Coast region is home to almost half of the U.S. refining capacity.

Upstream (Exploration and Production)
During the major 2005 hurricanes, waves were higher and winds were stronger than anticipated in deeper parts of the Gulf, so the industry moved away from viewing it as a uniform body of water. Evaluating the effects of those and other storms helped scientists discover that the Central Gulf of Mexico was more prone to hurricanes because it acts as a gathering spot for warm currents that can strengthen a storm. The revised wind, wave and water current measurements ("metocean" data) prompted API to reassess its recommended practices (RPs) for industry operations in the region.
- The upstream segment continues to integrate the updated environmental (metocean) data on how powerful storms affect conditions in the Gulf of Mexico into its offshore structure design standards. This effort led to the publication in 2008 of an update to RP 2SK, Design and Analysis of Stationkeeping Systems for Floating Structures, which provides guidance for design and operation of Mobile Offshore Drilling Unit (MODU) mooring systems in the Gulf of Mexico during the hurricane season. API RP 95J, Gulf of Mexico Jack-up Operations for Hurricane Season, which recommends locating jack-up rigs on more stable areas of the sea floor and positioning platform decks higher above the sea surface, was also updated. API publications are available through our Search and Order system. In the past six years, API also has issued a number of bulletins to help industry better prepare for and bring production back online after Gulf hurricanes.
These include:
- Bulletin 2TD, Guidelines for Tie-downs on Offshore Production Facilities for Hurricane Season, which is aimed at better securing separate platform equipment.
- Bulletin 2INT-MET, Interim Guidance on Hurricane Conditions in the Gulf of Mexico, which provides updated metocean data for four regions of the Gulf, including wind velocities, deepwater wave conditions, ocean current information, and surge and tidal data.
- Bulletin 2INT-DG, Interim Guidance for Design of Offshore Structures for Hurricane Conditions, which explains how to apply the updated metocean data during design.
- Bulletin 2INT-EX, Interim Guidance for Assessment of Existing Offshore Structures for Hurricane Conditions, which assists owners/operators and engineers with existing facilities.
- Bulletin 2HINS, Guidance on Post-hurricane Structural Inspection of Offshore Structures, which provides guidance on determining if a structure sustained hurricane-induced damage that affects the safety of personnel, the primary structural integrity, or its ability to perform the purpose for which it was intended.

Production and Hurricanes (steps industry takes to prepare for and return after a storm)
- Days in advance of a tropical storm or hurricane moving toward or near their drilling and production operations, companies will evacuate all non-essential personnel and begin the process of shutting down production.
- As the storm gets closer, all personnel will be evacuated from the drilling rigs and platforms, and production is shut down. Drillships may relocate to a safe location. Operations in areas not forecast to take a direct hit from the storm often will be shut down as well because storms can change direction with little notice.
- After a storm has passed and it is safe to fly, operators will initiate "flyovers" of onshore and offshore facilities to evaluate damage from the air. For onshore facilities, these "flyovers" can identify flooding, facility damage, road or other infrastructure problems, and spills. Offshore "flyovers" look for damaged drilling rigs, platform damage, spills, and possible pipeline damage.
- Many offshore drilling rigs are equipped with GPS locator systems, which allow federal officials and drilling contractors to remotely monitor the rigs' location before, during and after a hurricane. If a rig is pulled offsite by the storm, locator systems allow crews to find and recover the rig as quickly and as safely as possible.
- Once safety concerns are addressed, operators will send assessment crews to offshore facilities to physically assess the facilities for damage.
- If facilities are undamaged, and ancillary facilities, like pipelines that carry the oil and natural gas, are undamaged and ready to accept shipments, operators will begin restarting production. Drilling rigs will commence operations.

Refineries and Pipelines
Despite sustaining unprecedented damage and supply outages during the 2005 and 2008 hurricanes, the industry quickly and safely brought refining and pipeline operations back online, delivering to consumers near-record levels of gasoline and record levels of distillate (diesel and heating oil) in 2008. The oil and oil-product pipelines operating on or near the Gulf of Mexico continue to review their assets and operations to minimize the potential impacts of storms and shorten the time it takes to recover. While there have been some shortages caused by hurricanes, supply disruptions have been temporary despite extensive damage to supporting infrastructure, such as electric power generation and distribution, production shut-ins and refinery shutdowns. Pipelines need a steady supply of crude oil or refined products to keep product flowing to its intended destinations.
To prepare for future severe storms, refiners and pipeline companies have:
- Worked with utilities to clarify priorities for electric power restoration critical to restarting operations and to help minimize significant disruptions to fuel distribution and delivery.
- Secured backup power generation equipment and worked with federal, state and local governments to ensure that pipelines and refineries are considered "critical" infrastructure for back-up power purposes.
- Established redundant communications systems to support continuity of operations and locate employees.
- Worked with vendors to pre-position food, water and transportation, and updated emergency plans to secure other emergency supplies and services.
- Provided additional training for employees who have participated in various exercises and drills.
- Reexamined and improved emergency response and business continuity plans.
- Strengthened onshore buildings and elevated equipment where appropriate to minimize potential flood damage.
- Worked with the states and local emergency management officials to provide documentation and credentials for employees who need access to disaster sites where access is restricted during an emergency.
- Participated in industry conferences to share best practices and improvement opportunities.

Refineries and hurricanes (steps industry takes to prepare for and return after a storm)
- Refiners, in the hours before a large storm makes landfall, will usually evacuate all non-essential personnel and begin shutting down or reducing operations.
- Operations in areas not forecast to take a direct hit from the storm often are shut down or curtailed as a precaution because storms can change direction with little notice.
- Once safe, teams come in to assess damage. If damage or flooding has occurred, it must be repaired and dealt with before the refinery can be brought back on-line.
- Other factors that can cause delays in restarting refineries include the availability of crude oil, electricity to run the plant and water used for cooling the process units.
- Refineries are complex. It takes more than a flip of a switch to get a refinery back up and running. Once a decision has been made that it is safe to restart, it can take several days before the facility is back to full operating levels. This is because the process units and associated equipment must be returned to operation in a staged manner to ensure a safe and successful startup.
- If facilities are undamaged or necessary repairs have been made, and ancillary facilities - like pipelines that carry the oil and natural gas - are undamaged and ready to accept shipments, operators will begin restarting production.

Pipelines and hurricanes (steps industry takes to prepare for and return after a storm)
- Pipeline operations can be impacted by storms, primarily through power outages, but also by direct damage.
- Damaged offshore pipelines require the hiring of divers, repairs and safety inspections before supplies can flow. Damaged onshore pipelines must be assessed, repaired and inspected before resuming operations.
- Without power, crude oil and petroleum products cannot be moved through pipelines. Operators routinely hold or lease back-up generators but need time to get them onsite.
- If there is no product put into pipelines because Gulf Coast/Gulf of Mexico crude or natural gas production has been curtailed, or because of refinery shutdowns, the crude and products already in the pipelines cannot be pushed out the other end.
- Wind damage to above-ground tanks at storage terminals can also impact supplies into the pipeline.

The 2008 hurricane season was very active, with 16 named storms, of which eight became hurricanes and five of those were major hurricanes. For the U.S.
oil and natural gas industry, the two most serious storms of 2008 were Hurricane Ike, which made landfall in mid-September near Baytown, Texas, and Hurricane Gustav, which made landfall on September 1 in Louisiana. Hurricane Gustav, a strong Category 2 storm, kept off-line oil and natural gas delivery systems and production platforms that had not yet been fully restored from a smaller storm two weeks earlier, and brought significant flooding as far north as Baton Rouge. Hurricane Ike, another strong Category 2 hurricane, caused significant portions of the production, processing, and pipeline infrastructure along the Gulf Coast in East Texas and Louisiana to shut down. Ike caused significant destruction to electric transmission and distribution lines, and this damage delayed the restart of major processing plants, pipelines, and refineries. As many as 3.7 million customers were without electric power following the storm, with about 2.5 million in Texas alone. At the peak of disruptions, more than 20 percent of total U.S. refinery capacity was idled. The Minerals Management Service - now called the Bureau of Ocean Energy Management, Regulation and Enforcement (BOEMRE) - estimated that 2,127 of the 3,800 total oil and natural gas production platforms in the Gulf of Mexico were exposed to hurricane conditions, with winds greater than 74 miles per hour, from Hurricanes Gustav and Ike. A total of 60 platforms were destroyed as a result of Hurricanes Gustav and Ike. Some platforms that had been previously reported as having extensive damage were reassessed and determined to be destroyed. The destroyed platforms produced 13,657 barrels of oil and 96.5 million cubic feet of natural gas daily, or 1.05 percent of the oil and 1.3 percent of the natural gas produced daily in the Gulf of Mexico.

The 2005 hurricane season was the most active in recorded history, shattering previous records.
According to the Department of Energy, refineries in the path of hurricanes Katrina and Rita, accounting for about 29 percent of U.S. refining capacity, were shut down at the peak of disruptions. Offshore, the Minerals Management Service (MMS) estimated 22,000 of the 33,000 miles of pipelines and 3,050 of the 4,000 platforms in the Gulf were in the direct paths of the two Category 5 storms. Together the storms destroyed 115 platforms and damaged 52 others. Even so, there was no loss of life among industry workers and contractors. An MMS report found "no accounts of spills from facilities on the federal Outer Continental Shelf that reached the shoreline; oiled birds or mammals; or involved any discoveries of oil to be collected or cleaned up."

Hurricane Ivan was the strongest hurricane of the 2004 season and one of the most powerful Atlantic hurricanes on record. It moved across the Gulf of Mexico to make landfall in Alabama. Ivan then looped across Florida and back into the Gulf, regenerating into a new tropical system, which moved into Louisiana and Texas. The MMS estimated approximately 150 offshore facilities and 10,000 miles of pipelines were in the direct path of Ivan. Seven platforms were destroyed and 24 others damaged. The oil and natural gas industry submitted numerous damage reports to MMS, including for mobile drilling rigs, offshore platforms, producing wells, topside systems including wellheads and production and processing equipment, risers, and pipeline systems that transport oil and gas ashore from offshore facilities.
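The production-share figures quoted above for the platforms destroyed in 2008 can be cross-checked with a couple of lines of arithmetic. The sketch below (variable names are illustrative, and the implied Gulf-wide totals inherit the rounding in the quoted figures) back-computes total daily Gulf production from the destroyed platforms' output and their stated percentage shares:

```python
# Figures quoted for platforms destroyed by Hurricanes Gustav and Ike (2008).
destroyed_oil_bpd = 13_657   # barrels of oil per day
destroyed_gas_mmcfd = 96.5   # million cubic feet of natural gas per day
oil_share = 0.0105           # 1.05% of daily Gulf oil production
gas_share = 0.013            # 1.3% of daily Gulf natural gas production

# Back-compute the implied Gulf-wide daily production totals.
implied_gulf_oil_bpd = destroyed_oil_bpd / oil_share
implied_gulf_gas_mmcfd = destroyed_gas_mmcfd / gas_share

print(f"Implied Gulf oil production: {implied_gulf_oil_bpd:,.0f} bbl/day")
print(f"Implied Gulf gas production: {implied_gulf_gas_mmcfd:,.0f} MMcf/day")
```

The implied totals come out to roughly 1.3 million barrels of oil and 7.4 billion cubic feet of gas per day, which is consistent with the Gulf's reported share of U.S. production cited earlier in this document.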
• Incubation: 18-20 days
• Clutch Size: 4 eggs
• Young Fledge: 16-21 days after hatching
• Typical Foods: insects, aquatic invertebrates and seeds

Female red phalaropes are stunning: they are a rich chestnut color with a dark crown and white face. However, virtually all Ohio birds are in drab non-breeding plumage.

Habitat and Habits
This species prefers the open waters of Lake Erie. It is most typically found along stone jetties and breakwalls in sheltered harbors. The flight call is similar to that of the red-necked phalarope, but generally higher pitched.

Reproduction and Care of the Young
Breeding takes place in Alaska and northern Canada. Nests are hollows in the ground of marshy tundra. The male raises the young.
Since 1993, RAN’s Protect-an-Acre program (PAA) has distributed more than one million dollars in grants to more than 150 frontline communities, Indigenous-led organizations, and allies, helping their efforts to secure protection for millions of acres of traditional territory in forests around the world. Rainforest Action Network believes that Indigenous peoples are the best stewards of the world’s rainforests and that frontline communities organizing against the extraction and burning of dirty fossil fuels deserve the strongest support we can offer. RAN established the Protect-an-Acre program to protect the world’s forests and the rights of their inhabitants by providing financial aid to traditionally under-funded organizations and communities in forest regions. Indigenous and frontline communities suffer disproportionate impacts to their health, livelihood and culture from extractive industry mega-projects and the effects of global climate change. That’s why Protect-an-Acre provides small grants to community-based organizations, Indigenous federations and small NGOs that are fighting to protect millions of acres of forest and keep millions of tons of CO2 in the ground. Our grants support organizations and communities that are working to regain control of and sustainably manage their traditional territories through land title initiatives, community education, development of sustainable economic alternatives, and grassroots resistance to destructive industrial activities. PAA is an alternative to “buy-an-acre” programs that seek to provide rainforest protection by buying tracts of land, but which often fail to address the needs or rights of local Indigenous peoples. Uninhabited forest areas often go unprotected, even if purchased through a buy-an-acre program. It is not uncommon for loggers, oil and gas companies, cattle ranchers, and miners to illegally extract resources from so-called “protected” areas. 
Traditional forest communities are often the best stewards of the land because their way of life depends upon the health of their environment. A number of recent studies add to the growing body of evidence that Indigenous peoples are better protectors of their forests than governments or industry. Based on the success of Protect-an-Acre, RAN launched The Climate Action Fund (CAF) in 2009 as a way to direct further resources and support to frontline communities and Indigenous peoples challenging the fossil fuel industry. Additionally, RAN has been a Global Advisor to Global Greengrants Fund (GGF) since 1995, identifying recipients for small grants to mobilize resources for global environmental sustainability and social justice using the same priority and criteria as we use for PAA and CAF. Through these three programs each year we support grassroots projects that result in at least:
Per Square Meter

Warm-up: Relationships in Ecosystems (10 minutes)
1. Begin this lesson by presenting the PowerPoint, "Per Square Meter."
2. After the presentation, ask students to think of animal relationships that correspond to each of the following types: competition, predation, parasitism, and mutualism.
a. For example, two animals that compete for food are lions and cheetahs (they compete for zebras and antelopes).
3. Record the different types of relationships on the board.

Activity One: My Own Square Meter (30 minutes)
1. Have students go outside and pick a small area (about a square meter each) to explore. It is preferable that this area be grassy or 'natural.' The school playground might be a good spot.
2. Each student should keep a list of both the living organisms and man-made products found in their area (e.g., grass, birds, insects, flowers, sidewalk). Students are allowed to collect a few specimens from this area to show to the class. If students do not have jars, they can draw their observations. *See Reproducible #1

Activity Two: Who Lives in Our Playground? (10 minutes)
1. After listing, collecting, and drawing specimens, students should return to the classroom and present their findings.
a. Have the students sit in a circle. Each student should read his or her list of findings out loud. If they collected specimens or drew observations, have them present them to the class.
2. Make a list of these findings on the board. Only write repeated findings once (to avoid writing grass as many times as there are students). Keep one list of living organisms and one list of man-made products.
3. For now, focus on the list of living organisms. As a class, help students name possible relationships between the organisms. See if they can find one of each type of relationship. For example, a bee on a flower is an example of mutualism because the nectar from the flower nourishes the bee and, in return, the bee pollinates the flower.

Activity Three: Humans and the Environment: Human Effect on One Square Meter (15 minutes)
1. Now that students have focused on the animal relationships of their square meter, it is time to examine the effect of humans on the natural environment. Focus on the human-made product list. Ask students to consider the possible relationships between the human-made products and the environment. Prompt a brief class discussion on the effects of man-made products on the environment. Use the following questions as guidelines.
a. What is the effect of an empty drink bottle (or any other piece of trash) in a grassy field? Will it decompose? Will it be used by an animal as a habitat or food?
Answer: Trash is an invasive man-made product. Most trash is non-biodegradable and is harmful to the environment and to ecosystem relationships. Therefore, it is a harmful addition to the square meter.
b. Who left the bottle there? Do you think they are still thinking about it? Did they leave it there on purpose? Why did they leave it there?
Answer: Most people litter thoughtlessly. They are not thinking about their actions and how they may affect the environment or ecosystems. It is important that people recognize that litter has a major effect on the environment.
c. What about a bench? Does a park bench have the same effect on the environment as a piece of trash?
Answer: A park bench can be considered a positive human-made product. It has little negative effect on the environment and even helps humans further appreciate ecosystems. It may even provide shelter or a perch for the ecosystem's living organisms.
d. Is there a difference between positive human-made products and negative ones? What are some examples of each?
Answer: Yes, there is a difference between positive and negative human-made products. Positive products have minimal effect on the functioning of ecosystems, whereas negative products have major effects on ecosystems. An example of a positive human-made product would be a solar-powered house. An example of a negative human-made product would be a car that produces a lot of pollution.

Wrap-Up: Our Classroom Eco-Web (20-30 minutes)
1. Have students create classroom artwork by illustrating the relationships within their ecosystems.
2. Each student should draw at least two components of his or her square meter.
3. After everyone has finished their illustrations, create a web relating the illustrations. Draw arrows between illustrated components with written indications of the type of relationship exemplified.
4. Post the finished product in the classroom so that students can see the interconnectedness of the earth's ecosystems.

Extension: Exploring Aquatic Ecosystems (Ongoing Activity)
Students can explore another type of ecosystem by creating a classroom aquarium or terrarium. The supplies for both of these mini ecosystems can be found at your local pet store. Students should help set up and maintain the aquarium or terrarium throughout the year. Periodically, students should observe how the mini-ecosystem is progressing, note changes, and assess the relationships between the organisms of the ecosystem. This way, students are able to directly participate in the functioning of a natural system. Another related activity might be to take your students on a field trip to a different ecosystem from that of your school. If you live near a river, lake, or ocean, take them there to explore different ecological relations. If you live in a city, examples of diverse ecosystems can be found at the local zoo or aquarium.
|Freshwater Mussels of the Upper Mississippi River System|
Mussel Conservation Activities

2005 Highlights: Possible fish predation of subadult Higgins eye was observed in the Upper Mississippi River, Pools 2 and 4.

Subadult Higgins eye pearlymussels (Lampsilis higginsii) from the Upper Mississippi River, Pools 2 and 4. Shell damage may be due to predation by fish (i.e., common carp or freshwater drum). Top photo by Mike Davis, Minnesota Department of Natural Resources; bottom photo by Gary Wege, U.S. Fish and Wildlife Service.

|Last updated on December 21, 2006
Karuk Tribe: Learning from the First Californians for the Next California Editor's Note: This is part of a series, Facing the Climate Gap, which looks at grassroots efforts in California low-income communities of color to address climate change and promote climate justice. This article was published in collaboration with GlobalPossibilities.org. The three sovereign entities in the United States are the federal government, the states and indigenous tribes, but according to Bill Tripp, a member of the Karuk Tribe in Northern California, many people are unaware of both the sovereign nature of tribes and the wisdom they possess when it comes to issues of climate change and natural resource management. “A lot of people don’t realize that tribes even exist in California, but we are stakeholders too, with the rights of indigenous peoples,” says Tripp. Tripp is an Eco-Cultural Restoration specialist at the Karuk Tribe Department of Natural Resources. In 2010, the tribe drafted an Eco-Cultural Resources Management Plan, which aims to manage and restore “balanced ecological processes utilizing Traditional Ecological Knowledge supported by Western Science.” The plan addresses environmental issues that affect the health and culture of the Karuk tribe and outlines ways in which tribal practices can contribute to mitigating the effects of climate change. Before climate change became a hot topic in the media, many indigenous and agrarian communities, because of their dependence upon and close relationship to the land, began to notice troubling shifts in the environment such as intense drought, frequent wildfires, scarcer fish flows and erratic rainfall. There are over 100 government-recognized tribes in California, which represent more than 700,000 people. The Karuk is the second largest Native American tribe in California and has over 3,200 members.
Their tribal lands include over 1.48 million acres within and around the Klamath and Six Rivers National Forests in Northwest California. Tribes like the Karuk are among the hardest hit by the effects of climate change, despite their traditionally low-carbon lifestyles. The Karuk, in particular, have experienced dramatic environmental changes in their forestlands and fisheries as a result of both climate change and misguided Federal and regional policies. The Karuk have long depended upon the forest to support their livelihood, cultural practices and nourishment. While wildfires have always been a natural aspect of the landscape, recent studies have shown that fires in northwestern California forests have risen dramatically in frequency and size due to climate-related and human influences. According to the California Natural Resources Agency, fires in California are expected to increase 100 percent due to increased temperatures and longer dry seasons associated with climate change. Some of the most damaging human influences on the Karuk include logging activities, which have depleted old-growth forests, and fire suppression policies created by the U.S. Forest Service in the 1930s that have limited cultural burning practices. Tripp says these policies have been detrimental to tribal traditions and the forest environment. “It has been huge to just try to adapt to the past 100 years of policies that have led us to where we are today. We have already been forced to modify our traditional practices to fit the contemporary political context,” says Tripp. Further, the construction of dams along the Klamath River by PacifiCorp (a utility company) has impeded access to salmon and other fish that are central to the Karuk diet. Fishing regulations have also had a negative impact.
Though the Karuk’s dependence on the land has left them vulnerable to the projected effects of climate change, it has also given them and other indigenous groups incredible knowledge to impart to western climate science. Historically, though, tribes have been largely left out of policy processes and decisions. The Karuk decided to challenge this historical pattern of marginalization by formulating their own Eco-Cultural Resources Management Plan. The Plan provides over twenty “Cultural Environmental Management Practices” that are based on traditional ecological knowledge and the “World Renewal” philosophy, which emphasizes the interconnectedness of humans and the environment. Tripp says the Plan was created in the hopes that knowledge passed down from previous generations will help strengthen Karuk culture and teach the broader community to live in a more ecologically sound way. “It is designed to be a living document…We are building a process of comparative learning, based on the principles and practices of traditional ecological knowledge to revitalize culturally relevant information as passed through oral transmission and intergenerational observations,” says Tripp. One of the highlights of the plan is to re-establish traditional burning practices in order to decrease fuel loads and the risk for more severe wildfires when they do happen. Traditional burning was used by the Karuk to burn off specific types of vegetation and promote continued diversity in the landscape. Tripp notes that these practices are an example of how humans can play a positive role in maintaining a sound ecological cycle in the forests. “The practice of utilizing fire to manage resources in a traditional way not only improves the use quality of forest resources, it also builds and maintains resiliency in the ecological process of entire landscapes,” explains Tripp.
Another crucial aspect of the Plan is the life cycle of fish, like salmon, that are central to Karuk food traditions and ecosystem health. Traditionally, the Karuk regulated fishing schedules to allow the first salmon to pass, ensuring that those most likely to survive made it to prime spawning grounds. There were also designated fishing periods and locations to promote successful reproduction. Tripp says regulatory agencies have established practices that are harmful to this cycle. “Today, regulatory agencies permit the harvest of fish that would otherwise be protected under traditional harvest management principles and close the harvest season when the fish least likely to reach the very upper river reaches are passing through,” says Tripp. The Karuk tribe is now working closely with researchers from universities such as the University of California, Berkeley and the University of California, Davis, as well as public agencies, so that this traditional knowledge can one day be accepted by mainstream and academic circles dealing with climate change mitigation and adaptation practices. According to the Plan, these land management practices are more cost effective than those currently practiced by public agencies; and, if implemented, they will greatly reduce taxpayer cost burdens and create employment. The Karuk hope to create a workforce development program that will hire tribal members to implement the plan’s goals, such as multi-site cultural burning practices. The Plan has a long way to full realization and Federal recognition. According to the National Indian Forest Resources Management Act and the National Environmental Protection Act, it must go through a formal review process. Besides that, the Karuk Tribe is still solidifying funding to pursue its goals. The work of California’s environmental stewards will always be in demand, and the Karuk are taking the lead in showing how community wisdom can be used to generate an integrated approach to climate change.
Such integrated and community-engaged policy approaches are rare throughout the state but are emerging in other areas. In Oakland, for example, the Oakland Climate Action Coalition engaged community members and a diverse group of social justice, labor, environmental, and business organizations to develop an Energy and Climate Action Plan that outlines specific ways for the City to reduce greenhouse gas emissions and create a sustainable economy. In the end, Tripp hopes the Karuk Plan will not only inspire others and address the global environmental plight, but also help to maintain the very core of his people. In his words: “Being adaptable to climate change is part of that, but primarily it is about enabling us to maintain our identity and the people in this place in perpetuity.” Dr. Manuel Pastor is Professor of Sociology and American Studies & Ethnicity at the University of Southern California, where he also directs the Program for Environmental and Regional Equity and co-directs USC’s Center for the Study of Immigrant Integration. His most recent books include Just Growth: Inclusion and Prosperity in America’s Metropolitan Regions (Routledge 2012; co-authored with Chris Benner), Uncommon Common Ground: Race and America’s Future (W.W. Norton 2010; co-authored with Angela Glover Blackwell and Stewart Kwoh), and This Could Be the Start of Something Big: How Social Movements for Regional Equity are Transforming Metropolitan America (Cornell 2009; co-authored with Chris Benner and Martha Matsuoka).
<urn:uuid:003baaf4-69c7-4ee7-b37f-468bf9b55842>
CC-MAIN-2013-20
http://www.resilience.org/stories/2012-10-19/karuk-tribe-learning-from-the-first-californians-for-the-next-california
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.945849
1,714
3.296875
3
[ "climate", "nature" ]
{ "climate": [ "adaptation", "climate change", "climate justice", "drought", "greenhouse gas" ], "nature": [ "ecological", "ecosystem", "ecosystem health", "restoration" ] }
{ "strong": 6, "weak": 3, "total": 9, "decision": "accepted_strong" }
John Langley Howard was a revolutionary regionalist painter known for depicting labor and industry in California as well as for his reverence for the natural world. Howard took a strong stance on social and environmental issues and used his art to communicate his strong emotional response to each of his subjects. Table of Contents John Langley Howard was born in 1902 into a respected family of artists and architects. His father, John Galen Howard, relocated the family to California in 1904 to become campus architect of the University of California, Berkeley. It was only after attending the very same campus his father helped to create that Howard suddenly decided he wanted to pursue a career as an artist and not an engineer as previously planned. Following this decision, Howard enrolled in the California Guild of Arts and Crafts in Oakland and then transferred to the Arts Students’ League in New York City. At the school, he met Kenneth Hayes Miller, who supported Howard’s attitude because he “taught the bare rudiments of painting and composition, and stressed the cultivation of the ultra-sensitive, intuitive approach” (Hailey 56). After saving his money, Howard travelled to Paris for six months to seek out his own artistic philosophy. However, it quickly became apparent to Howard that he placed more value on pure talent than professional training. In 1924, Howard left art school to pursue his career and marry his first wife, Adeline Day. He had his first one-person exhibition at the Modern Gallery in San Francisco in 1927. Shortly after, he attempted portraiture. Following the start of the Depression, Howard found himself appalled by the social conditions and began to follow “his own brand of Marxism.” Howard and his wife began to attend meetings of the Monterey John Reed Club, discussing politics and social concerns. Soon, the artist became determined to communicate society’s needs for the betterment of the future. 
His landscapes began to include industry and its effects to the surrounding region. In 1934, Howard was hired through the New Deal Public Works Art Project to create a mural for the inside of Coit Tower on Telegraph Hill in San Francisco depicting California industry. The project called for twenty-seven artists to be hired to paint frescos inside the newly erected monument funded by philanthropist Lillie Hitchcock Coit. Each artist was to depict a scene central to California living, including industry, agriculture, law, and street scenes of San Francisco. Howard’s completed fresco drew notorious attention for showing an unemployed worker reading Marxist materials, a gathered group of unemployed workers, and a man panning for gold while watching a wealthy couple outside of their limousine. In a nearby mural by Bernard Zakheim (1896-1985), Howard himself was used as a model. He is shown crumpling a newspaper and grabbing a Marxist book from a library shelf. This soon led to the artists being linked to a local group of striking dock workers. They were accused of attempting to lead a Communist revolution. Howard’s murals as well as the work of Clifford Wight (1900-1960) and Zakheim became highly scrutinized, and the uproar over the works led to a delay in opening Coit Tower. In order to protect their work from being defaced or completely destroyed, the muralists chose to sleep outside the tower. The SF Art Commission ultimately cancelled the opening of Coit Tower as a result of the controversy and did not open it until months later. During this time, Howard relocated his family to Santa Fe, New Mexico citing his son’s health concerns for almost two years before returning to Monterey in 1940. Following the onset of World War II, he had a renewed interest in landscape and soon ceased to include social commentary within his work, thus removing the human figure from his paintings. The artist divorced his first wife in 1949. 
In 1951, Howard’s art took another turn when he painted The Rape of the Earth, which railed against the destruction of nature by technology, making Howard one of the first “eco-artists.” During the same year he also married sculptor Blanche Phillips (1908-1976). He began illustrating for Scientific American magazine and used this medium to refine his technique. Howard’s landscapes began turning to “magic realism,” or “poetic realism” as Howard preferred to call it. This method is described as the use of naturalistic images and forms “to suggest relationships that cannot always be directly described in words” (Aldrich 184). His aim was to communicate a poetic and spiritual connection with the landscape depicted. Overall, Howard lived in more than 20 different locations during his career. In 1997, Howard attended the dedication of Pioneer Park at Coit Tower and was the only surviving member of the twenty-seven muralists included in the original project. The murals were restored by the City of San Francisco in 1990 after water damage and age dictated the need for restoration. Howard died at the age of 97 in his sleep at his Potrero Hill home in 1999.

II. AN ANALYSIS OF THE ARTIST'S WORK

“I think of painting as poetry and I think of myself as a representational poet. I want to describe my subject minutely, but I also want to describe my emotional response to it…what I’m doing is making a self-portrait in a peculiar kind of way.” – John Langley Howard

John Langley Howard was widely considered a wanderer and a free spirit. While Howard did receive academic training from the California Guild of Arts and Crafts in Oakland and the Arts Students’ League in New York City, he chose to align himself with instructors whose opinions of art education matched his predetermined beliefs. These teachers included Kenneth Hayes Miller (1876-1952), who valued an analytical, bare-bones approach to art instruction and supported greater personal development of intuitive talent. 
Howard expressed this viewpoint, stating: “I want everything to be meaningful in a descriptive way. I want expression and at the same time I want to control it down to a gnat’s eyebrow. I identify with my subject. I empathize with my subject” (Moss 62). In the 1920s, Howard became known as a Cezanne-influenced landscape artist and portraitist. Tempera, oil, and etching became his primary media while his subject matter turned to poetic and often spiritually infused imagery, which would resurface later in his career. Earth tones and very small brushstrokes were utilized, allowing Howard to refine his images. Howard exhibited frequently with his brothers Charles Howard (1899-1978) and Robert Howard (1896-1983). Critic Jehanne Bietry wrote of their joint Galerie Beaux Arts show that: “of (the Howard brothers), John Langley is the poet, the mystic and the most complex…there predominates in his work a certain quality, an element of sentiment that escapes definition but is the unmistakable trait by which one recognizes deeper art” (Hailey 60). It is significant that a critic would so accurately take note of Howard’s artistic aims at such an early stage, because what Bietry describes ultimately became the primary focus of Howard’s career. Howard experienced a dramatic change in medium when he was commissioned to paint a mural for the Coit Tower WPA project in 1934. The project was Howard’s first and only mural and provided the artist with an outlet for his newly discovered Marxist social beliefs. While Howard supported a political agenda rather explicitly in his image, his focus on deeper subject matter permeates the work. Most important to Howard is “the idea of human conflict that [he] pictorializes and deplores – man’s tragic flaw manifest again in this particular situation” (Nash 79). Howard’s work had progressed steadily into the realm of social realism until the backlash against the Coit Tower murals led him in a new direction. 
Howard abandoned explicit statements of social commentary and returned to his roots as a landscape painter. However, this did not prevent the artist from illustrating important issues because he then became one of the first “eco-artists.” Through his painting, Howard investigated the role of technology on the environment and used the San Francisco Bay Area as well as Monterey to demonstrate his point of view. He continued following his original artistic tendencies by delving into “magic realism” or “poetic realism” which utilized the spiritual connection that Howard sought to find within his work. Art critic Henrietta Shore recognized the balance that Howard achieved within his work, stating that he “is modern in that he is progressive, yet his work proves that he does not discard the traditions from which all fine art has grown” (Hailey 65). Overall, Howard’s career presents a unique portrait of individual expression and spiritual exploration.

1902 Born in Montclair, New Jersey
1920 Enrolls as an engineering major at UC Berkeley
1922 Realizes he wants to be an artist
1923-24 Attends Art Students’ League in New York
1924 Leaves art school
1924 Marries first wife, Adeline Day
1927 First one-person exhibition held at The Modern Gallery, San Francisco
1928 First child, Samuel, born
1930 Daughter Anne born
1934 Commissioned to paint Coit Tower mural, San Francisco
1940 Studies ship drafting and works as a ship drafter during World War II
1942 Serves as air raid warden in Mill Valley, CA
1949 Divorces his first wife
1950 Teaches at California School of Fine Arts, San Francisco
1951 Marries second wife, sculptor Blanche Phillips
1951 Moves to Mexico
1951 Paints The Rape of the Earth communicating his eco-friendly stance
1953-1965 Illustrates for Scientific American magazine
1958 Teaches at Pratt Institute Art School, Brooklyn, NY
1965 Moves to Hydra, Greece
1967 Moves to London
1970 Returns to California
1979 Blanche Phillips dies
1980 Marries Mary McMahon Williams
1999 
Died in his sleep at home, San Francisco, California

California Palace of the Legion of Honor, CA
City of San Francisco, CA
IBM Building, New York, NY
The Oakland Museum, CA
The Phillips Collection, Washington D.C.
San Francisco Museum of Modern Art, CA
Security Pacific National Bank Headquarters, Los Angeles, CA
Springfield Museum of Fine Arts
University of Utah, UT

1927 Modern Gallery, San Francisco, CA
1928 Beaux Arts Gallery, San Francisco, CA
1928 East-West Gallery, San Francisco, CA
1928-51 San Francisco Art Association, CA
1935 Paul Elder Gallery, San Francisco, CA
1936 Cincinnati Art Museum, OH
1936 Museum of Modern Art, San Francisco, CA
1939 Golden Gate International Exposition, Department of Fine Arts, Treasure Island, CA
1939 Museum of Modern Art, San Francisco, CA
1941 Carnegie Institute, Pittsburgh, PA
1943 Corcoran Gallery, Washington D.C.
1943 M. H. de Young Memorial Museum, San Francisco, CA
1946-47 Whitney Museum, NY
1947 Rotunda Gallery, City of Paris, San Francisco, CA
1952 Carnegie Institute, Pittsburgh, PA
1956 Santa Barbara Museum of Art, CA
1973 Capricorn Asunder Gallery, San Francisco, CA
1974 Lawson Galleries, San Francisco, CA
1976 de Saisset Art Gallery and Museum, CA
1982 San Francisco Museum of Modern Art Rental Gallery, San Francisco, CA
1983 California Academy of Sciences, CA
1983 Monterey Museum of Art, CA
1986 Charles Campbell Gallery, San Francisco, CA
1987 Martina Hamilton Gallery, NY
1988 Oakland Museum, CA
1989 Tobey C. Moss Gallery, CA
1991 M. H. de Young Memorial Museum, San Francisco, CA
1992 Tobey C. Moss Gallery, CA
1993 Tobey C. 
Moss Gallery, CA

California Society of Mural Painters’ and Writers’ and Artists’ Union
Carmel Art Association
Club Beaux Arts
San Francisco Art Association
Society of Mural Painters
Marin Society of Artists
Monterey John Reed Club

Anne Bremer Memorial Award for Painting, San Francisco Art Association
First Prize, Pepsi-Cola Annual “Portrait of America”
First Prize, San Francisco Art Association
Award, City of San Francisco Art Festival
Citation for Merit, Society of Illustrators, New York

- 1. Aldrich, Linda. “John Langley Howard.” American Scene Painting: California, 1930s and 1940s. Irvine: Westphal Publishing, 1991.
- 2. Hailey, Gene. “John Langley Howard…Biography and Works.” California Art Research Monographs, v. 17, p. 54-92. San Francisco: Works Progress Administration, 1936-1937.
- 3. Moss, Stacey. The Howards, First Family of Bay Area Modernism. Oakland Museum, 1988.
- 4. Nash, Steven A. Facing Eden: 100 Years of Landscape Art in the Bay Area. University of California Press, 1995.

IX. WORKS FOR SALE BY THIS ARTIST
<urn:uuid:b1ad8cf1-1721-4d74-824a-229b6b70a91b>
CC-MAIN-2013-20
http://www.sullivangoss.com/johnlangley_Howard/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.95768
2,757
3.625
4
[ "nature" ]
{ "climate": [], "nature": [ "restoration" ] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
A new tool to identify the calls of bat species could help conservation efforts. Because bats are nocturnal and difficult to observe or catch, the most effective way to study them is to monitor their echolocation calls. These sounds are emitted in order to hear the echo bouncing back from surfaces around the bats, allowing them to navigate, hunt and communicate. Many different measurements can be taken from each call, such as its minimum and maximum frequency, or how quickly the frequency changes during the call, and these measurements are used to help identify the species of bat. However, a paper by an international team of researchers, published in the Journal of Applied Ecology, asserts that poor standardisation of acoustic monitoring limits scientists’ ability to collate data. Kate Jones, chairwoman of the UK-based Bat Conservation Trust, told the BBC that “without using the same identification methods everywhere, we cannot form reliable conclusions about how bat populations are doing and whether their distribution is changing. Because many bats migrate between different European countries, we need to monitor bats at a European - as well as country - scale.” The team selected 1,350 calls from 34 different European bat species from EchoBank, a global echolocation library containing more than 200,000 bat call recordings. This raw data allowed them to develop the identification tool, iBatsID, which can identify 34 out of 45 European bat species. This free online tool works anywhere in Europe, and its creators claim it can identify most species correctly more than 80% of the time. There are 18 species of bat residing in the UK, including the common pipistrelle and the greater horseshoe bat. Monitoring bats is vital not just for these species, but for the whole ecosystem: bats are extremely sensitive to changes in their environment, so a decline in bat populations can be an indication that other species might be affected in the future.
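The call measurements described above can be sketched in code. The following is a minimal, illustrative Python example, not the iBatsID feature set (whose exact parameters are not given here): it estimates a call's minimum and maximum peak frequency and its average rate of frequency change from frame-by-frame FFT peaks, run on a synthetic downward frequency sweep with broadly pipistrelle-like numbers.

```python
import numpy as np

def call_features(signal, sample_rate, frame_len=256):
    """Estimate simple echolocation-call features: minimum and maximum
    peak frequency across frames, and the mean rate of frequency change
    (Hz per second, negative for a downward sweep)."""
    hop = frame_len // 2
    peaks = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        # Window each frame and find the frequency bin with most energy.
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
        peaks.append(freqs[np.argmax(spectrum)])
    peaks = np.array(peaks)
    duration = hop * (len(peaks) - 1) / sample_rate
    sweep_rate = (peaks[-1] - peaks[0]) / duration if duration > 0 else 0.0
    return peaks.min(), peaks.max(), sweep_rate

# Synthetic linear sweep from 60 kHz down to 30 kHz over 5 ms,
# sampled at 250 kHz -- a stand-in for a real recorded call.
sr = 250_000
t = np.arange(0, 0.005, 1.0 / sr)
f0, f1 = 60_000.0, 30_000.0
phase = 2 * np.pi * (f0 * t + (f1 - f0) / (2 * t[-1]) * t**2)
call = np.sin(phase)

fmin, fmax, rate = call_features(call, sr)
```

A real pipeline would add detection (separating calls from background noise) and feed such features to a classifier; this sketch only shows where the measurements come from.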
<urn:uuid:293a002d-f152-4885-9293-13b158f7cec0>
CC-MAIN-2013-20
http://www.countryfile.com/news/bats
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.93685
398
4.46875
4
[ "nature" ]
{ "climate": [], "nature": [ "conservation", "ecosystem" ] }
{ "strong": 2, "weak": 0, "total": 2, "decision": "accepted_strong" }
What Is Air Pollution? Air pollution in its great magnitude has existed throughout the 20th century, from the coal-burning industries of the early century to the fossil-fuel-burning technology of the new century. Air pollution is a major problem for highly developed nations, whose large industrial bases and highly developed infrastructures generate much of it. Every year, billions of tonnes of pollutants are released into the atmosphere; the sources range from power plants burning fossil fuels to the effects of sunlight on certain natural materials. But the air pollutants released from natural materials pose very little health threat; only the natural radioactive gas radon poses any threat to health. So much of the air pollution released into the atmosphere is the result of man's activities. In the United Kingdom, traffic is the major cause of air pollution in British cities. Eighty-six percent of families own either one or two vehicles. Because of the high population density of cities and towns, the number of people exposed to air pollutants is great. This has led to an increased number of people developing chronic diseases in recent years, as car ownership in the UK has nearly trebled. These include asthma and respiratory complaints, ranging across the population demographic from children to elderly people, who are most at risk. Certainly those suffering from asthma will notice the effects more greatly if living in inner-city areas, industrial areas, or even near major roads. Asthma is already the fourth biggest killer in the UK, after heart diseases and cancers, and currently affects more than three point four million people. In the past, severe pollution in London during 1952, combined with low winds and high-pressure air, took more than four thousand lives, and another seven hundred died in 1962, in what were called the ‘Dark Years’ because of the dense, dark polluted air. 
Air pollution is also causing devastation for the environment; much of it is caused by man-made gases like sulphur dioxide from electric plants burning fossil fuels. In the UK, industries and utilities that use tall smokestacks as a means of removing air pollutants only boost them higher into the atmosphere, reducing the concentration only at their own site. These pollutants are often transported over the North Sea and produce adverse effects in western Scandinavia, where sulphur dioxide and nitrogen oxide from the UK and central Europe generate acid rain, especially in Norway and Sweden. The pH level, or relative acidity, of many Scandinavian freshwater lakes has been altered dramatically by acid rain, causing the destruction of entire fish populations. In the UK, acid rain formed by sulphur dioxide emissions has led to acidic erosion of limestone in north-western Scotland and marble in northern England. In 1998, the London Metropolitan Police launched the ‘Emissions Controlled Reduction’ scheme, whereby traffic police would monitor the amount of pollutants released into the air by vehicle exhausts. The plan was for traffic police to stop vehicles at random on roads leading into the City of London; the officer would then measure the amount of air pollutants being released using a CO2 reader fitted to the vehicle's exhaust. If the exhaust exceeded the legal amount (based on micrograms of pollutants), the driver would be fined around twenty-five pounds. The scheme proved unpopular with drivers, especially those driving to work, and did little to improve the city's air quality. In Edinburgh, the main cause of bad air quality was the vast number of vehicles passing through the city centre from west to east. In 1990, the Edinburgh council developed the city by-pass at a cost of nearly seventy-five million pounds. 
The by-pass was ringed around the outskirts of the city; its main aim was to limit the number of vehicles going through the city centre by diverting them onto the by-pass so they could reach their destinations without crossing the centre. This relieved much of the congestion within the city but did very little to solve the city's overall air quality problem. To further decrease the number of vehicles on the roads, the government promoted public transport. Over two hundred million pounds was devoted to developing the country's public transport network, much of which went toward more bus lanes in the city of London, which increased the pace of bus services. Gas- and electric-powered buses were introduced in Birmingham to decrease air pollutant emissions around the city centre. Because children and the elderly are most at risk of chronic diseases such as asthma, major diversion roads were built to divert vehicles away from residential areas, schools and elderly care institutions. In some councils, trees were planted along the sides of roads to reduce carbon monoxide levels. Other ways of improving air quality included restrictions on the amounts of air pollutants released into the atmosphere by industries; tough regulations were put in place whereby if the air quality dropped below a certain level around an industrial area, a heavy penalty would be levied against the offenders. © Copyright 2000, Andrew Wan.
<urn:uuid:ea6c54fe-1f6e-4a4c-bcb5-4f4c9e0fb6de>
CC-MAIN-2013-20
http://everything2.com/user/KS/writeups/air+pollution
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.948933
1,097
3.25
3
[ "climate" ]
{ "climate": [ "co2" ], "nature": [] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
The Economics of Ecosystems and Biodiversity: Ecological and Economic Foundations Human well-being relies critically on ecosystem services provided by nature. Examples include water and air quality regulation, nutrient cycling and decomposition, plant pollination and flood control, all of which are dependent on biodiversity. They are predominantly public goods with limited or no markets and do not command any price in the conventional economic system, so their loss is often not detected and continues unaddressed and unabated. This in turn not only impacts human well-being, but also seriously undermines the sustainability of the economic system. It is against this background that TEEB: The Economics of Ecosystems and Biodiversity project was set up in 2007 and led by the United Nations Environment Programme to provide a comprehensive global assessment of economic aspects of these issues. The Economics of Ecosystems and Biodiversity, written by a team of international experts, represents the scientific state of the art, providing a comprehensive assessment of the fundamental ecological and economic principles of measuring and valuing ecosystem services and biodiversity, and showing how these can be mainstreamed into public policies. The Economics of Ecosystems and Biodiversity and subsequent TEEB outputs will provide the authoritative knowledge and guidance to drive forward the biodiversity conservation agenda for the next decade. 1. Integrating the Ecological and Economic Dimensions in Biodiversity and Ecosystem Service Valuation 2. Biodiversity, Ecosystems and Ecosystem Services 3. Measuring Biophysical Quantities and the Use of Indicators 4. The Socio-cultural Context of Ecosystem and Biodiversity Valuation 5. The Economics of Valuing Ecosystem Services and Biodiversity 6. Discounting, Ethics, and Options for Maintaining Biodiversity and Ecosystem Integrity 7. 
Lessons Learned and Linkages with National Policies Appendix 1: How the TEEB Framework Can be Applied: The Amazon Case Appendix 2: Matrix Tables for Wetland and Forest Ecosystems Appendix 3: Estimates of Monetary Values of Ecosystem Services "A landmark study on one of the most pressing problems facing society, balancing economic growth and ecological protection to achieve a sustainable future." - Simon Levin, Moffett Professor of Biology, Department of Ecology and Evolution Behaviour, Princeton University, USA "TEEB brings a rigorous economic focus to bear on the problems of ecosystem degradation and biodiversity loss, and on their impacts on human welfare. TEEB is a very timely and useful study not only of the economic and social dimensions of the problem, but also of a set of practical solutions which deserve the attention of policy-makers around the world." - Nicholas Stern, I.G. Patel Professor of Economics and Government at the London School of Economics and Chairman of the Grantham Research Institute on Climate Change and the Environment "The [TEEB] project should show us all how expensive the global destruction of the natural world has become and – it is hoped – persuade us to slow down.' The Guardian 'Biodiversity is the living fabric of this planet – the quantum and the variability of all its ecosystems, species, and genes. And yet, modern economies remain largely blind to the huge value of the abundance and diversity of this web of life, and the crucial and valuable roles it plays in human health, nutrition, habitation and indeed in the health and functioning of our economies. Humanity has instead fabricated the illusion that somehow we can get by without biodiversity, or that it is somehow peripheral to our contemporary world. The truth is we need it more than ever on a planet of six billion heading to over nine billion people by 2050. 
This volume of 'TEEB' explores the challenges involved in addressing the economic invisibility of biodiversity, and organises the science and economics in a way decision makers would find it hard to ignore." - Achim Steiner, Executive Director, United Nations Environment Programme This volume is an output of TEEB: The Economics of Ecosystems and Biodiversity study and has been edited by Pushpam Kumar, Reader in Environmental Economics, University of Liverpool, UK. TEEB is hosted by the United Nations Environment Programme (UNEP) and supported by the European Commission, the German Federal Ministry for the Environment (BMU) and the UK Department for Environment, Food and Rural Affairs (DEFRA), recently joined by Norway's Ministry for Foreign Affairs, The Netherlands' Ministry of Housing (VROM), the UK Department for International Development (DFID) and the Swedish International Development Cooperation Agency (SIDA). The study leader is Pavan Sukhdev, who is also Special Adviser – Green Economy Initiative, UNEP.
<urn:uuid:906f7240-4b78-478d-9b89-4b845237d4f3>
CC-MAIN-2013-20
http://www.nhbs.com/the_economics_of_ecosystems_and_biodiversity_tefno_176729.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.898484
966
3.21875
3
[ "climate", "nature" ]
{ "climate": [ "climate change" ], "nature": [ "biodiversity", "biodiversity loss", "conservation", "ecological", "ecosystem", "ecosystem integrity", "ecosystem services", "ecosystems", "wetland" ] }
{ "strong": 8, "weak": 2, "total": 10, "decision": "accepted_strong" }
Scaevola taccada is a dense, spreading shrub that generally grows up to 3 meters in height. The light green leaves are somewhat succulent with a waxy covering and are alternately arranged along the stem. The blades are elongated and rounded at the tips, 5 to 20 cm long and 5 to 7 cm wide, and the edges are often curled downward. The flowers are white or cream colored, often with purple streaks, 8 to 12 mm long, and have a pleasant fragrance. They have an irregular shape, with all five petals on one side of the flower, making it appear to have been torn in half. The flowers grow in small clusters from the leaf axils near the ends of the stems. The fruits of Scaevola taccada are fleshy berries. They are white, oblong, and about 1 cm long. The seeds are beige, corky and ridged. The inside of the fruit is spongy or corky and the fruits are buoyant. They can float for months in the ocean and still germinate after having been in salt water for up to a year. One study showed that the seeds germinated best after 250 days in salt water. (National Tropical Botanical Garden (NTBG). 1994. Naupaka. In Native Hawaiian plant information sheets. Lawai, Kauai: Hawaii Plant Conservation Center. National Tropical Botanical Garden. Unpublished internal papers.) (Rauch, Fred D., Heidi L. Bornhorst, and David L. Hensley. 1997. Beach Naupaka, Ornamentals and Flowers.) (Wagner, Warren L., Darrel R. Herbst, and S. H. Sohmer. 1990. Manual of the flowering plants of Hawai'i.) (Bornhorst, Heidi L. 1996. Growing native Hawaiian plants: a how-to guide for the gardener.)
<urn:uuid:4847de2d-8bad-482a-88bd-b2afc875fbbd>
CC-MAIN-2013-20
http://www.ntbg.org/plants/plant_details.php?rid=821&plantid=10272
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.910244
395
3.109375
3
[ "nature" ]
{ "climate": [], "nature": [ "conservation" ] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
When the National Wild Turkey Federation (NWTF) was founded in 1973, there were only 1.5 million wild turkeys across the U.S., Canada and Mexico. Today, it is estimated there are more than 5.6 million wild turkeys. In Utah, wild turkey restoration efforts continue to be the most aggressive in the nation. Over 2,800 wild turkeys have been relocated to suitable habitat areas in Utah since the winter of 1999. As a result, wild turkey permits have increased 20 percent for the spring 2002 season. However, this program will not be complete until over 200,000 wild turkeys roam the cottonwood river bottoms, pinyon/juniper, and ponderosa pine forests of the state. Whether you pursue wild turkeys as a hunter or simply enjoy watching these magnificent birds in their natural surroundings, the time to view wild turkeys in Utah has never been better. At the forefront of this dramatic return in Utah have been the Federation's volunteers, working side-by-side with the Utah Division of Wildlife Resources. Now, with most restoration efforts completed in the East, all eyes have shifted to the West, where the wild turkey continues to redefine its own idea of suitable habitat. While the release of a wild turkey into western habitat remains one of the Federation's most enduring symbols, it is just one brick in a foundation of good works that are impacting people's lives and the environment in many positive ways. Since 1977, the NWTF has spent over 144 million dollars on over 16,000 projects nationwide. The Federation helps fund transplants, research projects, habitat acquisition, education, and the equipment needed to successfully accomplish these tasks. Through the Federation's regional habitat programs, volunteers have helped improve hundreds of thousands of acres by planting trees, crops, winter food sources and grasses that provide food and shelter not only for the wild turkey, but for many other species of wildlife as well. 
Also improved in many areas, particularly in the West, has been water quality. Projects occurring right here in southeastern Utah include a San Rafael Desert guzzler, Knolls Ranch habitat improvement, and numerous other projects in the La Sal Mountains, Blue Mountains and Book Cliffs areas. This month the Price River chapter of the NWTF will be hosting its annual Wild Turkey Banquet on January 26. For more information, please call (435) 259-9453.
<urn:uuid:75ff55fb-1d04-482b-9c4b-4e8cfec3acf0>
CC-MAIN-2013-20
http://www.sunadvocate.com/print.php?tier=1&article_id=149&poll=269&vote=results
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708766848/warc/CC-MAIN-20130516125246-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.943449
490
2.53125
3
[ "nature" ]
{ "climate": [], "nature": [ "habitat", "restoration" ] }
{ "strong": 2, "weak": 0, "total": 2, "decision": "accepted_strong" }
DENVER – Put on your poodle skirts and tune in Elvis on the transistor radio, because it’s starting to look a lot like the 1950s. Unfortunately, this won’t be the nostalgic ’50s of big cars and pop music. The 1950s that could be on the way to Colorado is the decade of drought. So says Brian Bledsoe, a Colorado Springs meteorologist who studies the history of ocean currents and uses what he learns to make long-term weather forecasts. “I think we’re reliving the ’50s, bottom line,” Bledsoe said Friday morning at the annual meeting of the Colorado Water Congress. Bledsoe studies the famous El Niño and La Niña ocean currents. But he also looks at other, less well-known cycles, including long-term temperature cycles in the oceans. In the 1950s, water in the Pacific Ocean was colder than normal, but it was warmer than usual in the Atlantic. That combination caused a drought in Colorado that was just as bad as the Dust Bowl of the 1930s. The ocean currents slipped back into their 1950s pattern in the last five years, Bledsoe said. The cycles can last a decade or more, meaning bad news for farmers, ranchers, skiers and forest residents. “Drought feeds on drought. The longer it goes, the harder it is to break,” Bledsoe said. The outlook is worst for Eastern Colorado, where Bledsoe grew up and his parents still own a ranch. They recently had to sell half their herd when their pasture couldn’t provide enough feed. “They’ve spent the last 15 years grooming that herd for organic beef stock,” he said. Bledsoe looks for monsoon rains to return to the Four Corners and Western Slope in July. But there’s still a danger in the mountains in the summer. “Initially, dry lightning could be a concern, so obviously, the fire season is looking not so great right now,” he said. Weather data showed the last year’s conditions were extreme. Nolan Doesken, Colorado’s state climatologist, said the summer of 2012 was the hottest on record in Colorado. 
And it was the fifth-driest winter since record-keeping began more than 100 years ago. Despite recent storms in the San Juan Mountains, this winter hasn’t been much better. “We’ve had a wimpy winter so far,” Doesken said. “The past week has been a good week for Colorado precipitation.” However, the next week’s forecast shows dryness returning to much of the state. Reservoir levels are higher than they were in 2002 – the driest year since Coloradans started keeping track of moisture – but the state is entering 2013 with reservoirs that were depleted last year. “You don’t want to start a year at this level if you’re about to head into another drought,” Doesken said. It was hard to find good news in Friday morning’s presentations, but Bledsoe is happy that technology helps forecasters understand the weather better than they did during past droughts. That allows people to plan for what’s on the way. “I’m a glass-half-full kind of guy,” he said.
<urn:uuid:6b5ff0a8-5351-4289-bb86-d7195a7837dc>
CC-MAIN-2013-20
http://durangoherald.com/article/20130201/NEWS01/130209956/0/20120510/Drought-is-making-itself-at-home
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.964422
739
2.640625
3
[ "climate" ]
{ "climate": [ "drought", "el niño", "monsoon" ], "nature": [] }
{ "strong": 3, "weak": 0, "total": 3, "decision": "accepted_strong" }
The “presidi” translates as “garrisons” (from the French word, “to equip”), as protectors of traditional food production practices Monday, March 23, 2009 This past year, I have had rewarding opportunities to observe traditional food cultures in varied regions of the world. These are: Athabascan Indian country in the interior of Alaska (the traditional Tanana Chiefs Conference tribal lands) in July, 2008 (for more, read below); Swahili coastal tribes in the area of Munje village (population about 300), near Msambweni, close to the Tanzania border, in December, 2008-January, 2009 (for more, read below); and the Laikipia region of Kenya (January, 2009), a German-speaking canton of Switzerland (March, 2009), and the Piemonte-Toscana region of northern/central Italy (images only, February-March, 2009). In Fort Yukon, Alaska, salmon is a mainstay of the diet. Yet, among the Athabascan Indians, threats to subsistence foods and stresses on household economics abound. These include, in particular, high prices for external energy sources (as of July, 2008, almost $8 for a gallon of gasoline and $6.50 for a gallon of diesel, which is essential for home heating), as well as low Chinook salmon runs and low moose numbers. Additional resource management issues pose threats to sustaining village life – for example, stream bank erosion along the Yukon River, as well as uneven management in the Yukon Flats National Wildlife Refuge. People are worried about ever-rising prices for fuels and store-bought staples, and fewer and fewer sources of wage income. The result? Villagers are moving out from outlying areas into “hub” communities like Fort Yukon -- or, to take another example, Bethel in Southwest Alaska -- even when offered additional subsidies, such as for home heating. But, in reality, “hubs” often offer neither much employment nor relief from high prices. 
In Munje village in Kenya, the Digo, a Bantu-speaking, mostly Islamic tribe in the southern coastal area of Kenya, enjoy the possibilities of a wide variety of fruits, vegetables, and fish/oils. Breakfast in the village typically consists of mandazi (a fried bread similar to a doughnut), and tea with sugar. Lunch and dinner are typically ugali and samaki (fish), maybe with some dried cassava or chickpeas. On individual shambas (small farms), tomatoes, cassava, maize, cowpeas, bananas, mangos, and coconut are typically grown. Ugali is consumed every day, as are cassava, beans, oil, fish -- and rice, coconut, and chicken, depending on availability. Even with their own crops, villagers today want very much to enter the market economy and will sell products from their shambas to buy staples and the flour needed to make mandazis, which they in turn sell. Sales of mandazis (and mango and coconut, to a lesser extent) bring in some cash for villagers. A treasured food is, in fact, the coconut. This set of pictures shows how coconut is used in the village. True, coconut oil is now reserved for frying mandazi, but it is also used as a hair conditioner, and the coconut meat is eaten between meals. I noted also that dental hygiene and health were good in the village. Perhaps the coconut and fish oils influence this (as per the work of Dr. Weston A. Price). Photos L-R: Using a traditional conical basket (kikatu), coconut milk is pressed from the grated meat; Straining coconut milk from the grated meat, which is then heated to make oil; Common breakfast food (and the main source of cash income), the mandazi, is still cooked in coconut oil. Note: All photos were taken by G. Berardi Thursday, February 19, 2009 Despite maize in the fields, it is widely known that farmers are hoarding stocks in many districts. Farmers are refusing the NCPB/government price of Sh1,950 per 90-kg bag. 
They are waiting to be offered at least as much money as was being paid for imports (Bii, 2009b). “The country will continue to experience food shortages unless the Government addresses the high cost of farm inputs to motivate farmers to increase production,” said Mr. Jonathan Bii of Uasin Gishu (Bartoo & Lucheli, 2009; Bii, 2009a, 2009b; Bungee, 2009). Pride and politics, racism and corruption are to blame for food deficits (Kihara & Marete, 2009; KNA, 2009; Muluka, 2009; Siele, 2009). Clearly, what are needed in Kenya are food system planning, disaster management planning, and protection and development of agricultural and rural economies. Click here for the full text. Photos taken by G. Berardi Cabbage, an imported food (originally), and susceptible to much pest damage. Camps still remain for Kenya’s Internally Displaced Persons, whose migrations were forced by the post-election violence. Food security is poor. A lack of sustained recent short rains has resulted in failed maize harvests. Friday, January 16, 2009 Today I went to a lunchtime discussion of sustainability. This concept promotes development with an equitable eye to the triple bottom line - financial, social, and ecological costs. We discussed how it seemed relatively easier to talk about the connections between financial and ecological costs than between social costs and the others. Sustainable development often comes down to "green" designs that consider environmental impacts, or critiques of the capitalist model of financing. As I thought about sustainable development, or sustainable community management if you are a bit queasy about the feasibility of continuous expansion, I considered its corollaries in the field of disaster risk reduction. It struck me again that it is somewhat easier to focus on some components of the triple bottom line in relation to disasters. 
The vulnerability approach to disasters has rightly brought into focus the fact that not all people are equally exposed to or impacted by disasters. Rather, it is often the poor or socially marginalized who are most at risk and least able to recover. This approach certainly brings into focus the social aspects of disasters. The disaster trap theory, likewise, brings into focus the financial bottom line. This perspective is most often discussed in international development and disaster reduction circles. It argues that disasters destroy development gains and cause communities to de-develop unless both disaster reduction and development occur in tandem. Building a cheaper, non-earthquake-resistant school in an earthquake zone may make short-term financial sense. However, over the long term, this approach is likely to result in loss of physical infrastructure, human life, and learning opportunities when an earthquake does occur. What seems least developed to me, though I would enjoy being rebutted, is the ecological bottom line of disasters. Perhaps it is an oxymoron to discuss the ecological costs of disasters, given that many disasters are triggered by natural ecological processes like cyclones, forest fires, and floods. It might also be an oxymoron simply because a natural hazard disaster is really an ecological event viewed from an almost exclusively human perspective. It's not a disaster if it doesn't destroy human lives and human infrastructure. But the lunchtime discussion made me wonder if there wasn't something of an ecological bottom line to disasters in there somewhere. Perhaps it is in the difference between an ecological process unfolding in a landscape heavily modified by humans and one unfolding in a landscape lightly modified. Is a forest fire in a heavily managed forest different from one in an unmanaged forest? Certainly logging can heighten the impacts of heavy rains by inducing landslides, resulting in a landscape heavily rather than lightly impacted by the rains. 
Similar processes might also be true in the case of heavily managed floodplains. Flooding is concentrated and increased in areas outside of levee systems. What does that mean for the ecology of these locations? Does a marsh manage just as well under low as under high flooding? My guess would be no. And of course, there is the big, looming disaster of climate change. This is a human-induced change that may prove quite disastrous to many an ecological system, everything from our pine forests here to arctic wildlife and tropical coral reefs. Perhaps we disaster researchers need to also consider a triple bottom line when making arguments for the benefits of disaster risk reduction. Tuesday, January 13, 2009 This past week the Northwest experienced a severe barrage of back-to-back weather systems. Everyone seemed to be affected. Folks were re-routed on detours, got soaked, slipped on ice, or had to spend money to stay a little warmer. In Whatcom and Skagit Counties, hundreds to thousands of people are currently in the process of recovering and cleaning up after the floods. These people live in the rural areas throughout the county; fewer people know about their devastation, and they have greater vulnerability to flood hazards. Luckily, there are local agencies and non-profits who are ready at a moment's notice to help anyone in need. The primary organization that came to the aid of the flood victims was the American Red Cross. Last week I began interning and volunteering with one of these non-profits, the Mt. Baker American Red Cross (ARC) Chapter. While I am still in the process of getting screened and officially trained, I received first-hand experience and saw how important this organization is to the community. With the flood waters rising throughout the week, people were flooded out of their homes and rescued from the overflowing rivers and creeks. As the need for help increased, hundreds of ARC volunteers were called to service. 
Throughout the floods, several shelters have been opened to accommodate the needs of these flood victims. On Saturday I was asked to help staff one of these shelters overnight in Ferndale. While I talked with parents and children, I became more aware of the stark reality of how these people have to recover from having all their possessions covered in sewage and mud and damaged by flood waters. In the meantime, these flood victims have all their privacy exposed to others in a public shelter, while they work to find stability in the middle of all the trauma of these events. As I sat talking and playing with the children, another thought struck me. Children are young and resilient, but it must be very difficult when they connect with a volunteer and then lose that connection soon after. Sharing a shelter with the folks over the weekend showed a degree of reality and humanity in the situation that the news coverage never could. I posted this bit about my volunteer experience because it made me realize something about my education and degree track in disaster reduction and emergency planning. We look at ways to create a more sustainable community, and we need to remember that community service is an important part of creating this ideal. Underlying sustainable development is the triple bottom line (social, economic, and environmental). Volunteers and non-profits are a major part of the social line of sustainability. Organizations like the American Red Cross only exist because of volunteers. So embrace President-elect Obama's call for a culture of civil service this coming week and make a commitment to the organization of your choice with your actions or even your pocketbook. Know that sustainable development cannot exist without social responsibility. Thursday, January 8, 2009 It's been two days now that schools have been closed in Whatcom County, not for snow, but for rain and flooding. 
This unusual event coincides with record flooding throughout Western Washington, just a year after record flooding closed I-5 for three days and Lewis County businesses experienced what they then called an unprecedented 500-year flood. I guess not. There are many strange things about flood risk notation, and the idea of a 500-year flood often trips people up. People often believe a flood of that size will happen only once in 500 years. On a probabilistic level, this is inaccurate. A 500-year flood simply has a 0.2% probability of happening each year. A more useful analogy might be to tell people they are rolling a 500-sided die every year and hoping that it doesn't come up with a 1. Next year they'll be forced to roll again. But this focus on misunderstandings of probability often hides an even larger societal misunderstanding. Flood risk changes when we change the environment in which it occurs. If a flood map tells you that you are not in the floodplain, better check the date of the map. Most maps are utterly out of date, and many vastly underestimate present flood risk. There are several reasons this happens. Urban development, especially development with a lot of parking lots and buildings that don't let water seep into the ground, will cause rainwater to move quickly into rivers rather than seep into the ground and release slowly. Developers might counter that they are required to create runoff catchment wetlands when they do build. They do, but these requirements may very well be based on outdated data on flood risk. Thus, each new development never fully compensates for its runoff, a small problem for each site but a mammoth problem when compounded downstream. Deforesting can have the same effect, with the added potential for house-crushing and river-clogging mudslides. Timber harvesting is certainly an important industry in our neck of the woods. 
Not only is commercial logging an important source of jobs for many rural and small towns, logging on state Department of Natural Resources land is the major source of funding for K-12 education. Yet commercial logging, like other industries, suffers from a problem of cost externalization. When massive mudslides occurred during last year's storm, Weyerhaeuser complained that it wasn't its logging practices, but an unprecedented, out-of-the-blue, 500-year storm that caused them. While it is doubtful the slides would have occurred on uncut land, that isn't the only fallacy. When the slides did occur, the costs of repairing roads, treatment plants, and bridges went to the county and often were passed on to the nation's taxpayers through state and federal recovery grants. Thus, what should have been paid by Weyerhaeuser, 500-year probability or not, was paid by someone else. Finally, there is local government. Various folks within local governments set regulations for zoning, deciding what will be built and where. Here is the real crux of the problem. Local government also gets an increase in revenue in the form of property, sales, and business income taxes. Suppress the updating of floodplain maps, and you get a short-term profit and, often, a steady supply of happy voters. You might think these local governments will have to pay when the next big flood comes, but often that can be avoided. Certainly, they must comply with federal regulations on floodplain management to be part of the National Flood Insurance Program, but that program has significant leeway and little monitoring. Like the commercial loggers, disaster-stricken local governments can often push recovery costs off to individual homeowners through the FEMA homeowner's assistance program, and off to state and federal agencies by receiving disaster recovery and community development grants and loans. 
Certainly, some communities are so regularly devastated, and have so few resources, that disasters simply knock them down before they can even stand up again. But others have found loopholes and can profit by continuing to use old flood maps and failing to aggressively control floodplain development. What is it going to take to really change this system and make it unprofitable to profit from bad land use management? Here's a good in-depth article on last year's landslides in Lewis County: http://seattletimes.nwsource.com/html/localnews/2008048848_logging13m.html An interesting article on the failure of best management practices in developing catchment basins can be found here: Hur, J. et al. (2008) Does current management of storm water runoff adequately protect water resources in developing catchments? Journal of Soil and Water Conservation, 63 (2), pp. 77-90. Monday, December 29, 2008 It's difficult to imagine a more colorful book celebrating locally-grown and locally-marketed foods than David Westerlund's Simone Goes to the Market: A Children's Book of Colors Connecting Face and Food. This book is aimed at families and the foods they eat. Who doesn't want to know where their food is coming from – the terroir, the kind of microclimate it's produced in, as well as who's selling it? Gretchen sells her pole beans (purple), Maria her Serrano peppers (green), Dana and Matt sell their freshly-roasted coffee (black), Katie her carrots (orange), a blue poem from Matthew, brown potatoes from Roslyn, yellow patty pan squash from Jed, red tomatoes (soft and ripe) from Diana, and golden honey from Bill (and his bees). This is a book perfect for children of any age who want to connect to and with the food systems that sustain community. Order from [email protected].
<urn:uuid:e139d24e-7144-4cf8-866c-6066d64a435f>
CC-MAIN-2013-20
http://igcr.blogspot.com/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.962803
3,622
2.875
3
[ "climate", "nature" ]
{ "climate": [ "climate change", "disaster risk reduction", "flood risk", "food security" ], "nature": [ "conservation", "ecological", "wetlands" ] }
{ "strong": 4, "weak": 3, "total": 7, "decision": "accepted_strong" }
Identifying time lags in the restoration of grassland butterfly communities: a multi-site assessment Woodcock, B.A.; Bullock, J.M.; Mortimer, S.R.; Brereton, T.; Redhead, J.W.; Thomas, J.A.; Pywell, R.F. 2012 Identifying time lags in the restoration of grassland butterfly communities: a multi-site assessment. Biological Conservation, 155, 50-58. 10.1016/j.biocon.2012.05.013. Full text not available from this repository. Although grasslands are crucial habitats for European butterflies, large-scale declines in quality and area have devastated many species. Grassland restoration can contribute to the recovery of butterfly populations, although there is a paucity of information on the long-term effects of management. Using eight UK data sets (9–21 years), we investigate changes in restoration success for (1) arable reversion sites, where grassland was established on bare ground using seed mixtures, and (2) grassland enhancement sites, where degraded grasslands are restored by scrub removal followed by the re-instigation of cutting/grazing. We also assessed the importance of individual butterfly traits and ecological characteristics in determining colonisation times. Consistent increases in restoration success over time were seen for arable reversion sites, with the most rapid rates of increase in restoration success seen over the first 10 years. For grassland enhancement there were no consistent increases in restoration success over time. Butterfly colonisation times were fastest for species with widespread host plants or whose host plants established well during restoration. Low-mobility butterfly species took longer to colonise. We show that arable reversion is an effective tool for the management of butterfly communities. We suggest that because restoration takes time to achieve, its use as a mitigation tool against future environmental change (i.e. by decreasing isolation in fragmented landscapes) needs to take such time lags into account. 
Programmes: CEH Topics & Objectives 2009 onwards > Biodiversity. CEH Sections: CEH fellows. Additional Keywords: arable reversion, calcareous, grassland enhancement, mesotrophic, functional traits, recreation. NORA Subject Terms: Ecology and Environment. Date made live: 12 Sep 2012 15:38.
<urn:uuid:4cc9b21b-9ac9-4733-b1e7-35a758cedd90>
CC-MAIN-2013-20
http://nora.nerc.ac.uk/19510/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.885742
497
2.84375
3
[ "nature" ]
{ "climate": [], "nature": [ "biodiversity", "conservation", "ecological", "restoration" ] }
{ "strong": 3, "weak": 1, "total": 4, "decision": "accepted_strong" }
Buried inside Robert Bryce’s relatively new book entitled Power Hungry is a call to “aggressively pursue taxes or caps on the emissions of neurotoxins, particularly those that come from burning coal” to generate electricity, such as mercury and lead. This is notable not only because Bryce agrees with many environmental and human health experts, but also because the book credibly debunks the move to tax or cap carbon dioxide emissions from both technical and political perspectives. The word “neurotoxic” literally translates as “nerve poison”. Broadly described, a neurotoxicant is any chemical substance which adversely acts on the structure or function of the human nervous system. As its subtitle signals, Power Hungry also declares policies subsidizing renewable sources of electricity, biofuels and electric vehicles too costly and impractical to make a significant difference in making the U.S. power and transportation systems more sustainable. So why take aim at mercury and lead, which is certain to drive up the cost of coal-fired electricity just as a carbon cap or tax would? Because, Bryce asserts, “arguing against heavy metal contaminants with known neurotoxicity will be far easier than arguing against carbon dioxide emissions. Cutting the output of mercury and the other heavy metals may, in the long run, turn out to have far greater benefits for the environmental and human health.” Bryce draws a parallel to the U.S. government ordering oil refiners to remove lead from gasoline starting in the 1970s. In the book, which has received predominantly good reviews on Amazon.com, Bryce makes some valid points about the carbon density of our energy sources. Among his overarching messages is that the carbon density of the world’s major economies is actually declining (see graph below). Not to be missed: his attack on carbon sequestration, pp. 160-165. His case about the threat of neurotoxins begins on p. 167. 
There’s a lot more to the challenge of reducing America’s reliance on coal-fired power plants than this. But considering the failure of the U.S. Congress to agree on a carbon tax or cap, his idea has serious merit and deserves a broad discussion, especially as Congress reassesses its budget priorities. These include billions of dollars of tax breaks and incentives for oil and other fossil fuels.
<urn:uuid:ed7842f6-485f-401b-96c7-6ca3e6045411>
CC-MAIN-2013-20
http://www.theenergyfix.com/2011/05/07/tax-toxins-not-carbon-dioxide-from-coal-fired-power-plants/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.955607
481
2.671875
3
[ "climate" ]
{ "climate": [ "carbon dioxide", "carbon sequestration" ], "nature": [] }
{ "strong": 2, "weak": 0, "total": 2, "decision": "accepted_strong" }
Deaths in Moscow have doubled to an average of 700 people a day as the Russian capital is engulfed by poisonous smog from wildfires and a sweltering heat wave, a top health official said today, according to the Associated Press. The Russian newspaper Pravda reported: “Moscow is suffocating. Thick toxic smog has been covering the sky above the city for days. The sun in Moscow looks like the moon during the day: it’s not that bright and yellow, but pale and orange with misty outlines against the smoky sky. Muscovites have to experience both the smog and sweltering heat at once.” “Russia has recently seen the longest unprecedented heat wave for at least one thousand years,” said the head of the Russian Meteorological Center, according to the news site Ria Novosti. Various news sites report that foreign embassies have reduced activities or shut down, with many staff leaving Moscow to escape the toxic atmosphere. Russian heatwave: This NASA map released today shows areas of Russia experiencing above-average temperatures this summer (orange and red). The map was released on NASA’s Earth Observatory website. NASA Earth Observatory image by Jesse Allen, based on MODIS land surface temperature data available through the NASA Earth Observations (NEO) website. Caption by Michon Scott. According to NASA: In the summer of 2010, the Russian Federation had to contend with multiple natural hazards: drought in the southern part of the country, and raging fires in western Russia and eastern Siberia. The events all occurred against the backdrop of unusual warmth. Bloomberg reported that temperatures in parts of the country soared to 42 degrees Celsius (108 degrees Fahrenheit), and the Wall Street Journal reported that fire- and drought-inducing heat was expected to continue until at least August 12. This map shows temperature anomalies for the Russian Federation from July 20-27, 2010, compared to temperatures for the same dates from 2000 to 2008. 
The anomalies are based on land surface temperatures observed by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite. Areas with above-average temperatures appear in red and orange, and areas with below-average temperatures appear in shades of blue. Oceans and lakes appear in gray. Not all parts of the Russian Federation experienced unusual warmth on July 20-27, 2010. A large expanse of northern central Russia, for instance, exhibits below-average temperatures. Areas of atypical warmth, however, predominate in the east and west. Orange- and red-tinged areas extend from eastern Siberia toward the southwest, but the most obvious area of unusual warmth occurs north and northwest of the Caspian Sea. These warm areas in eastern and western Russia continue a pattern noticeable earlier in July, and correspond to areas of intense drought and wildfire activity. Bloomberg reported that 558 active fires covering 179,596 hectares (693 square miles) were burning across the Russian Federation as of August 6, 2010. Voice of America reported that smoke from forest fires around the Russian capital forced flight restrictions at Moscow airports on August 6, just as health officials warned Moscow residents to take precautions against smoke inhalation. Posted by David Braun Earlier related post: Russia burns in hottest summer on record (July 28, 2010) Talk about tough: These guys throw themselves out of 50-year-old aircraft into burning Siberian forests. (National Geographic Magazine feature, February 2008) Photo by Mark Thiessen 
<urn:uuid:10b103c1-284b-41c5-8dc9-bc9d1b7577ea>
CC-MAIN-2013-20
http://newswatch.nationalgeographic.com/2010/08/09/russia_chokes_as_fires_rage_worst_summer_ever/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.933289
827
2.84375
3
[ "climate" ]
{ "climate": [ "drought", "heatwave" ], "nature": [] }
{ "strong": 2, "weak": 0, "total": 2, "decision": "accepted_strong" }
Habitat Affects Escape Behavior of Birds It seems birds of the same species raised in diverse settings behave differently in the face of a threat. Birds raised in an urban environment react differently than country-bred birds when faced with a predator. A study undertaken by Juan Diego Ibanez-Alamo, a researcher at the University of Granada (UGR), and Anders Pape Moller from Paris-Sud University highlights the fact that urbanization plays an influential role in a bird's survival strategies. They analyzed the escape techniques of 1,132 birds belonging to 15 species in different rural and urban areas. The study showed that city birds have changed their behavior to adapt to new threats like cats, which are their main enemies in an urban habitat, instead of their more traditional enemies in the countryside, such as the sparrow hawk. "When they are captured, city birds are less aggressive, they produce alarm calls more frequently, they remain more paralyzed when attacked by their predator and they lose more feathers than their countryside counterparts," explained Ibanez-Alamo. The researchers were surprised to see that urbanization was directly linked with these differences. This finding gives rise to the idea that escape strategies evolve alongside the expansion of cities. "It is crucial to discover how birds adapt to transformations in their habitat so that we can decrease their effects," said Ibanez-Alamo. "Predation change caused by city growth is serious," he noted. As the scientist indicates, tactics against their hunters are "crucial" so that birds can adapt to their new environment: "Birds should modify their behavior to be able to survive in cities because if not, they will become extinct at the mercy of urban growth." These results appear in the journal Animal Behaviour.
NINE BANDED ARMADILLO Photo Credit: U. S. Fish and Wildlife Service SCIENTIFIC NAME: Dasypus novemcinctus OTHER NAMES: Armadillo, Common Long-Nosed Armadillo DESCRIPTION: The nine-banded armadillo (Dasypus novemcinctus) cannot easily be confused with any other North American wild mammal. The armadillo’s body is covered with an armored carapace or shell. The carapace is a double layer of horn and bone, segmented into three main divisions: an anterior scapular shield covering the shoulder; a posterior pelvic shield covering the hip region; and a middle section composed of a series of bands connected by soft, infolded skin between the bands. The head and legs are covered with thick scales, and the tail is encased in a series of bony rings. Coloration of nine-banded armadillos is generally grayish brown, with yellowish-white scales along the side of the carapace. The armadillo has a long, pointed snout, small eyes, and large, cylindrical ears. The armadillo’s pointed snout, short, stout legs, and heavy claws are well suited for digging and burrowing. Armadillos have a limited number of vocalizations: a low, wheezy grunt associated with digging and rooting; a wheezy grunt uttered by recently captured individuals; an audible buzzing noise given when highly alarmed or fleeing; a pig-like squeal given by frightened individuals; and a weak purring given by young attempting to nurse from an unrelated female. Total length ranges from 24 to 31 inches and weights vary from 8 to 15 pounds. There are six subspecies of Dasypus novemcinctus in Central and South America, but only one subspecies, D. n. mexicanus, occurs in North America. 
DISTRIBUTION: Dasypus novemcinctus mexicanus’ original distribution was from the lower Rio Grande Valley between Mexico and Texas, southward through Mexico and Central America to northwestern Peru on the west side of the Andes, and all of South America to northern Argentina east of the Andes, including the islands of Grenada, Trinidad, Tobago, and Margarita. The range of the nine-banded armadillo has undergone rapid expansion into the southern United States since the late 1800s. The recent rapid expansion of the armadillo’s range was facilitated by a number of factors: reduction in the number of large carnivores; climatic and biotic changes; and accidental and deliberate relocations of animals to unoccupied areas. Armadillos now occur throughout the southern and southeastern U.S., as far north as Missouri, Kansas, Colorado, and Nebraska. These animals are common throughout most of Alabama, but less common in several northeastern counties. HABITAT: The armadillo is very adaptable and does well in most habitat types found in Alabama. They generally avoid or are scarce in very wet or very dry habitats. Habitat suitability likely depends more on the characteristics of the substrate or soils, rather than vegetation type due to the armadillo’s feeding and burrowing behavior. FEEDING HABITS: A major portion of the armadillo’s time spent outside its burrow is devoted to feeding. They typically start foraging as they emerge from their burrow and move at a slow pace following an often erratic course. Prey is apparently detected by smell, although sound also may play a role. Typical foraging behavior involves quickly probing with the nose and occasionally pausing to dig for prey. Armadillos are opportunistic feeders and consume a wide variety of food items. Invertebrates, primarily insects, make up roughly 90 percent of their diet. Small vertebrates and plant material make up the remainder of their diet. 
Researchers also have seen evidence of armadillos feeding on small reptiles and amphibians, the eggs of ground-nesting birds, and carrion. LIFE HISTORY AND ECOLOGY: Armadillos seem to exhibit a polygynous mating system, with most females paired with a single male and most males paired with more than one female. Den burrows have an enlarged nest chamber and are more complicated than burrows dug for other purposes. The nest is a bulky mass of dried plant debris crammed into the nest chamber without any obvious structure. Armadillos in areas with poorly drained soils will construct above-ground nests of dry plant material. Most breeding among armadillos occurs during the summer (June-August). The normal gestation period is 8 to 9 months, with most young born between February and May. The armadillo exhibits monozygotic polyembryony, in which a single fertilized egg normally gives rise to four separate embryos at the blastula stage of development. This results in a litter of four genetically identical offspring (clones). Dasypus is the only genus of vertebrates in which this reproductive phenomenon occurs. The offspring are precocial and begin accompanying the female outside of the burrow at about 2 to 3 months of age. By 3 to 4 months, the young are self-sufficient. Most males reach sexual maturity between 6 to 12 months of age, but females do not become sexually mature until they are 1 to 2 years old. REFERENCES: Author: Chris Cook, Wildlife Biologist, June 2005 Armstrong, J. Controlling Armadillo Damage in Alabama. ANR-773. Alabama Cooperative Extension System. 2 pp. Layne, J. N. 2003. Armadillo. Pages 75-97 in G. A. Feldhamer, B. C. Thompson, and J. A. Chapman, eds. Wild Mammals of North America: Biology, Management, and Conservation. Second edition. The Johns Hopkins University Press, Baltimore, MD and London, U.K. Nowak, R. M. 1999. Walker’s Mammals of the World, sixth edition, volume one. The Johns Hopkins University Press, Baltimore, MD and London, U.K. 903 pp. Outdoor Alabama Magazine Article, Nine-banded Armadillo Watchable Wildlife Article
Time to think big Did the designation of 2010 as the first-ever International Year of Biodiversity mean anything at all? Is it just a publicity stunt, with no engagement on the real, practical issues of conservation, asks Simon Stuart, Chair of IUCN’s Species Survival Commission. Eight years ago 183 of the world’s governments committed themselves “to achieve by 2010 a significant reduction of the current rate of biodiversity loss at the global, regional and national level as a contribution to poverty alleviation and to the benefit of all life on Earth”. This was hardly visionary—the focus was not on stopping extinctions or loss of key habitats, but simply on slowing their rate of loss—but it was, at least, the first time the nations of the world had pledged themselves to any form of concerted attempt to face up to the ongoing degradation of nature. Now the results of all the analyses of conservation progress since 2002 are coming in, and there is a unanimous finding: the world has spectacularly failed to meet the 2010 Biodiversity Target, as it is called. Instead, species extinctions, habitat loss and the degradation of ecosystems are all accelerating. To give a few examples: declines and extinctions of amphibians due to disease and habitat loss are getting worse; bleaching of coral reefs is growing; and large animals in South-East Asia are moving rapidly towards extinction, especially from over-hunting and degradation of habitats. This month the world’s governments will convene in Nagoya, Japan, for the Convention on Biological Diversity’s Conference of the Parties. Many of us hope for agreement there on new, much more ambitious biodiversity targets for the future. The first test of whether or not the 2010 International Year of Biodiversity means anything will be whether or not the international community can commit itself to a truly ambitious conservation agenda. The early signs are promising. 
Negotiating sessions around the world have produced 20 new draft targets for 2020. Collectively these are nearly as strong as many of us hoped, and certainly much stronger than the 2010 Biodiversity Target. They include: halving the loss and degradation of forests and other natural habitats; eliminating overfishing and destructive fishing practices; sustainably managing all areas under agriculture, aquaculture and forestry; bringing pollution from excess nutrients and other sources below critical ecosystem loads; controlling pathways introducing and establishing invasive alien species; managing multiple pressures on coral reefs and other vulnerable ecosystems affected by climate change and ocean acidification; effectively protecting at least 15 per cent of land and sea, including the areas of particular importance for biodiversity; and preventing the extinction of known threatened species. We now have to keep up the pressure to prevent these from becoming diluted. We at IUCN are pushing for urgent action to stop biodiversity loss once and for all. The well-being of the entire planet—and of people—depends on our committing to maintain healthy ecosystems and strong wildlife populations. We are therefore proposing, as a mission for 2020, “to have put in place by 2020 all the necessary policies and actions to prevent further biodiversity loss”. Examples include removing government subsidies which damage biodiversity (as many agricultural ones do), establishing new nature reserves in important areas for threatened species, requiring fisheries authorities to follow the advice of their scientists to ensure the sustainability of catches, and dramatically cutting carbon dioxide emissions worldwide to reduce the impacts of climate change and ocean acidification. If the world makes a commitment along these lines, then the 2010 International Year of Biodiversity will have been about more than platitudes. 
But it will still only be a start: the commitment needs to be implemented. We need to look for signs this year of a real change from governments and society over the priority accorded to biodiversity. One important sign will be the amount of funding that governments pledge this year for replenishing the Global Environment Facility (GEF), the world’s largest donor for biodiversity conservation in developing countries. Between 1991 and 2006, it provided approximately $2.2 billion in grants to support more than 750 biodiversity projects in 155 countries. If the GEF is replenished at much the same level as over the last decade we shall know that the governments are still in “business as usual” mode. But if it is doubled or tripled in size, then we shall know that they are starting to get serious. IUCN estimates that even a tripling of funding would still fall far short of what is needed to halt biodiversity loss. Some conservationists have suggested that developed countries need to contribute 0.2 per cent of gross national income in overseas biodiversity assistance to achieve this. That would work out at roughly $120 billion a year—though of course this would need to come through a number of sources, not just the GEF. It is tempting to think that this figure is unrealistically high, but it is small change compared to the expenditures governments have committed to defence and bank bailouts. It is time for the conservation movement to think big. We are addressing problems that are hugely important for the future of this planet and its people, and they will not be solved without a huge increase in funds.
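The funding arithmetic in this passage can be checked directly. The sketch below back-calculates the combined gross national income implied by the article's own figures; the GNI total itself is not stated in the text, and the variable names are illustrative.

```python
# Sanity check of the funding figures quoted above (a sketch; the combined
# GNI is back-calculated from the article's numbers, not stated in it).
share = 0.002                     # 0.2% of gross national income
target_aid = 120e9                # ~$120 billion per year (from the text)

implied_gni = target_aid / share  # combined developed-country GNI implied
print(f"Implied combined GNI: ${implied_gni / 1e12:.0f} trillion")

# GEF biodiversity grants 1991-2006: ~$2.2 billion over ~15 years
gef_annual = 2.2e9 / 15
print(f"GEF historical average: ~${gef_annual / 1e9:.2f} billion/year")
print(f"Target is roughly {target_aid / gef_annual:.0f}x the historical GEF rate")
```

The last line makes the op-ed's point concrete: even a tripled GEF would cover only a small fraction of the suggested assistance level, which is why the author stresses that the money would have to come through many sources.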
Following Oceana’s newly released report on the harmful impacts of illegal fishing, one of the questions that I as Oceana's Northeast representative was asked most often was, “Where is this happening?” The short answer: Illegal fishing happens everywhere, from the most distant waters near Antarctica to just off the U.S. coast. This week brought great news for shark populations that are dwindling both in U.S. waters and worldwide. Today, the Delaware House of Representatives introduced a bill prohibiting the possession, trade, sale and distribution of shark fins within the state. If passed, House Bill 41 would make Delaware the first East Coast state to pass a ban on the shark fin trade, following in the footsteps of Oregon, Washington, California, Hawaii and Illinois. Current federal law prohibits shark finning in U.S. waters, requiring that sharks be brought into port with their fins still attached. However, this law does not prohibit the sale and trade of processed fins that are imported into the country from other regions that could have weak or even nonexistent shark protections in place. This unsustainable catch is driven by the demand for shark fins, often used as an ingredient in shark fin soup, and kills millions of sharks every year. Delaware’s bill would close the loopholes that fuel the trade and demand for fins, and ensure that the state is not a gateway for shark products to enter into other U.S. state markets. Not only was there great news coming out of the U.S., international shark lovers have reason to celebrate as well. The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), voted this week to place stricter regulations on the trade of manta rays, three species of hammerheads, oceanic whitetip and porbeagle sharks, acknowledging that these species are in dire need of protection. When countries export these species, they are required to possess special permits that prove these species were harvested sustainably. 
This decision will greatly curb illegal overfishing and reduce the numbers of endangered sharks killed globally. History was made today in Bangkok, when Parties to CITES (the Convention on International Trade in Endangered Species of Wild Fauna and Flora) voted to protect five species of sharks and two species of manta rays. The seven protected species are: oceanic whitetip (Carcharhinus longimanus), porbeagle (Lamna nasus), scalloped hammerhead (Sphyrna lewini), great hammerhead (S. mokarran), smooth hammerhead (S. zygaena), oceanic manta ray (Manta birostris) and reef manta ray (M. alfredi). All seven species are considered threatened by international trade – the sharks for their fins, and the manta rays for their gills, which are used in Traditional Chinese Medicine. CITES protection is an important complement to fisheries management measures, which, for these species, have failed to safeguard their survival. The vote was to list the animals for protection under Appendix II, which does not entail a ban on trade, but instead means that trade must be regulated. Exporting countries are required to issue export permits, and can only do so if they can ensure that the animals have been legally caught, and that their trade is not detrimental to the species’ survival. All of the proposals received the two-thirds majority needed to be accepted – but the listing is not yet final. Decisions can be overturned with another vote during the final plenary session of the meeting, which wraps up on Thursday. This is what happened with porbeagle sharks in the 2010 CITES meeting in Qatar – an Appendix II listing approved by the Committee evaporated with another vote in plenary. As a result, at that meeting, none of the proposed shark species were granted protection. Now, three years later, we’re hopeful that the international community finally sees the importance of regulating the trade that puts these animals at risk. Keep your fingers crossed! Happy Friday, everyone. 
It's been a rough few weeks for the oceans at CITES, but now it's time to pick up the pieces. If CITES taught us anything, it's that the work of the ocean conservation community is more important than ever. This week in ocean news, …Rick at Malaria, Bed bugs, Sea Lice and Sunsets discussed one of the more shady aspects of CITES: the secret ballots, which were invoked for votes on bluefin tuna, sharks, polar bears, and deep water corals. …The Washington Post reported that Maryland is cracking down on watermen who catch oysters in protected sanctuaries or with banned equipment. Once a principal source of oysters, the Chesapeake now provides less than 5 percent of the annual U.S. harvest. …For the first time, scientists were able to use videos to observe octopuses’ behavioral responses. The result? The octopuses had no consistent reaction to one film -- in other words, they had no “personality.” Curiously, other cephalopods display consistent personalities for most of their lives. …The New York Times wondered if the 700,000 saltwater home aquariums in the United States and the associated trade in reef invertebrates are threatening real reef ecosystems. This is the ninth in a series of dispatches from the CITES meeting in Doha, Qatar. As Oceana marine scientist Elizabeth Griffin put it: “This meeting was a flop.” CITES has been a complete failure for the oceans. The one success -- the listing of the porbeagle shark under Appendix II -- was overturned yesterday in the plenary session. “It appears that money can buy you anything, just ask Japan,” said Dave Allison, senior campaign director. “Under the crushing weight of the vast sums of money gained by unmanaged trade and exploitation of endangered marine species by Japan, China, other major trading countries and the fishing industry, the very foundation of CITES is threatened with collapse.” Maybe next time -- if these species are still around to be protected. 
The failure of CITES means that Oceana’s work – and your support and activism – is more important than ever. You can start by supporting our campaign work to protect these creatures. Here's Oceana's Gaia Angelini on the conclusion of CITES: This is the eighth in a series of dispatches from the CITES conference in Doha, Qatar. More difficult news out of Doha today. While seven of the eight proposed shark species (including several species of hammerheads, oceanic whitetip and spiny dogfish) were not included in Appendix II, the one bright spot was for the porbeagle shark, which is threatened by widespread consumption in Europe. The porbeagle’s Appendix II listing is a huge improvement because it requires the use of export permits to ensure that the species are caught by a legal and sustainably managed fishery. And there is a slight chance that the other shark decisions could be reversed during the plenary session in the final two days. Here are Oceana scientists Elizabeth Griffin and Rebecca Greenberg reflecting on the shark decisions: This is the seventh in a series of posts from CITES. Check out the rest of the dispatches from Doha here. Eight shark species have been proposed for listing to Appendix II of CITES, including the oceanic whitetip, scalloped hammerhead, dusky, sandbar, smooth hammerhead, great hammerhead, porbeagle and spiny dogfish. Listing these species, which are threatened by shark finning, is necessary to ensure international trade does not drive these shark species to extinction. Here's Oceana's Ann Schroeer from our Brussels office with an optimistic outlook on the upcoming shark proposals at CITES. This is the latest in a series of posts from CITES. See the rest of the dispatches here. Over the weekend, CITES failed to include 31 species of red and pink coral in Appendix II, trade protections that were promised during the last CITES Conference more than two and a half years ago. 
These corals are harvested to meet the growing demand for jewelry and souvenirs. The unregulated and virtually unmanaged collection and trade of these species is driving them to extinction. Many of the corals are long-lived, reaching more than 100 years of age, and grow slowly, usually less than one millimeter in thickness per year. These colonies are fragile and extremely vulnerable to exploitation and destruction, and their biological characteristics severely limit their ability to recover. Oceana campaign director Dave Allison had this to say about the corals decision (first video), as well as the failure of CITES to protect marine species in general (second video). Happy Friday, ocean fans. It's almost spring, and a surfing alpaca exists in the world. Things are looking up. Before we get to the week's best marine tidbits, an important announcement: Oceana board member Ted Danson will be answering questions live on CNN.com on April 1, so send your ocean queries in, stat! Also, don't forget that today is the last day to take the Ocean IQ quiz for a chance to win prizes, including a trip with SEE Turtles. This week in ocean news, …Yes, CITES failed to deliver on bluefin tuna yesterday, but as Monterey Bay Aquarium’s Julie Packard pointed out, at least the conversation is changing. Bluefin is now in the same rhetorical realm as endangered land creatures such as tigers and elephants. …Deep Sea News wrote a requiem for a robot -- the Autonomous Benthic Explorer (ABE) that was lost at sea last week during a research expedition to the Chilean Subduction Zone. On a recent dive, ABE had detected evidence of hydrothermal vents. At the time of its loss, ABE had just begun a second dive to home in on a vent site and photograph it. This is the fifth in a series of dispatches from CITES. You can read the other dispatches here. Although there were repeated calls from delegates from the E.U., U.S. 
and Monaco to allow time for parties to meet and arrive at a compromise position, a Libyan delegate forced a peremptory vote on the E.U. proposal, which resulted in a 43 to 72 vote, with 14 abstaining. Campaign director Dave Allison called the defeat "a clear win by short-term economic interest over the long-term health of the ocean and the rebuilding of Atlantic bluefin tuna populations." The decision could spell the beginning of the end for the tigers of the sea. Here's Oceana's Maria Jose Cornax on the decision:
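The voting arithmetic behind these outcomes is simple to check. Under the CITES rules of procedure, the two-thirds threshold applies to Parties "present and voting", with abstentions excluded from the count. A minimal sketch (the helper name `cites_passes` is my own, not CITES terminology):

```python
def cites_passes(yes: int, no: int) -> bool:
    """Two-thirds majority of Parties 'present and voting'.
    Under CITES rules, abstentions do not count as votes cast."""
    votes_cast = yes + no
    return yes >= (2 / 3) * votes_cast

# The Atlantic bluefin tuna vote reported above: 43 for, 72 against, 14 abstaining.
# 43 of 115 votes cast is ~37% support, far short of two-thirds.
print(cites_passes(43, 72))   # False
```

This also explains why the shark and manta listings described earlier, which did reach two-thirds of votes cast, could still be overturned: the plenary session can re-run the vote, as happened with the porbeagle in 2010.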
Opportunities and Challenges in High Pressure Processing of Foods By Rastogi, N K; Raghavarao, K S M S; Balasubramaniam, V M; Niranjan, K; Knorr, D Consumers increasingly demand convenience foods of the highest quality in terms of natural flavor and taste, and which are free from additives and preservatives. This demand has triggered the need for the development of a number of nonthermal approaches to food processing, of which high-pressure technology has proven to be very valuable. A number of recent publications have demonstrated novel and diverse uses of this technology. Its novel features, which include destruction of microorganisms at room temperature or lower, have made the technology commercially attractive. Enzymes and even spore-forming bacteria can be inactivated by the application of pressure-thermal combinations. This review aims to identify the opportunities and challenges associated with this technology. In addition to discussing the effects of high pressure on food components, this review covers the combined effects of high pressure processing with: gamma irradiation, alternating current, ultrasound, and carbon dioxide or anti-microbial treatment. Further, the applications of this technology in various sectors (fruits and vegetables, dairy, and meat processing) have been dealt with extensively. The integration of high pressure with other mature processing operations such as blanching, dehydration, osmotic dehydration, rehydration, frying, freezing/thawing and solid-liquid extraction has been shown to open up new processing options. The key challenges identified include: heat transfer problems and resulting non-uniformity in processing, obtaining reliable and reproducible data for process validation, lack of detailed knowledge about the interaction between high pressure and a number of food constituents, and packaging and statutory issues. 
Keywords high pressure, food processing, non-thermal processing Consumers demand high quality and convenient products with natural flavor and taste, and greatly appreciate the fresh appearance of minimally processed food. Besides, they look for safe and natural products without additives such as preservatives and humectants. In order to harmonize all these demands without compromising the safety of the products, it is necessary to implement newer preservation technologies in the food industry. Although the fact that “high pressure kills microorganisms and preserves food” was discovered as far back as 1899, and has been used with success in the chemical, ceramic, carbon allotropy, steel/alloy, composite materials and plastic industries for decades, it was only in the late 1980s that its commercial benefits became available to the food processing industries. High pressure processing (HPP) is similar in concept to cold isostatic pressing of metals and ceramics, except that it demands much higher pressures, faster cycling, high capacity, and sanitation (Zimmerman and Bergman, 1993; Mertens and Deplace, 1993). Hite (1899) investigated the application of high pressure as a means of preserving milk, and later extended the study to preserve fruits and vegetables (Hite, Giddings, and Weakly, 1914). It then took almost eighty years for Japan to rediscover the application of high pressure in food processing. The use of this technology has come about so quickly that it took only three years for two Japanese companies to launch products which were processed using this technology. The ability of high pressure to inactivate microorganisms and spoilage-catalyzing enzymes, whilst retaining other quality attributes, has encouraged Japanese and American food companies to introduce high pressure processed foods in the market (Mermelstein, 1997; Hendrickx, Ludikhuyze, Broeck, and Weemaes, 1998). 
The first high pressure processed foods were introduced to the Japanese market in 1990 by Meidi-ya, who have been marketing a line of jams, jellies, and sauces packaged and processed without application of heat (Thakur and Nelson, 1998). Other products include fruit preparations, fruit juices, rice cakes, and raw squid in Japan; fruit juices, especially apple and orange juice, in France and Portugal; and guacamole and oysters in the USA (Hugas, Garcia, and Monfort, 2002). In addition to food preservation, high-pressure treatment can result in food products acquiring novel structure and texture, and hence can be used to develop new products (Hayashi, 1990) or increase the functionality of certain ingredients. Depending on the operating parameters and the scale of operation, the cost of high-pressure treatment is typically around US$ 0.05-0.5 per liter or kilogram, the lower value being comparable to the cost of thermal processing (Thakur and Nelson, 1998; Balasubramaniam, 2003). The non-availability of suitable equipment encumbered early applications of high pressure. However, recent progress in equipment design has ensured worldwide recognition of the potential for such a technology in food processing (Gould, 1995; Galazka and Ledward, 1995; Balci and Wilbey, 1999). Today, high-pressure technology is acknowledged to have the promise of producing a very wide range of products, whilst simultaneously showing potential for creating a new generation of value-added foods. In general, high-pressure technology can supplement conventional thermal processing for reducing microbial load, or substitute for the use of chemical preservatives (Rastogi, Subramanian, and Raghavarao, 1994). 
Over the past two decades, this technology has attracted considerable research attention, mainly relating to: i) the extension of keeping quality (Cheftel, 1995; Farkas and Hoover, 2001), ii) changing the physical and functional properties of food systems (Cheftel, 1992), and iii) exploiting the anomalous phase transitions of water under extreme pressures, e.g. the lowering of the freezing point with increasing pressure (Kalichevsky, Knorr, and Lillford, 1995; Knorr, Schlueter, and Heinz, 1998). The key advantages of this technology can be summarized as follows: 1. it enables food processing at ambient temperature or even lower temperatures; 2. it enables instant transmittance of pressure throughout the system, irrespective of size and geometry, thereby making size reduction optional, which can be a great advantage; 3. it causes microbial death whilst virtually eliminating heat damage and the use of chemical preservatives/additives, thereby leading to improvements in the overall quality of foods; and 4. it can be used to create ingredients with novel functional properties. The effect of high pressure on microorganisms and proteins/enzymes was observed to be similar to that of high temperature. As mentioned above, high pressure processing enables transmittance of pressure rapidly and uniformly throughout the food. Consequently, the problems of spatial variations in preservation treatments associated with heat, microwave, or radiation penetration are not evident in pressure-processed products. The application of high pressure increases the temperature of the liquid component of the food by approximately 3 °C per 100 MPa. If the food contains a significant amount of fat, such as butter or cream, the temperature rise is greater (8-9 °C per 100 MPa) (Rasanayagam, Balasubramaniam, Ting, Sizer, Bush, and Anderson, 2003). Foods cool down to their original temperature on decompression if no heat is lost to (or gained from) the walls of the pressure vessel during the holding stage. 
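The rule-of-thumb adiabatic heating rates quoted above can be turned into a quick estimator. This is only a first-order sketch of the figures in the text (the true rate varies with temperature, pressure, and composition), and the function name is illustrative:

```python
def adiabatic_temp_rise(pressure_mpa: float, rate_c_per_100mpa: float = 3.0) -> float:
    """Rough adiabatic heating estimate from the rule of thumb cited above:
    ~3 C per 100 MPa for water-like foods, ~8-9 C per 100 MPa for fat-rich
    foods. A linear first-order sketch, not a thermodynamic model."""
    return pressure_mpa / 100.0 * rate_c_per_100mpa

# Water-like food compressed to 600 MPa from 20 C:
print(20 + adiabatic_temp_rise(600))        # 38.0 C at pressure
# A fat-rich food (e.g. cream) at the same pressure, using 9 C/100 MPa:
print(20 + adiabatic_temp_rise(600, 9.0))   # 74.0 C at pressure
```

Because the heating is reversed on decompression, such an estimator is mainly useful for checking that the transient temperature at pressure stays within a product's tolerance during the holding stage.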
The temperature distribution during the pressure-holding period can change depending on heat transfer across the walls of the pressure vessel, which must be held at the desired temperature for achieving truly isothermal conditions. In the case of some proteins, a gel is formed when the rate of compression is slow, whereas a precipitate is formed when the rate is fast. High pressure can cause structural changes in structurally fragile foods containing entrapped air, such as strawberries or lettuce. Cell deformation and cell damage can result in softening and cell serum loss. Compression may also shift the pH depending on the imposed pressure. Heremans (1995) indicated a lowering of pH in apple juice by 0.2 units per 100 MPa increase in pressure. In combined thermal and pressure treatment processes, Meyer (2000) proposed that the heat of compression could be used effectively, since the temperature of the product can be raised from 70-90C to 105-120C by compression to 700 MPa, and brought back to the initial temperature by decompression. As a thermodynamic parameter, pressure has far-reaching effects on the conformation of macromolecules, the transition temperature of lipids and water, and a number of chemical reactions (Cheftel, 1992; Tauscher, 1995). Phenomena that are accompanied by a decrease in volume are enhanced by pressure, and vice versa (principle of Le Chatelier). Thus, under pressure, reaction equilibria are shifted towards the most compact state, and the reaction rate constant is increased or decreased, depending on whether the "activation volume" of the reaction (i.e. the volume of the activation complex less the volume of the reactants) is negative or positive. It is likely that pressure also inhibits the availability of the activation energy required for some reactions, by affecting some other energy-releasing enzymatic reactions (Farr, 1990).
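The activation-volume argument above is commonly expressed through the transition-state (Eyring-type) relation ln(k_P/k_ref) = -ΔV‡(P - P_ref)/(RT). The sketch below assumes that relation and an illustrative activation volume; it is not taken from the original text:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def rate_ratio(delta_v_cm3_mol, delta_p_mpa, temp_k=298.15):
    """k(P)/k(P_ref) for an activation volume dV (cm^3/mol) and a
    pressure step dP (MPa), via ln(k/k_ref) = -dV*dP/(R*T)."""
    dv = delta_v_cm3_mol * 1e-6   # convert cm^3/mol to m^3/mol
    dp = delta_p_mpa * 1e6        # convert MPa to Pa
    return math.exp(-dv * dp / (R * temp_k))

# Negative activation volume: reaction accelerated by pressure
print(rate_ratio(-20.0, 400))   # ratio > 1
# Positive activation volume: reaction retarded by pressure
print(rate_ratio(+20.0, 400))   # ratio < 1
```

The sign convention makes Le Chatelier's principle explicit: a reaction whose transition state is more compact than its reactants (negative ΔV‡) speeds up under pressure, and vice versa.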
The compression energy of 1 litre of water at 400 MPa is 19.2 kJ, as compared to 20.9 kJ for heating 1 litre of water from 20 to 25C. The low energy levels involved in pressure processing may explain why covalent bonds of food constituents are usually less affected than weak interactions. Pressure can influence most biochemical reactions, since they often involve a change in volume. High pressure controls certain enzymatic reactions. The effect of high pressure on proteins/enzymes, unlike that of temperature, is reversible in the range 100-400 MPa, and is probably due to conformational changes and sub-unit dissociation and association processes (Morild, 1981). For both pasteurization and sterilization processes, a combined treatment of high pressure and temperature is frequently considered to be most appropriate (Farr, 1990; Patterson, Quinn, Simpson, and Gilmour, 1995). Vegetative cells, including yeasts and moulds, are pressure sensitive, i.e. they can be inactivated by pressures of ~300-600 MPa (Knorr, 1995; Patterson, Quinn, Simpson, and Gilmour, 1995). At high pressures, microbial death is considered to be due to permeabilization of the cell membrane. For instance, it was observed that in the case of Saccharomyces cerevisiae, at pressures of about 400 MPa, the structure and cytoplasmic organelles were grossly deformed and large quantities of intracellular material leaked out, while at 500 MPa, the nucleus could no longer be recognized, and the loss of intracellular material was almost complete (Farr, 1990). Changes that are induced in the cell morphology of the microorganisms are reversible at low pressures, but irreversible at higher pressures, where microbial death occurs due to permeabilization of the cell membrane. An increase in process temperature above ambient, and to a lesser extent a decrease below ambient, increases the inactivation rates of microorganisms during high pressure processing.
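The heating figure quoted above is a simple sensible-heat calculation; the sketch below reproduces it, assuming a specific heat of ~4.18 kJ/(kg C) for water (the function name is illustrative):

```python
# Back-of-envelope check of the energy comparison in the text.

def heating_energy_kj(litres, delta_t_c, cp_kj_per_kg_c=4.18):
    """Sensible heat to warm `litres` of water by `delta_t_c` C (1 L ~ 1 kg)."""
    return litres * cp_kj_per_kg_c * delta_t_c

# Heating 1 L of water from 20 to 25 C:
print(round(heating_energy_kj(1.0, 5.0), 1))   # 20.9 kJ, matching the text
# The quoted compression energy at 400 MPa (19.2 kJ/L) is of the same
# order, consistent with the claim that covalent bonds are largely
# unaffected by pressure processing.
```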
Temperatures in the range 45 to 50C appear to increase the rate of inactivation of pathogens and spoilage microorganisms. Preservation of acid foods (pH ≤ 4.6) is, therefore, the most obvious application of HPP as such. Moreover, pasteurization can be performed even under chilled conditions for heat-sensitive products. Low temperature processing can help to retain the nutritional quality and functionality of the raw materials treated, and could allow maintenance of low temperature during the post-harvest treatment, processing, storage, transportation, and distribution periods of the life cycle of the food system (Knorr, 1995). Bacterial spores are highly pressure resistant, since pressures exceeding 1200 MPa may be needed for their inactivation (Knorr, 1995). The initiation of germination, or inhibition of germinated bacterial spores and inactivation of piezo-resistant microorganisms, can be achieved in combination with moderate heating or other pretreatments such as ultrasound. Process temperatures in the range 90-121C in conjunction with pressures of 500-800 MPa have been used to inactivate spore-forming bacteria such as Clostridium botulinum. Thus, sterilization of low-acid foods (pH > 4.6) will most probably rely on a combination of high pressure and other forms of relatively mild treatments. High-pressure application leads to the effective reduction of the activity of food quality related enzymes (oxidases), which ensures high quality and shelf-stable products. Sometimes, food constituents offer piezo-resistance to enzymes. Further, high pressure affects only non-covalent bonds (hydrogen, ionic, and hydrophobic bonds), causes unfolding of protein chains, and has little effect on chemical constituents associated with desirable food qualities such as flavor, color, or nutritional content. Thus, in contrast to thermal processing, the application of high pressure causes negligible impairment of nutritional value, taste, color, flavor, or vitamin content (Hayashi, 1990).
Small molecules such as amino acids, vitamins, and flavor compounds remain unaffected by high pressure, while the structure of large molecules such as proteins, enzymes, polysaccharides, and nucleic acids may be altered (Balci and Wilbey, 1999). High pressure reduces the rate of the browning (Maillard) reaction. This reaction consists of two stages: the condensation reaction of amino compounds with carbonyl compounds, and the successive browning reactions, including melanoidin formation and polymerization processes. The condensation reaction shows no acceleration under high pressure (5-50 MPa at 50C); rather, pressure suppresses the generation of stable free radicals derived from melanoidin, which are responsible for the browning reaction (Tamaoka, Itoh, and Hayashi, 1991). Gels induced by high pressure are found to be more glossy and transparent, because of the rearrangement of water molecules surrounding amino acid residues in the denatured state (Okamoto, Kawamura, and Hayashi, 1990). The capabilities and limitations of HPP have been extensively reviewed (Thakur and Nelson, 1998; Smelt, 1998; Cheftel, 1995; Knorr, 1995; Farr, 1990; Tiwari, Jayas, and Holley, 1999; Cheftel, Levy, and Dumay, 2000; Messens, Van Camp, and Huyghebaert, 1997; Otero and Sanz, 2000; Hugas, Garriga, and Monfort, 2002; Lakshmanan, Piggott, and Paterson, 2003; Balasubramaniam, 2003; Matser, Krebbers, Berg, and Bartels, 2004; Hogan, Kelly, and Sun, 2005; Mor-Mur and Yuste, 2005). Many of the early reviews primarily focused on the microbial efficacy of high-pressure processing. This review comprehensively covers the different types of products processed by high-pressure technology, alone or in combination with other processes. It also discusses the effect of high pressure on food constituents such as enzymes and proteins. The applications of this technology in the fruit and vegetable, dairy, and animal product processing industries are covered.
The effects of combining high-pressure treatment with other processing methods such as gamma-irradiation, alternating current, ultrasound, carbon dioxide, and antimicrobial peptides have also been described. Special emphasis has been given to opportunities and challenges in high pressure processing of foods, which can potentially be explored and exploited.

EFFECT OF HIGH PRESSURE ON ENZYMES AND PROTEINS

Enzymes are a special class of proteins in which biological activity arises from active sites, brought together by the three-dimensional configuration of the molecule. Changes in the active site or protein denaturation can lead to loss of activity, or change the functionality of the enzyme (Tsou, 1986). In addition to conformational changes, enzyme activity can be influenced by pressure-induced decompartmentalization (Butz, Koller, Tauscher, and Wolf, 1994; Gomes and Ledward, 1996). Pressure-induced damage of membranes facilitates enzyme-substrate contact. The resulting reaction can either be accelerated or retarded by pressure (Butz, Koller, Tauscher, and Wolf, 1994; Gomes and Ledward, 1996; Morild, 1981). Hendrickx, Ludikhuyze, Broeck, and Weemaes (1998) and Ludikhuyze, Van Loey, and Indrawati et al. (2003) reviewed the combined effect of pressure and temperature on enzymes related to the quality of fruits and vegetables, covering kinetic information as well as process engineering aspects. Pectin methylesterase (PME) is an enzyme which tends to lower the viscosity of fruit products and adversely affect their texture. Hence, its inactivation is a prerequisite for the preservation of such products. Commercially, fruit products containing PME (e.g. orange juice and tomato products) are heat pasteurized to inactivate PME and prolong shelf life. However, heating can deteriorate the sensory and nutritional quality of the products.
Basak and Ramaswamy (1996) showed that the inactivation of PME in orange juice was dependent on pressure level, pressure-hold time, pH, and total soluble solids. An instantaneous pressure kill was dependent only on pressure level, with a secondary inactivation effect dependent on holding time at each pressure level. Nienaber and Shellhammer (2001) studied the kinetics of PME inactivation in orange juice over a range of pressures (400-600 MPa) and temperatures (25-50C) for various process holding times. PME inactivation followed a first-order kinetic model, with a residual activity of pressure-resistant enzyme. Calculated D-values ranged from 4.6 to 117.5 min at 600 MPa/50C and 400 MPa/25C, respectively. Pressures in excess of 500 MPa resulted in sufficiently fast inactivation rates for economic viability of the process. Binh, Van Loey, Fachin, Verlent, Indrawati, and Hendrickx (2002a, 2002b) studied the kinetics of inactivation of strawberry PME. The combined effect of pressure and temperature on inactivation kinetics followed a fractional-conversion model. Purified strawberry PME was more stable toward high-pressure treatments than PME from oranges and bananas. Ly-Nguyen, Van Loey, Fachin, Verlent, and Hendrickx (2002) showed that the inactivation of the banana PME enzyme during heating at temperatures between 65 and 72.5C followed first-order kinetics, and the effect of pressure treatments of 600-700 MPa at 10C could be described using a fractional-conversion model. Stoforos, Crelier, Robert, and Taoukis (2002) demonstrated that under ambient pressure, tomato PME inactivation rates increased with temperature, and the highest rate was obtained at 75C. The inactivation rates were dramatically reduced as soon as the processing pressure was raised, while high inactivation rates were again obtained at pressures higher than 700 MPa.
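The first-order model with a pressure-resistant residual fraction, as described for orange juice PME above, can be sketched as follows. The resistant-fraction value used in the example is a hypothetical illustration, not a figure from the cited studies:

```python
# Sketch: first-order inactivation with a decimal reduction time D and
# an optional pressure-resistant enzyme fraction f (illustrative model).

def residual_activity(t_min, d_value_min, resistant_fraction=0.0):
    """A/A0 = f + (1 - f) * 10**(-t/D)."""
    f = resistant_fraction
    return f + (1.0 - f) * 10.0 ** (-t_min / d_value_min)

# One D-value of holding time (e.g. D = 4.6 min at 600 MPa/50C, from the
# text) reduces the labile fraction tenfold:
print(residual_activity(4.6, 4.6))        # 0.1
# With a hypothetical 5% pressure-resistant fraction, activity plateaus:
print(residual_activity(60.0, 4.6, 0.05))  # ~0.05
```

The plateau behaviour is why the cited studies report a "residual activity of pressure-resistant enzyme" rather than complete inactivation.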
Riahi and Ramaswamy (2003) studied high-pressure inactivation kinetics of PME isolated from a variety of sources and showed that PME from a microbial source was more resistant to pressure inactivation than that from orange peel. Almost a full decimal reduction in activity of commercial PME was achieved at 400 MPa within 20 min. Verlent, Van Loey, Smout, Duvetter, Nguyen, and Hendrickx (2004) indicated that the optimal temperature for tomato pectin methylesterase was shifted to higher values at elevated pressure compared to atmospheric pressure, creating possibilities for rheology improvements by the application of high pressure. Castro, Van Loey, Saraiva, Smout, and Hendrickx (2006) accurately described the inactivation of the labile fraction under mild-heat and high-pressure conditions by a fractional-conversion model, while a biphasic model was used to estimate the inactivation rate constants of both fractions at more drastic conditions of temperature/pressure (10-64C, 0.1-800 MPa). At pressures lower than 300 MPa and temperatures higher than 54C, an antagonistic effect of pressure and temperature was observed. Balogh, Smout, Binh, Van Loey, and Hendrickx (2004) observed the inactivation kinetics of carrot PME to follow first-order kinetics over a range of pressures and temperatures (650-800 MPa, 10-40C). Enzyme stability under heat and pressure was reported to be lower in carrot juice and purified PME preparations than in carrots. The presence of pectinesterase (PE) reduces the quality of citrus juices by destabilization of clouds. Generally, the inactivation of the enzyme is accomplished by heat, resulting in a loss of fresh fruit flavor in the juice. High pressure processing can be used to bypass the use of extreme heat for the processing of fruit juices.
Goodner, Braddock, and Parish (1998) showed that higher pressures (>600 MPa) caused instantaneous inactivation of the heat-labile form of the enzyme, but did not inactivate the heat-stable form of PE in the case of orange and grapefruit juices. PE activity was totally lost in orange juice, whereas complete inactivation was not possible in the case of grapefruit juice. Orange juice pressurized at 700 MPa for 1 min had no cloud loss for more than 50 days. Broeck, Ludikhuyze, Van Loey, and Hendrickx (2000) studied the combined pressure-temperature inactivation of the labile fraction of orange PE over a range of pressures (0.1 to 900 MPa) and temperatures (15 to 65C). The pressure and temperature dependence of the inactivation rate constants of the labile fraction was quantified using the well-known Eyring and Arrhenius relations. The stable fraction was inactivated at temperatures higher than 75C. Acidification (pH 3.7) enhanced the thermal inactivation of the stable fraction, whereas the addition of Ca++ ions (1 M) suppressed inactivation. At elevated pressure (up to 900 MPa), an antagonistic effect of pressure and temperature on inactivation of the stable fraction was observed. Ly-Nguyen, Van Loey, Smout, Ozcan, Fachin, Verlent, Vu-Truong, Duvetter, and Hendrickx (2003) investigated the effect of combined heat and pressure treatments on the inactivation of purified carrot PE, which followed a fractional-conversion model. The thermally stable fraction of the enzyme could not be inactivated. At lower pressures (<300 MPa) and higher temperatures (>50C), an antagonistic effect of pressure and heat was observed. High pressures induced conformational changes in polygalacturonase (PG), causing reduced substrate binding affinity and enzyme inactivation. Eun, Seok, and Wan (1999) studied the effect of high-pressure treatment on PG from Chinese cabbage to prevent the softening and spoilage of plant-based foods such as kimchi without compromising quality.
PG was inactivated by the application of pressures higher than 200 MPa for 1 min. Fachin, Van Loey, Indrawati, Ludikhuyze, and Hendrickx (2002) investigated the stability of tomato PG at different temperatures and pressures. The combined pressure-temperature inactivation (300-600 MPa/5-50C) of tomato PG was described by a fractional-conversion model, which points to first-order inactivation kinetics of a pressure-sensitive enzyme fraction and to the occurrence of a pressure-stable PG fraction. Fachin, Smout, Verlent, Binh, Van Loey, and Hendrickx (2004) indicated that for combinations of pressure and temperature (5-55C/100-600 MPa), the inactivation of the heat-labile portion of purified tomato PG followed first-order kinetics. The heat-stable fraction of the enzyme showed pressure stability very similar to that of the heat-labile portion. Peeters, Fachin, Smout, Van Loey, and Hendrickx (2004) demonstrated that the effect of high pressure was identical on the heat-stable and heat-labile fractions of tomato PG. The isoenzyme of PG was detected in thermally treated (140C for 5 min) tomato pieces and tomato juice, whereas no PG was found in pressure-treated tomato juice or pieces. Verlent, Van Loey, Smout, Duvetter, and Hendrickx (2004) investigated the effect of high pressure (0.1 and 500 MPa) and temperature (25-80C) on purified tomato PG. At atmospheric pressure, the optimum temperature for the enzyme was found to be 55-60C, and it decreased with an increase in pressure. The enzyme activity was reported to decrease with an increase in pressure at constant temperature. Shook, Shellhammer, and Schwartz (2001) studied the ability of high pressure to inactivate lipoxygenase, PE, and PG in diced tomatoes. Processing conditions used were 400, 600, and 800 MPa for 1, 3, and 5 min at 25 and 45C. The magnitude of the applied pressure had a significant effect in inactivating lipoxygenase and PG, with complete loss of activity occurring at 800 MPa.
PE was very resistant to the pressure treatment.

Polyphenoloxidase and Peroxidase

Polyphenoloxidase (PPO) and peroxidase (POD), the enzymes responsible for color and flavor loss, can be selectively inactivated by a combined treatment of pressure and temperature. Gomes and Ledward (1996) studied the effects of pressure treatment (100-800 MPa for 1-20 min) on commercial PPO enzyme available from mushrooms, potatoes, and apples. Castellari, Matricardi, Arfelli, Rovere, and Amati (1997) demonstrated that there was limited inactivation of grape PPO using pressures between 300 and 600 MPa. At 900 MPa, a low level of PPO activity was apparent. In order to reach complete inactivation, it may be necessary to use high-pressure processing treatments in conjunction with a mild thermal treatment (40-50C). Weemaes, Ludikhuyze, Broeck, and Hendrickx (1998) studied the pressure stabilities of PPO from apples, avocados, grapes, pears, and plums at pH 6-7. These PPOs differed in pressure stability. Inactivation of PPO from apple, grape, avocado, and pear at room temperature (25C) became noticeable at approximately 600, 700, 800, and 900 MPa, respectively, and followed first-order kinetics. Plum PPO was not inactivated at room temperature by pressures up to 900 MPa. Rastogi, Eshtiaghi, and Knorr (1999) studied the inactivation effects of high hydrostatic pressure treatment (100-600 MPa) combined with heat treatment (0-60C) on POD and PPO enzymes, in order to develop high pressure-processed red grape juice having a stable shelf-life. The studies showed that the lowest POD (55.75%) and PPO (41.86%) activities were found at 60C, with pressures of 600 and 100 MPa, respectively. MacDonald and Schaschke (2000) showed that for PPO, temperature and pressure individually appeared to have similar effects, whereas the holding time was not significant. On the other hand, in the case of POD, temperature as well as the interaction between temperature and holding time had the greatest effect on activity.
Namkyu, Seunghwan, and Kyung (2002) showed that mushroom PPO was highly pressure stable. Exposure to 600 MPa for 10 min reduced PPO activity by 7%; further exposure had no denaturing effect. Compression for 10 and 20 min up to 800 MPa reduced activity by 28 and 43%, respectively. Rapeanu, Van Loey, Smout, and Hendrickx (2005) indicated that the thermal and/or high-pressure inactivation of grape PPO followed first-order kinetics. A third-degree polynomial described the temperature/pressure dependence of the inactivation rate constants. Pressure and temperature were reported to act synergistically, except in the high temperature (≥45C)-low pressure (≤300 MPa) region, where an antagonistic effect was observed. Gomes, Sumner, and Ledward (1997) showed that the application of increasing pressures led to a gradual reduction in papain enzyme activity. A decrease in activity of 39% was observed when the enzyme solution was initially activated with phosphate buffer (pH 6.8) and subjected to 800 MPa at ambient temperature for 10 min, while 13% of the original activity remained when the enzyme solution was treated at 800 MPa at 60C for 10 min. In Tris buffer at pH 6.8, after treatment at 800 MPa and 20C, papain activity loss was approximately 24%. The inactivation of the enzyme is because of an induced change at the active site, causing loss of activity without major conformational changes; this loss of activity was due to oxidation of the thiolate ion present at the active site. Weemaes, Cordt, Goossens, Ludikhuyze, Hendrickx, Heremans, and Tobback (1996) studied the effects of pressure and temperature on the activity of three different alpha-amylases from Bacillus subtilis, Bacillus amyloliquefaciens, and Bacillus licheniformis. Changes in the conformation of Bacillus licheniformis, Bacillus subtilis, and Bacillus amyloliquefaciens amylases occurred at pressures of 110, 75, and 65 MPa, respectively.
Bacillus licheniformis amylase was more stable than the amylases from Bacillus subtilis and Bacillus amyloliquefaciens to the combined heat/pressure treatment. Riahi and Ramaswamy (2004) demonstrated that pressure inactivation of amylase in apple juice was significantly (P < 0.01) influenced by pH, pressure, holding time, and temperature. The inactivation was described using a bi-phasic model. The application of high pressure was shown to completely inactivate amylase. The importance of the pressure-pulse and pressure-hold approaches for inactivation of amylase was also demonstrated. High pressure denatures proteins depending on the protein type, processing conditions, and the applied pressure. During the process of denaturation, the proteins may dissolve or precipitate on the application of high pressure. These changes are generally reversible in the pressure range 100-300 MPa, and irreversible at pressures higher than 300 MPa. Denaturation may be due to the destruction of hydrophobic and ion-pair bonds, and the unfolding of molecules. At higher pressure, oligomeric proteins tend to dissociate into subunits, becoming vulnerable to proteolysis. Monomeric proteins do not show any change in proteolysis with increase in pressure (Thakur and Nelson, 1998). High-pressure effects on proteins are related to the rupture of non-covalent interactions within protein molecules, and to the subsequent reformation of intra- and intermolecular bonds within or between molecules. Different types of interactions contribute to the secondary, tertiary, and quaternary structure of proteins. The quaternary structure is mainly held by hydrophobic interactions, which are very sensitive to pressure. Significant changes in the tertiary structure are observed beyond 200 MPa.
However, a reversible unfolding of small proteins such as ribonuclease A occurs at higher pressures (400 to 800 MPa), showing that the volume and compressibility changes during denaturation are not completely dominated by the hydrophobic effect. Denaturation is a complex process involving intermediate forms leading to multiple denatured products. Secondary structure changes take place at very high pressures, above 700 MPa, leading to irreversible denaturation (Balny and Masson, 1993).

Figure 1. General scheme for the pressure-temperature phase diagram of proteins (from Messens, Van Camp, and Huyghebaert, 1997).

When the pressure increases to about 100 MPa, the denaturation temperature of the protein increases, whereas at higher pressures, the temperature of denaturation usually decreases. This results in the elliptical phase diagram of native-denatured protein shown in Fig. 1. A practical consequence is that under elevated pressures, proteins denature at lower temperatures than they do at atmospheric pressure. The phase diagram also specifies the pressure-temperature range in which the protein maintains its native structure. Zone I specifies that at high temperatures, a rise in denaturation temperature is found with increasing pressure. Zone II indicates that below the maximum transition temperature, protein denaturation occurs at lower temperatures under higher pressures. Zone III shows that below the temperature corresponding to the maximum transition pressure, protein denaturation occurs at lower pressures using lower temperatures (Messens, Van Camp, and Huyghebaert, 1997). The application of high pressure has been shown to destabilize casein micelles in reconstituted skim milk, and the size distribution of spherical casein micelles decreases from 200 to 120 nm; maximum changes have been reported to occur between 150-400 MPa at 20C.
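The elliptical shape of the phase diagram in Fig. 1 is commonly rationalized (following Hawley's classic thermodynamic treatment, which is an assumption here and not cited in the text above) by expanding the free-energy difference between the denatured and native states to second order in pressure and temperature:

```latex
\Delta G(P,T) = \Delta G_0 + \Delta V_0 (P - P_0) - \Delta S_0 (T - T_0)
  + \frac{\Delta\beta}{2}(P - P_0)^2
  + \Delta\alpha (P - P_0)(T - T_0)
  - \Delta C_p \left[ T\!\left(\ln\frac{T}{T_0} - 1\right) + T_0 \right]
```

Here ΔV_0, ΔS_0, Δβ, Δα, and ΔC_p are the changes in volume, entropy, compressibility, thermal expansivity, and heat capacity on denaturation. The denaturation boundary is the contour ΔG(P,T) = 0, which for typical protein values of these parameters traces the ellipse sketched in Fig. 1.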
The pressure treatment results in reduced turbidity and increased lightness, which leads to the formation of a virtually transparent skim milk (Shibauchi, Yamamoto, and Sagara, 1992; Derobry, Richard, and Hardy, 1994). The gels produced from high-pressure treated skim milk showed improved rigidity and gel breaking strength (Johnston, Austin, and Murphy, 1992). Garcia, Olano, Ramos, and Lopez (2000) showed that pressure treatment at 25C considerably reduced the micelle size, while pressurization at higher temperatures progressively increased the micelle dimensions. Anema, Lowe, and Stockmann (2005) indicated that a small decrease in the size of casein micelles was observed at 100 MPa, with slightly greater effects at higher temperatures or longer pressure treatments. At pressures >400 MPa, the casein micelles disintegrated. The effect was more rapid at higher temperatures, although the final size was similar in all samples regardless of the pressure or temperature. At 200 MPa and 10C, the casein micelle size decreased slightly on heating, whereas at higher temperatures, the size increased as a result of aggregation. Huppertz, Fox, and Kelly (2004a) showed that the size of casein micelles increased by 30% upon high-pressure treatment of milk at 250 MPa, and micelle size dropped by 50% at 400 or 600 MPa. Huppertz, Fox, and Kelly (2004b) demonstrated that high-pressure treatment of milk at 100-600 MPa resulted in considerable solubilization of alpha-s1- and beta-casein, which may be due to the solubilization of colloidal calcium phosphate and disruption of hydrophobic interactions. On storage of pressure-treated milk at 5C, dissociation of casein was largely irreversible, but at 20C, considerable re-association of casein was observed. The hydration of the casein micelles increased on pressure treatment (100-600 MPa) due to induced interactions between caseins and whey proteins.
Pressure treatment increased the levels of alpha-s1- and beta-casein in the soluble phase of milk and produced casein micelles with properties different from those in untreated milk. Huppertz, Fox, and Kelly (2004c) demonstrated that casein micelle size was not influenced by pressures less than 200 MPa, but a pressure of 250 MPa increased the micelle size by 25%, while pressures of 300 MPa or greater irreversibly reduced the size to 50% of that in untreated milk. Denaturation of alpha-lactalbumin did not occur at pressures less than or equal to 400 MPa, whereas beta-lactoglobulin was denatured at pressures greater than 100 MPa. Galazka, Ledward, Sumner, and Dickinson (1997) reported loss of surface hydrophobicity due to the application of 300 MPa in dilute solution. Pressurizing beta-lactoglobulin at 450 MPa for 15 minutes resulted in reduced solubility in water. High-pressure treatment induced extensive protein unfolding and aggregation when BSA was pressurized at 400 MPa. Beta-lactoglobulin appears to be more sensitive to pressure than alpha-lactalbumin. Olsen, Ipsen, Otte, and Skibsted (1999) monitored the state of aggregation and thermal gelation properties of pressure-treated beta-lactoglobulin immediately after depressurization and after storage for 24 h at 5C. A pressure of 150 MPa applied for 30 min, or pressures higher than 300 MPa applied for 0 or 30 min, led to the formation of soluble aggregates. When continued for 30 min, a pressure of 450 MPa caused gelation of the 5% beta-lactoglobulin solution. Iametti, Transidico, Bonomi, Vecchio, Pittia, Rovere, and Dall'Aglio (1997) studied irreversible modifications in the tertiary structure, surface hydrophobicity, and association state of beta-lactoglobulin when solutions of the protein at neutral pH and at different concentrations were exposed to pressure. Only minor irreversible structural modifications were evident, even for treatments as intense as 15 min at 900 MPa.
The occurrence of irreversible modifications was time-dependent at 600 MPa, but was complete within 2 min at 900 MPa. The irreversibly modified protein was soluble, but some covalent aggregates were formed. Subirade, Loupil, Allain, and Paquin (1998) showed the effect of dynamic high pressure on the secondary structure of beta-lactoglobulin. The thermal and pH sensitivity of pressure-treated beta-lactoglobulin was different, suggesting that the two forms were stabilized by different electrostatic interactions. Walker, Farkas, Anderson, and Goddik (2004) used high-pressure processing (510 MPa for 10 min at 8 or 24C) to induce unfolding of beta-lactoglobulin, and characterized the protein structure and surface-active properties. The secondary structure of the protein processed at 8C appeared to be unchanged, whereas at 24C the alpha-helix structure was lost. Tertiary structures changed due to processing at either temperature. Model solutions containing the pressure-treated beta-lactoglobulin showed a significant decrease in surface tension. Izquierdo, Alli, Gomez, Ramaswamy, and Yaylayan (2005) demonstrated that under high-pressure treatments (100-300 MPa), β-lactoglobulin AB was completely hydrolyzed by pronase and α-chymotrypsin. Hinrichs and Rademacher (2005) showed that the denaturation kinetics of beta-lactoglobulin followed second-order kinetics, while for alpha-lactalbumin the reaction order was 2.5. Alpha-lactalbumin was more resistant to denaturation than beta-lactoglobulin. The activation volume for denaturation of beta-lactoglobulin was reported to decrease with increasing temperature, and the activation energy increased with pressure up to 200 MPa, beyond which it decreased. This demonstrated the unfolding of the protein molecules. Drake, Harrison, Asplund, Barbosa-Canovas, and Swanson (1997) demonstrated that the percentage moisture and wet weight yield of cheese from pressure-treated milk were higher than for pasteurized or raw milk cheese.
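The non-integer reaction orders reported above (n = 2 for beta-lactoglobulin, n = 2.5 for alpha-lactalbumin) follow the standard integrated nth-order rate law. The sketch below states that law; the rate constants and times used in the examples are illustrative, not values from the cited studies:

```python
import math

# Sketch: residual native fraction C/C0 for nth-order denaturation,
# dC/dt = -k * C**n, integrated analytically (illustrative parameters).

def nth_order_residual(t, k, n, c0=1.0):
    """C/C0 after time t for reaction order n (first order as a special case)."""
    if n == 1.0:
        return math.exp(-k * t)
    return (1.0 + (n - 1.0) * k * (c0 ** (n - 1.0)) * t) ** (-1.0 / (n - 1.0))

# Second-order decay halves the native fraction when k*c0*t = 1:
print(nth_order_residual(1.0, 1.0, 2.0))   # 0.5
# At equal k, order 2.5 leaves a larger residual fraction than order 2:
print(nth_order_residual(1.0, 1.0, 2.5))
```

Higher reaction orders decay steeply at first and then level off, which is one reason aggregation-limited denaturation is often fitted with n > 1.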
The microbial quality was comparable, and some textural defects were reported due to the excess moisture content. Arias, Lopez, and Olano (2000) showed that high-pressure treatment at 200 MPa significantly reduced rennet coagulation times over control samples. Pressurization at 400 MPa led to coagulation times similar to those of the control, except for milk treated at pH 7.0, with or without readjustment of pH to 6.7, which presented significantly longer coagulation times than its non-pressure-treated counterparts. Hinrichs and Rademacher (2004) demonstrated that the isobaric (200-800 MPa) and isothermal (-2 to 70C) denaturation of beta-lactoglobulin and alpha-lactalbumin of whey protein followed 3rd and 2nd order kinetics, respectively. Isothermal pressure denaturation of beta-lactoglobulins A and B did not differ significantly, and an increase in temperature resulted in an increase in the denaturation rate. At pressures higher than 200 MPa, the denaturation rate was limited by the aggregation rate, while the pressure resulted in the unfolding of molecules. The kinetic parameters of denaturation were estimated using a single-step non-linear regression method, which allowed a global fit of the entire data set. Huppertz, Fox, and Kelly (2004d) examined the high-pressure induced denaturation of alpha-lactalbumin and beta-lactoglobulin in dairy systems. The higher level of pressure-induced denaturation of both proteins in milk as compared to whey was attributed to the absence of casein micelles and colloidal calcium phosphate in the whey. The conformation of BSA was reported to remain fairly stable at 400 MPa due to the high number of disulfide bonds, which are known to stabilize its three-dimensional structure (Hayakawa, Kajihara, Morikawa, Oda, and Fujio, 1992).
Kieffer and Wieser (2004) indicated that the extension resistance and extensibility of wet gluten were markedly influenced by high pressure (up to 800 MPa), while the temperature and duration of pressure treatment (30-80°C for 2-20 min) had a relatively smaller effect. The application of high pressure resulted in a marked decrease in protein extractability due to the restructuring of disulfide bonds under high pressure, leading to the incorporation of alpha- and gamma-gliadins into the glutenin aggregate. A change in secondary structure following high-pressure treatment was also reported. The pressure treatment of myosin led to head-to-head interaction to form oligomers (clumps), which became more compact and larger in size during storage at constant pressure. Even after pressure treatment at 210 MPa for 5 minutes, monomeric myosin molecules increased, and no gelation was observed for pressure treatment up to 210 MPa for 30 minutes. Pressure treatment also did not affect the original helical structure of the tail in the myosin monomers. Angsupanich, Edde, and Ledward (1999) showed that high-pressure-induced denaturation of myosin led to the formation of structures that contained hydrogen bonds and were additionally stabilized by disulphide bonds. Application of 750 MPa for 20 minutes resulted in dimerization of metmyoglobin in the pH range of 6-10, although the maximum effect did not occur at the isoelectric pH (6.9). Under acidic pH conditions, no dimers were formed (Defaye and Ledward, 1995). Zipp and Kauzmann (1973) showed the formation of a precipitate when metmyoglobin was pressurized (750 MPa for 20 minutes) near the isoelectric point; the precipitate redissolved slowly during storage. Pressure treatment had no effect on lipid oxidation in minced meat packed in air at pressures below 300 MPa, while oxidation increased proportionally at higher pressures; minced meat in contact with air oxidized rapidly on exposure to such higher pressures.
Pressures above 300-400 MPa caused marked denaturation of both myofibrillar and sarcoplasmic proteins in washed pork muscle and pork mince (Ananth, Murano, and Dickson, 1995). Chapleau and Lamballerie (2003) showed that high-pressure treatment induced a threefold increase in the surface hydrophobicity of myofibrillar proteins between 0 and 450 MPa. Chapleau, Mangavel, Compoint, and Lamballerie (2004) reported that high pressure modified the secondary structure of myofibrillar proteins extracted from cattle carcasses. Irreversible changes and aggregation were reported at pressures higher than 300 MPa, which can potentially affect the functional properties of meat products. Lamballerie, Perron, Jung, and Cheret (2003) indicated that high-pressure treatment increases cathepsin D activity, and that pressurized myofibrils are more susceptible to cathepsin D action than non-pressurized myofibrils. The highest cathepsin D activity was observed at 300 MPa. Carlez, Veciana, and Cheftel (1995) demonstrated that L* color values increased significantly in meat treated at 200-350 MPa, the meat becoming pink, and the a* value decreased in meat treated at 400-500 MPa to give a grey-brown color. The total extractable myoglobin decreased in meat treated at 200-500 MPa, while the metmyoglobin content of meat increased and the oxymyoglobin decreased at 400-500 MPa. Meat discoloration from pressure processing resulted in a whitening effect at 200-300 MPa due to globin denaturation and/or haem displacement/release, and in oxidation of ferrous myoglobin to ferric myoglobin at pressures higher than 400 MPa. The conformation of the main protein component of egg white, ovalbumin, remains fairly stable when pressurized at 400 MPa, possibly because of the four disulfide bonds and non-covalent interactions stabilizing the three-dimensional structure of ovalbumin (Hayakawa, Kajihara, Morikawa, Oda, and Fujio, 1992).
Hayashi, Kawamura, Nakasa, and Okinaka (1989) reported irreversible denaturation of egg albumin at 500-900 MPa, with a concomitant increase in susceptibility to subtilisin. Zhang, Li, and Tatsumi (2005) demonstrated that pressure treatment (200-500 MPa) resulted in denaturation of ovalbumin. The surface hydrophobicity of ovalbumin was found to increase with increasing pressure, and the presence of polysaccharide protected the protein against denaturation. Iametti, Donnizzelli, Pittia, Rovere, Squarcina, and Bonomi (1999) showed that the addition of NaCl or sucrose to egg albumin prior to high-pressure treatment (up to 10 min at 800 MPa) prevented insolubilization or gel formation after pressure treatment. As a consequence of protein unfolding, the treated albumin had increased viscosity but retained its foaming and heat-gelling properties. Farr (1990) reported the modification of the functionality of egg proteins. Egg yolk formed a gel when subjected to a pressure of 400 MPa for 30 minutes at 25°C; it kept its original color and was soft and adhesive. The hardness of the pressure-treated gel increased, and its adhesiveness decreased, with an increase in pressure. Plancken, Van Loey, and Hendrickx (2005) showed that the application of high pressure (400-700 MPa) to egg white solution resulted in an increase in turbidity, surface hydrophobicity, exposed sulfhydryl content, and susceptibility to enzymatic hydrolysis, and in a decrease in protein solubility, total sulfhydryl content, denaturation enthalpy, and trypsin inhibitory activity. The pressure-induced changes in these properties were shown to depend on the pressure-temperature combination and the pH of the solution.
Speroni, Puppo, Chapleau, Lamballerie, Castellani, Añón, and Anton (2005) indicated that the application of high pressure (200-600 MPa) at 20°C to low-density lipoproteins did not change their solubility even when the pH was changed, whereas aggregation and protein denaturation were drastically enhanced at pH 8. Further, the application of high pressure under alkaline pH conditions resulted in decreased droplet flocculation of low-density lipoprotein dispersions. The minimum pressure required for inducing gelation of soya proteins was reported to be 300 MPa for 10-30 minutes, and the gels formed were softer, with a lower elastic modulus, than heat-treated gels (Okamoto, Kawamura, and Hayashi, 1990). Treatment of soya milk at 500 MPa for 30 min changed it from a liquid to a solid state, whereas at lower pressures, and at 500 MPa for 10 minutes, the milk remained liquid but showed improved emulsifying activity and stability (Kajiyama, Isobe, Uemura, and Noguchi, 1995). The hardness of tofu gels produced by high-pressure treatment at 300 MPa for 10 minutes was comparable to that of heat-induced gels. Puppo, Chapleau, Speroni, Lamballerie, Michel, Añón, and Anton (2004) demonstrated that the application of high pressure (200-600 MPa) to soya protein isolate at pH 8.0 resulted in an increase in protein hydrophobicity and aggregation, a reduction of free sulfhydryl content, and a partial unfolding of the 7S and 11S fractions. A change in the secondary structure, leading to a more disordered structure, was also reported. At pH 3.0, the protein was partially denatured and insoluble aggregates were formed; the major molecular unfolding resulted in decreased thermal stability and increased protein solubility and hydrophobicity.
Puppo, Speroni, Chapleau, Lamballerie, Añón, and Anton (2005) studied the effect of high pressure (200, 400, and 600 MPa for 10 min at 10°C) on the emulsifying properties of soybean protein isolates at pH 3 and 8 (oil droplet size, flocculation, interfacial protein concentration, and composition). The application of pressures higher than 200 MPa at pH 8 resulted in a smaller droplet size and an increase in the level of depletion flocculation; a similar effect was not observed at pH 3. With the application of high pressure, bridging flocculation decreased and the percentage of adsorbed proteins increased, irrespective of the pH. Moreover, the ability of the protein to adsorb at the oil-water interface increased. Zhang, Li, Tatsumi, and Isobe (2005) showed that high-pressure treatment resulted in the formation of more hydrophobic regions in soy protein, which dissociated into subunits that in some cases formed insoluble aggregates. High-pressure denaturation of beta-conglycinin (7S) and glycinin (11S) occurred at 300 and 400 MPa, respectively. The gels formed had the desirable strength and a cross-linked network microstructure. Soybean whey is a by-product of tofu manufacture. It is a good source of peptides, proteins, oligosaccharides, and isoflavones, and can be used in special foods for the elderly, athletes, and others. Prestamo and Penas (2004) studied the antioxidative activity of soybean whey proteins and their pepsin and chymotrypsin hydrolysates. The chymotrypsin hydrolysate showed a higher antioxidative activity than the non-hydrolyzed protein, but the pepsin hydrolysate showed the opposite trend. High-pressure processing at 100 MPa increased the antioxidative activity of soy whey protein but decreased the antioxidative activity of the hydrolysates. High-pressure processing increased the pH of the protein hydrolysates.
Penas, Prestamo, and Gomez (2004) demonstrated that the application of high pressure (100 and 200 MPa, 15 min, 37°C) facilitated the hydrolysis of soya whey protein by pepsin, trypsin, and chymotrypsin. The highest level of hydrolysis occurred at a treatment pressure of 100 MPa. After hydrolysis, 5 peptides under 14 kDa were reported with trypsin and chymotrypsin, and 11 peptides with pepsin.
COMBINATION OF HIGH-PRESSURE TREATMENT WITH OTHER NON-THERMAL PROCESSING METHODS
Many researchers have combined the use of high pressure with other non-thermal operations in order to explore the possibility of synergy between processes. Such attempts are reviewed in this section. Crawford, Murano, Olson, and Shenoy (1996) studied the combined effect of high pressure and gamma-irradiation for inactivating Clostridium sporogenes spores in chicken breast. The application of high pressure reduced the radiation dose required to produce chicken meat with an extended shelf life: high-pressure treatment (600 MPa for 20 min at 80°C) reduced the irradiation dose required for a one-log reduction of Clostridium sporogenes from 4.2 kGy to 2.0 kGy. Mainville, Montpetit, Durand, and Farnworth (2001) studied the combined effect of irradiation and high pressure on the microflora of kefir. Irradiation of kefir at 5 kGy and high-pressure treatment (400 MPa for 5 or 30 min) inactivated the bacteria and yeast in kefir while leaving the proteins and lipids unchanged. The exposure of microbial cells and spores to an alternating current (50 Hz) resulted in the release of intracellular materials, causing loss or denaturation of cellular components responsible for the normal functioning of the cell. The lethal damage to the microorganisms was enhanced when the organisms were exposed to an alternating current before and after the pressure treatment.
High-pressure treatment at 300 MPa for 10 min for Escherichia coli cells, and at 400 MPa for 30 min for Bacillus subtilis spores, after the alternating-current treatment, resulted in reduced surviving fractions of both organisms. The combined treatment was also shown to reduce the tolerance of the microorganisms to other challenges (Shimada and Shimahara, 1985, 1987; Shimada, 1992). Pretreatment with ultrasonic waves (100 W/cm² for 25 min at 25°C) followed by high pressure (400 MPa for 25 min at 15°C) was shown to result in complete inactivation of Rhodotorula rubra; neither ultrasonic nor high-pressure treatment alone was found to be effective (Knorr, 1995).
Carbon Dioxide and Argon
Heinz and Knorr (1995) reported a 3-log reduction in cultures pretreated with supercritical CO2. The effect of the pretreatment on germination of Bacillus subtilis endospores was monitored. The combination of high pressure and mild heat treatment was the most effective in reducing germination (95% reduction), but no spore inactivation was observed. Park, Lee, and Park (2002) studied the combination of high-pressure carbon dioxide and high pressure as a non-thermal processing technique to enhance the safety and shelf life of carrot juice. The combined treatment of carbon dioxide (4.90 MPa) and high pressure (300 MPa) resulted in complete destruction of aerobes. Increasing the pressure to 600 MPa in the presence of carbon dioxide resulted in reduced activities of polyphenoloxidase (11.3%), lipoxygenase (8.8%), and pectin methylesterase (35.1%). Corwin and Shellhammer (2002) studied the combined effect of high-pressure treatment and CO2 on the inactivation of pectinmethylesterase, polyphenoloxidase, Lactobacillus plantarum, and Escherichia coli. An interaction was found between CO2 and pressure at 25 and 50°C for pectinmethylesterase and polyphenoloxidase, respectively. The activity of polyphenoloxidase was decreased by CO2 at all pressure treatments.
The interaction between CO2 and pressure was significant for Lactobacillus plantarum, with a significant decrease in survivors on the addition of CO2 at all pressures studied. No significant effect of CO2 addition on E. coli survivors was seen. Truong, Boff, Min, and Shellhammer (2002) demonstrated that the addition of CO2 (0.18 MPa) during high-pressure processing (600 MPa, 25°C) of fresh orange juice increased the rate of PME inactivation in Valencia orange juice. The addition of CO2 reduced the treatment time needed to achieve an equivalent reduction in PME activity from 346 s to 111 s, but the overall degree of PME inactivation remained unaltered. Fujii, Ohtani, Watanabe, Ohgoshi, Fujii, and Honma (2002) studied the high-pressure inactivation of Bacillus cereus spores in water containing argon. At a pressure of 600 MPa, the addition of argon reportedly accelerated the inactivation of spores at 20°C but had no effect on the inactivation at 40°C. The complex physicochemical environment of milk exerted a strong protective effect on Escherichia coli against high hydrostatic pressure inactivation, reducing inactivation from 7 logs at 400 MPa to only 3 logs at 700 MPa in 15 min at 20°C. A substantial improvement in inactivation efficiency at ambient temperature was achieved by the application of consecutive, short pressure treatments interrupted by brief decompressions. The combined application of high pressure (500 MPa) and natural antimicrobial peptides (lysozyme, 400 µg/ml, and nisin, 400 µg/ml) resulted in increased lethality towards Escherichia coli in milk (Garcia, Masschalck, and Michiels, 1999).
OPPORTUNITIES FOR HIGH-PRESSURE-ASSISTED PROCESSING
The inclusion of a high-pressure treatment as a processing step within certain manufacturing flow sheets can lead to novel products as well as new process development opportunities. For instance, high pressure can precede a number of process operations such as blanching, dehydration, rehydration, frying, and solid-liquid extraction.
Alternatively, processes such as gelation, freezing, and thawing can be carried out under high pressure. This section reports on the use of high pressure in the context of selected processing operations. Eshtiaghi and Knorr (1993) employed high pressure at around ambient temperature to develop a blanching process similar to hot-water or steam blanching, but without thermal degradation; this also minimized problems associated with water disposal. The application of pressure (400 MPa, 15 min, 20°C) to potato samples not only caused blanching but also resulted in a four-log-cycle reduction in microbial count whilst retaining 85% of the ascorbic acid. Complete inactivation of polyphenoloxidase was achieved under the above conditions when 0.5% citric acid solution was used as the blanching medium. The addition of 1% CaCl₂ solution to the medium also improved the texture and the density. The leaching of potassium from the high-pressure-treated sample was comparable with that from a 3-min hot-water blanching treatment (Eshtiaghi and Knorr, 1993). Thus, high pressure can be used as a non-thermal blanching method.
Dehydration and Osmotic Dehydration
The application of high hydrostatic pressure affects cell wall structure, leaving the cell more permeable, which leads to significant changes in the tissue architecture (Farr, 1990; Dornenburg and Knorr, 1994; Rastogi, Subramanian, and Raghavarao, 1994; Rastogi and Niranjan, 1998; Rastogi, Raghavarao, and Niranjan, 2005). Eshtiaghi, Stute, and Knorr (1994) reported that the application of pressure (600 MPa, 15 min at 70°C) resulted in no significant increase in the drying rate during fluidized-bed drying of green beans and carrot. However, the drying rate significantly increased in the case of potato. This may be due to the relatively limited permeabilization of carrot and bean cells as compared to potato.
The effects of chemical pre-treatment (NaOH and HCl treatment) on the rates of dehydration of paprika were compared with products pre-treated by applying high pressure or high-intensity electric field pulses (Fig. 2). High pressure (400 MPa for 10 min at 25°C) and high-intensity electric field pulses (2.4 kV/cm, pulse width 300 µs, 10 pulses, pulse frequency 1 Hz) were found to result in drying rates comparable with those of the chemical pre-treatments, while eliminating the use of chemicals (Ade-Omowaye, Rastogi, Angersbach, and Knorr, 2001).
Figure 2 (a) Effects of various pre-treatments such as hot water blanching, high pressure, and high-intensity electric field pulse treatment on dehydration characteristics of red paprika; (b) comparison of drying time (from Ade-Omowaye, Rastogi, Angersbach, and Knorr, 2001).
Figure 3 (a) Variation of moisture and (b) solid content (based on initial dry matter content) with time during osmotic dehydration (from Rastogi and Niranjan, 1998).
Generally, osmotic dehydration is a slow process. The application of high pressure causes permeabilization of the cell structure (Dornenburg and Knorr, 1993; Eshtiaghi, Stute, and Knorr, 1994; Farr, 1990; Rastogi, Subramanian, and Raghavarao, 1994). This phenomenon has been exploited by Rastogi and Niranjan (1998) to enhance mass transfer rates during the osmotic dehydration of pineapple (Ananas comosus). High-pressure pre-treatments (100-800 MPa) were found to enhance both water removal and solid gain (Fig. 3). Measured diffusivity values for water were found to be four-fold greater, whilst solute (sugar) diffusivity values were found to be two-fold greater. The compression and decompression occurring during the high-pressure pre-treatment itself caused the removal of a significant amount of water, which was attributed to cell wall rupture (Rastogi and Niranjan, 1998).
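Effective diffusivities of the kind compared above are typically obtained by fitting dehydration data to the series solution of Fick's second law. A minimal sketch for an infinite slab follows (only the first terms of the series; the diffusivity and half-thickness values are hypothetical, not taken from the cited work):

```python
import math

def moisture_ratio(t, d_eff, half_thickness, terms=50):
    """Unaccomplished moisture ratio (M - Me)/(M0 - Me) for an infinite
    slab, from the series solution of Fick's second law (Crank-type):
    MR = sum 8/((2n+1)^2 pi^2) * exp(-(2n+1)^2 pi^2 D t / (4 L^2))."""
    hl = half_thickness
    s = 0.0
    for i in range(terms):
        a = (2 * i + 1) * math.pi / 2.0
        s += (2.0 / a**2) * math.exp(-(a**2) * d_eff * t / hl**2)
    return s

# Hypothetical numbers: 1 h of dehydration, 5 mm half-thickness.
# A four-fold larger D (as reported for pressure-treated tissue)
# drives the moisture ratio down faster.
mr_untreated = moisture_ratio(3600.0, 1e-10, 0.005)
mr_treated = moisture_ratio(3600.0, 4e-10, 0.005)
print(mr_untreated, mr_treated)
```

In practice the experimental MR-versus-time curve is fitted to this series (or its one-term long-time approximation) to extract the effective diffusivity.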
Differential interference contrast microscopy showed the extent of cell wall break-up with applied pressure (Fig. 4). Sopanangkul, Ledward, and Niranjan (2002) demonstrated that the application of high pressure (100 to 400 MPa) could be used to accelerate mass transfer during ingredient infusion into foods. Application of pressure opened up the tissue structure and facilitated diffusion. However, pressures above 400 MPa also induced starch gelatinization, which hindered diffusion. The values of the diffusion coefficient were dependent on cell permeabilization and starch gelatinization; the maximum value observed represented an eight-fold increase over the value at ambient pressure. The synergistic effect of cell permeabilization due to high pressure and of osmotic stress as the dehydration proceeds was demonstrated more clearly in the case of potato (Rastogi, Angersbach, and Knorr, 2000a, 2000b, 2003). The moisture content was reduced and the solid content increased in samples treated at 400 MPa. The distributions of relative moisture (M/M₀) and solid (S/S₀) content, as well as the cell permeabilization index (Zp) (shown in Fig. 5), indicate that the rate of change of moisture and solid content was very high at the interface and decreased towards the center (Rastogi, Angersbach, and Knorr, 2000a, 2000b, 2003). Most dehydrated foods are rehydrated before consumption, and loss of solids during rehydration is a major problem associated with their use. Rastogi, Angersbach, Niranjan, and Knorr (2000c) studied the transient variation of moisture and solid content during rehydration of dried pineapples that had been subjected to high-pressure treatment prior to a two-stage drying process consisting of osmotic dehydration and finish-drying at 25°C (Fig. 6).
The diffusion coefficients for water infusion as well as for solute diffusion were found to be significantly lower in high-pressure pre-treated samples. The observed decrease in the water diffusion coefficient was attributed to the permeabilization of cell membranes, which reduces the rehydration capacity (Rastogi and Niranjan, 1998). The solid infusion coefficient was also lower, as was the release of cellular components, which form a gel network with divalent ions binding to de-esterified pectin (Basak and Ramaswamy, 1998; Eshtiaghi, Stute, and Knorr, 1994; Rastogi, Angersbach, Niranjan, and Knorr, 2000c). Eshtiaghi, Stute, and Knorr (1994) reported that high-pressure treatment in conjunction with subsequent freezing could improve mass transfer during rehydration of dried plant products and enhance product quality.
Figure 4 Microstructures of control and pressure-treated pineapple: (a) control; (b) 300 MPa; (c) 700 MPa (1 cm = 41.83 µm) (from Rastogi and Niranjan, 1998).
Ahromrit, Ledward, and Niranjan (2006) explored the use of high pressures (up to 600 MPa) to accelerate water-uptake kinetics during soaking of glutinous rice. The results showed that the length and diameter of the rice grains were positively correlated with soaking time, pressure, and temperature. The water-uptake kinetics was shown to follow the well-known Fickian model, and the overall rate of water uptake and the equilibrium moisture content were found to increase with pressure and temperature. Zhang, Ishida, and Isobe (2004) studied the effect of high-pressure treatment (300-500 MPa for 0-380 min at 20°C) on the water uptake of soybeans and the resulting changes in their microstructure. NMR analysis indicated that water mobility in high-pressure-soaked soybeans was more restricted, and its distribution much more uniform, than in controls.
SEM analysis revealed that high pressure changed the microstructures of the seed coat and hilum, which improved water absorption, and disrupted the individual spherical protein body structures. Additionally, DSC and SDS-PAGE analyses revealed that proteins were partially denatured during high-pressure soaking. Ibarz, Gonzalez, and Barbosa-Canovas (2004) developed kinetic models for the water absorption and cooking time of chickpeas with and without prior high-pressure treatment (275-690 MPa). Soaking was carried out at 25°C for up to 23 h, and cooking was achieved by immersion in boiling water until the chickpeas became tender. As the soaking time increased, the cooking time decreased. High-pressure treatment for 5 min led to reductions in cooking time equivalent to those achieved by soaking for 60-90 min. Ramaswamy, Balasubramaniam, and Sastry (2005) studied the effects of high-pressure (33, 400, and 700 MPa for 3 min at 24 and 55°C) and irradiation (2 and 5 kGy) pre-treatments on the hydration behavior of navy beans by soaking the treated beans in water at 24 and 55°C. Treating beans under moderate pressure (33 MPa) resulted in a high initial moisture uptake (0.59 to 1.02 kg/kg dry mass) and a reduced loss of soluble materials. The final moisture content after three hours of soaking was highest in irradiated beans (5 kGy), followed by high-pressure-treated beans (33 MPa, 3 min at 55°C). Within the experimental range of the study, Peleg's model was found to satisfactorily describe the rate of water absorption of navy beans. A reduction of 40% in oil uptake during frying was observed when thermally blanched frozen potatoes were replaced by high-pressure-blanched frozen potatoes. This may be due to a reduction in moisture content caused by compression and decompression (Rastogi and Niranjan, 1998), as well as to the prevalence of different oil mass transfer mechanisms (Knorr, 1999).
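Peleg's two-parameter sorption model mentioned above relates moisture content to soaking time as M(t) = M₀ + t/(k₁ + k₂·t), where 1/k₁ sets the initial absorption rate and M₀ + 1/k₂ is the equilibrium moisture content. A sketch with hypothetical constants (not the navy-bean values from the cited study):

```python
def peleg_moisture(t, m0, k1, k2):
    """Peleg sorption model: M(t) = M0 + t / (k1 + k2*t).
    The initial uptake rate is 1/k1, and the curve approaches the
    equilibrium moisture content M0 + 1/k2 as t -> infinity."""
    return m0 + t / (k1 + k2 * t)

# Hypothetical parameters: initial moisture 0.12 kg water/kg dry mass,
# k1 = 2.0 (Peleg rate constant), k2 = 0.5 (Peleg capacity constant),
# giving an equilibrium moisture content of 0.12 + 1/0.5 = 2.12.
for t in (0.0, 1.0, 3.0, 24.0):  # soaking time in hours
    print(t, round(peleg_moisture(t, 0.12, 2.0, 0.5), 3))
```

Fitting k₁ at each pressure level is how studies of this kind quantify the faster initial uptake seen after pressure pre-treatment.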
Solid-Liquid Extraction
The application of high pressure leads to a rearrangement of the tissue architecture, which results in increased extractability even at ambient temperature. The extraction of caffeine from coffee using water could be increased by the application of high pressure as well as by an increase in temperature (Knorr, 1999). The effect of high pressure and temperature on caffeine extraction was compared with extraction at 100°C and atmospheric pressure (Fig. 7). The caffeine yield was found to increase with temperature at a given pressure. The combination of very high pressures and lower temperatures could become a viable alternative to current industrial practice.
Figure 5 Distribution of (a, b) relative moisture and (c, d) solid content as well as (e, f) cell disintegration index.
2010 is the International Year of Biodiversity. Why should you care? Biodiversity is the number of different species that exist in a given area. The healthier an area is, the more biodiversity it has. Different forms of wildlife and plants inhabit areas, and these plants and animals learn to coexist and form ecological relationships with each other. In unhealthy environments, only a few species of each kind of plant or animal exist. This is what is known as a monoculture, and monocultures are unhealthy. Whether it is ten thousand or a hundred thousand acres planted with a single crop, or an entire subdivision with lawns all growing the same five plants, monocultures are vulnerable to pests and disease. For example, part of the fire ant problem in the United States is exacerbated by homeowners, because fire ants prefer monocultures and will shy away from biodiverse areas. By contrast, the more species that inhabit an area, the more likely it is that at least a few strains of plants or animals will be resistant to pests and disease. If you have thirty kinds of plants and a virus blows in across the ocean from a remote island, your lawn has a better chance of surviving and renewing itself than if you have only five kinds of plants. In the same way, having more species of wild birds will provide more secure insect control than having only a few. So, by growing more kinds of plants in your yard, you will attract more animals and help to increase your neighbourhood's biodiversity.
By Margaret Southern For many people in the United States, food comes from the grocery store. Most Americans would be hard pressed to determine the real origins of much of the food we eat. But residents of the Cayos Cochinos islands off the northern coast of Honduras know exactly where their food comes from: the ocean outside their doors. Here, it’s not just their stomachs that rely on the bounty of the ocean—it’s almost their entire economy. But overfishing and industrial fishing continues to put pressure on their marine resources. The Nature Conservancy is working with partners and locals to protect their reefs through education, conservation and income-generating projects. In 2007, the Conservancy and the Cayos Cochinos Foundation identified almost 20 income-generating projects with the goal of increasing income for households that depend heavily on the reef. One of these projects was the construction of a small tourist complex managed by the East End community. The community hopes to attract tourists with two eco-friendly cabins, which were built with outside funding, and an ocean-view restaurant serving traditional Garifuna meals, funded by the Conservancy. The Conservancy is connecting the community with tourism service providers to promote best practices and ensure the protection of conservation targets. The Cayos Cochinos Foundation, along with the Conservancy and other groups, has also worked with the communities to establish management plans that encourage no-take zones and the construction of artificial reefs that serve as nurseries for small fish. And through other partnerships, laws have been passed that will eventually prohibit industrial fishing around the marine protected area. All of these initiatives are important in creating sustainability in the reefs and communities of Cayos Cochinos. According to Francisco Velasquez, a school teacher in East End, if it weren’t for the Conservancy, they wouldn’t still be living on that island. 
“Our livelihood depends on fishing, but in the past large ships would come in and sweep the whole ocean clear of fish. There was nothing left for us to eat,” Velasquez said. “But now with the conservation of this area the fish are returning, and through the tourism project we are creating new solutions to increase our income and improve our lives.” In Cayos Cochinos, these projects would not have been possible without Anthony Ives. A former San Francisco stock trader who at one point was managing 50 billion dollars for the state of California, Ives found his way to Honduras after Sept. 11 through the Peace Corps. He created the Foundation Heart Ventures-Grupo de Apoyo al Desarrollo (IHV-GAD) to focus on education, conservation and job creation, with the idea that it is not possible to separate the three objectives. “We have found that an integrated approach is much more sustainable, and we’re very happy that the Conservancy shares our long-term vision of sustainable conservation,” Ives said. “We hope to convince other organizations that through an integrated approach that involves education, we can achieve sustainable development while focusing on the environment.”
January 03, 2011
China has worked actively and seriously to tackle global climate change and build capacity to respond to it. We believe that every country has a stake in dealing with climate change and every country has a responsibility for the safety of our planet. China is at a critical stage of building a moderately prosperous society on all fronts, and a key stage of accelerated industrialization and urbanization. Yet, despite the huge task of developing the economy and improving people’s lives, we have joined global actions to tackle climate change with the utmost resolve and a most active attitude, and have acted in line with the principle of common but differentiated responsibilities established by the United Nations. China voluntarily stepped up efforts to eliminate backward capacity in 2007, and has since closed a large number of heavily polluting small coal-fired power plants, small coal mines and enterprises in the steel, cement, paper-making, chemical and printing and dyeing sectors. Moreover, in 2009, China played a positive role in the success of the Copenhagen conference on climate change and the ultimate conclusion of the Copenhagen Accord. In keeping with the requirements of the Copenhagen Accord, we have provided the Secretariat of the United Nations Framework Convention on Climate Change with information on China’s voluntary actions on emissions reduction and joined the list of countries supporting the Copenhagen Accord. The targets released by China last year for greenhouse gas emissions control require that by 2020, CO2 emissions per unit of GDP should go down by 40% - 45% from the 2005 level, non-fossil energy should make up about 15% of primary energy consumption, and forest coverage should increase by 40 million hectares and forest stock volume by 1.3 billion cubic meters, both from the 2005 level. 
The measure to lower energy consumption alone will help save 620 million tons of standard coal over the next five years, equivalent to cutting CO2 emissions by 1.5 billion tons. This is what China has done to step up the shift in economic development mode and economic restructuring. It contributes positively to the Asian and global effort to tackle climate change. Ladies and Gentlemen, Green and sustainable development represents the trend of our times. To achieve green and sustainable development in Asia and beyond, and to ensure the sustainable use of resources and the environment such as the air, fresh water, ocean, land and forest, which are all vital to human survival, we countries in Asia should strive to balance economic growth, social development and environmental protection. To that end, we wish to work with other Asian countries and make further efforts in the following six areas. First, shift development mode and strive for green development. Accelerating the shift in economic development mode and economic restructuring is an important precondition for our efforts to actively respond to climate change, achieve green development and secure the sustainable development of the population, resources and the environment. It is the shared responsibility of governments and enterprises of all countries in Asia and around the world. We should actively promote a conservation culture and raise awareness of environmental protection. We need to make sure that the concept of green development, green consumption and a green lifestyle, and the commitment to taking good care of Planet Earth, our common home, are embedded in the life of every citizen in society. Second, rely on science and technology as the backing of innovation and development. 
We Asian countries still have a long way to go to reach the advanced level in using high technology to reduce energy consumption and improve energy and resource efficiency. Yet, this means we have a huge potential to catch up. It is imperative for us to quicken the pace of low-carbon technology development, promote energy-efficient technologies and raise the proportion of new and renewable energies in our energy mix so as to provide a strong scientific and technological backing for the green and sustainable development of Asian countries. As for developed countries, they should facilitate technology transfer and share technologies with developing countries on the basis of proper protection of intellectual property rights. Third, open wider to the outside world and realize harmonious development. In such an open world as ours, the development of Asian countries and the development of the world are simply inseparable. It is important that we open our markets even wider, firmly oppose and resist protectionism in all forms and uphold a fair, free and open global trade and investment system. At the same time, we should give full play to the role of regional and sub-regional dialogue and cooperation mechanisms in Asia to promote harmonious and sustainable development of Asia and the world. Fourth, strengthen cooperation and sustain common development. Pragmatic, mutually beneficial and win-win cooperation is a sure choice for all Asian countries if we are to realize sustainable development. No country could stay away from, or manage to meet on its own, severe challenges like the international financial crisis, climate change and energy and resources security. 
We should continue to strengthen macro-economic policy coordination and vigorously promote international cooperation in emerging industries, especially in energy conservation, emissions reduction, environmental protection and the development of new energy sources, to jointly promote sustainable development of the Asian economy and the world economy as a whole. Fifth, work vigorously to eradicate poverty and gradually achieve balanced development. A major root cause of the loss of balance in the world economy is the seriously uneven development between the North and the South. Today, 900 million people in Asia, or roughly one fourth of the entire Asian population, are living below the $1.25-a-day poverty line. We call for greater efforts to improve the international mechanisms designed to promote balanced development, to scale up assistance from developed to developing countries, to strengthen South-South and North-South cooperation, and to facilitate attainment of the UN Millennium Development Goals. This will ensure that sustainable development brings real benefits to poor regions, poor countries and poor peoples. Sixth, bring forth more talents to promote comprehensive development. The ultimate goal of green and sustainable development is to improve people's living environment, better their lives and promote their comprehensive development. Success in this regard depends, to a large extent, on the emergence of talents with an innovative spirit. We need to build institutions, mechanisms and a social environment that help people bring out the best of their talents, and to intensify the education and training of professionals of various kinds. This will ensure that as Asia achieves green and sustainable development, our people will enjoy comprehensive development. Ladies and Gentlemen, We demonstrated solidarity as we rose together to meet the international financial crisis in 2009. 
Let us carry forward this great spirit, build up consensus, strengthen unity and cooperation and explore a path of green and sustainable development. This benefits Asia. It benefits the world, too. In conclusion, I wish this annual conference of the Boao Forum for Asia a complete success.
<urn:uuid:648ee2b5-f8cd-4273-8ab0-29206d637638>
CC-MAIN-2013-20
http://news.xinhuanet.com/english2010/china/2010-04/11/c_13245754_2.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.936942
1,357
2.96875
3
[ "climate", "nature" ]
{ "climate": [ "climate change", "co2", "greenhouse gas" ], "nature": [ "conservation" ] }
{ "strong": 4, "weak": 0, "total": 4, "decision": "accepted_strong" }
Discovered in 1988, the Roman Hippodrome in Beirut is situated in Wadi Abou Jmil, next to the newly renovated Jewish Synagogue in Downtown Beirut. This monument, dating back thousands of years, is now at risk of being destroyed. The hippodrome is considered, along with the Roman Road and Baths, one of the most important remaining relics of the Byzantine and Roman era. It spreads over a total area of 3,500 m². Requests for construction projects on the hippodrome's site have been ongoing since the monument's discovery but were consistently refused by former ministers of culture, among them Tarek Metri, Tamam Salam and Salim Warde. In fact, Tamam Salam had even issued a decree banning any work on the hippodrome's site, effectively protecting it by law. Salim Warde did not contest the decree. The current minister of culture, Gabriel Layoun, has now authorized construction to commence. When it comes to ancient sites in cities that have lots of them, such as Beirut, the approach currently adopted toward these sites is called a "mitigation approach," which requires that incorporating the monuments into modern plans not affect those monuments in any way whatsoever. Minister Layoun's approval does not demand that such an approach be adopted. The monument will have one of its main walls dismantled and taken out of location. Why? To build a fancy new high-rise instead. Minister Layoun sees nothing wrong with this. In fact, displacing ruins is never done except under some extreme circumstances. Apparently, whatever Solidere has in store for the land counts as an "extreme circumstance." The Roman Hippodrome in downtown Beirut is considered one of the best preserved not only in Lebanon but in the world. It is also the fifth to be discovered in the Middle East. In fact, a report (Arabic) by the General Director of Ruins in Lebanon, Frederick Al Husseini, spoke about the importance of the monument as one that has been discussed in various ancient books. 
It has also been linked to Beirut's famous ancient Law School. He speaks about the various structures that are still preserved and need only some restoration to be fully exposed. He called the monument a highly important site for Lebanon and the world, one of Beirut's main facilities from the Byzantine and Roman eras, and suggested working to preserve it and make it one of Beirut's important cultural and touristic locations. His report dates back to 2008. MP Michel Aoun, the head of the party to which Gabriel Layoun belongs, defended his minister's position by saying: "there are a lot of discrepancies between Solidere and us. Therefore, a minister from our party cannot be subjected to Solidere. Minister Layoun found a way, which is adopted internationally, to incorporate ancient sites with newer ones… So I hope that media outlets do not discuss this issue in a way that would raise suspicion." With all due respect to Mr. Aoun and his minister, endangering Beirut's culture and stripping away even more of the identity that makes it Beirut is not something that should be left to him or to Solidere. What's happening is a cultural crime against the entire Lebanese population, one where the interests of meaningless politicians become irrelevant. Besides, for a party that has been anti-Solidere for years, I find it highly hypocritical that they are allowing Solidere to dismantle the Roman Hippodrome. The conclusion: never has a hippodrome been dismantled and displaced anywhere in the world. Beirut's hippodrome will effectively become part of the parking garage of the high-rise to be built in its place. No mitigation approach will be adopted here. It is nothing but a diversion until people forget and plans go well underway in secrecy. But we cannot continue to be silent about this blatant persecution of our history. If there's anything we can do, it is to let the issue propagate as much as we can. 
There shouldn't be a Lebanese person in the 10,452 km² who remains clueless about any endangered monument, for that matter. Sadly enough, this goes beyond the hippodrome. We have become so accustomed to this reality that we've become very submissive: the ancient Phoenician port is well behind us, there is construction around the ancient Phoenician port of Tyre, and the city itself risks being removed from UNESCO's list of Cultural Heritage Sites. The land on which ancient monuments are built doesn't belong to Solidere, to the Ministry of Culture or to any other contractor – no matter how much they've paid to buy it. It belongs to the Lebanese people in their entirety. When you realize that of the 200 sites uncovered at Solidere, those that have remained intact can be counted on the fingers of one hand, the reality becomes haunting. It's about time we stood up for our rights. Beirut's hippodrome will not be destroyed.
<urn:uuid:996b9ec7-7a4b-4f7e-abcf-dc25c231bf57>
CC-MAIN-2013-20
http://stateofmind13.com/2012/03/18/save-beiruts-heritage-the-roman-hippodrome-to-be-demolished/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.966408
1,063
2.921875
3
[ "nature" ]
{ "climate": [], "nature": [ "restoration" ] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
Not long ago, cougar sightings in Louisiana were considered figments of the imagination of spooked hunters, hikers and others in the outdoors. A citizen sent the Louisiana Department of Wildlife and Fisheries (LDWF) a trail camera picture taken Aug. 13, 2011. LDWF Large Carnivore Program Manager Maria Davidson and biologist Brandon Wear conducted a site investigation that confirmed the authenticity of the photograph. "It is quite possible for this animal to be captured on other trail cameras placed at deer bait sites," Davidson said. "Deer are the primary prey item for cougars; therefore, they are drawn to areas where deer congregate." It is unlikely this cougar will remain in any one area longer than it would take to consume a kill. Cougars prefer not to eat spoiled meat and will move on as soon as the Louisiana heat and humidity take their toll on the kill. "It is impossible to determine if the animal in the photograph is a wild, free-ranging cougar, or an escaped captive," Davidson added. "Although it is illegal to own a cougar in Louisiana, it is possible that there are some illegally held 'pets' in the state." LDWF has documented several occurrences since 2002. The first cougar sighting was in 2002 by an employee at Lake Fausse Pointe State Park. That sighting was later confirmed with DNA analysis from scat found at the site. Three trail camera photos were taken of a cougar in Winn, Vernon and Allen parishes in 2008. Subsequently, on Nov. 30, 2008, a cougar was shot and killed in a neighborhood by the Bossier City Police Department. Mountain lion, cougar, panther and puma are names that all refer to the same animal. Their color ranges from lighter tan to brownish grey. The only species of big cats that occur as black are the jaguar and leopard. Jaguars are native to South America and leopards are native to Africa. Both species can occur as spotted or black, although in both cases the spotted variety is much more common. 
Although LDWF receives numerous calls about black panthers, there has never been a documented case of a black cougar anywhere in North America. The vast majority of these reports received by LDWF cannot be verified due to the very nature of a sighting. Many of the calls are determined to be cases of mistaken identity, with dog tracks making up the majority of the evidence submitted by those reporting cougar sightings. Other animals commonly mistaken for cougars are bobcats and house cats, usually seen from a distance or in varying shades of light. The significant lack of physical evidence indicates that Louisiana does not have an established, breeding population of cougars. In states that have verified small populations of cougars, physical evidence can readily be found in the form of tracks, cached deer kills, scat and road kills. The recent sightings of cougars in Louisiana are believed to be young animals dispersing from existing populations. An expanding population in Texas can produce dispersing individual cougars that move into suitable habitat in Louisiana. Young males are known to disperse from their birthplace and travel hundreds of miles seeking their own territories. Cougars that occur in Louisiana are protected under state and federal law. Penalties for taking a cougar in Louisiana may include up to one year in jail and/or a $100,000 fine. Anyone with any information regarding the taking of a cougar should call the Operation Game Thief hotline at 1-800-442-2511. Callers may remain anonymous and may receive a cash reward.
<urn:uuid:837ed0bb-0bd1-432c-9baf-a4a8b88595fc>
CC-MAIN-2013-20
http://stmarynow.com/view/full_story/15264141/article-Images-confirm-cougar-presence-in-Vernon-Parish?instance=secondary_stories_left_column
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704392896/warc/CC-MAIN-20130516113952-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.963177
736
2.671875
3
[ "nature" ]
{ "climate": [], "nature": [ "habitat" ] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
by Gerry Everding St. Louis MO (SPX) Feb 12, 2013 Nominated early this year for recognition on the UNESCO World Heritage List, which includes such famous cultural sites as the Taj Mahal, Machu Picchu and Stonehenge, the earthen works at Poverty Point, La., have been described as one of the world's greatest feats of construction by an archaic civilization of hunters and gatherers. Now, new research in the current issue of the journal Geoarchaeology offers compelling evidence that one of the massive earthen mounds at Poverty Point was constructed in less than 90 days, and perhaps as quickly as 30 days - an incredible accomplishment for what was thought to be a loosely organized society consisting of small, widely scattered bands of foragers. "What's extraordinary about these findings is that it provides some of the first evidence that early American hunter-gatherers were not as simplistic as we've tended to imagine," says study co-author T.R. Kidder, PhD, professor and chair of anthropology in Arts and Sciences at Washington University in St. Louis. "Our findings go against what has long been considered the academic consensus on hunter-gatherer societies - that they lack the political organization necessary to bring together so many people to complete a labor-intensive project in such a short period." Co-authored by Anthony Ortmann, PhD, assistant professor of geosciences at Murray State University in Kentucky, the study offers a detailed analysis of how the massive mound was constructed some 3,200 years ago along a Mississippi River bayou in northeastern Louisiana. Based on more than a decade of excavations, core samples and sophisticated sedimentary analysis, the study's key assertion is that Mound A at Poverty Point had to have been built in a very short period because an exhaustive examination reveals no signs of rainfall or erosion during its construction. 
"We're talking about an area of northern Louisiana that now tends to receive a great deal of rainfall," Kidder says. "Even in a very dry year, it would seem very unlikely that this location could go more than 90 days without experiencing some significant level of rainfall. Yet, the soil in these mounds shows no sign of erosion taking place during the construction period. There is no evidence from the region of an epic drought at this time, either." Part of a much larger complex of earthen works at Poverty Point, Mound A is believed to be the final and crowning addition to the sprawling 700-acre site, which includes five smaller mounds and a series of six concentric C-shaped embankments that rise in parallel formation surrounding a small flat plaza along the river. At the time of construction, Poverty Point was the largest earthworks in North America. Built on the western edge of the complex, Mound A covers about 538,000 square feet [roughly 50,000 square meters] at its base and rises 72 feet above the river. Its construction required an estimated 238,500 cubic meters - about eight million bushel baskets - of soil to be brought in from various locations near the site. Kidder figures it would take a modern, 10-wheel dump truck about 31,217 loads to move that much dirt today. "The Poverty Point mounds were built by people who had no access to domesticated draft animals, no wheelbarrows, no sophisticated tools for moving earth," Kidder explains. "It's likely that these mounds were built using a simple 'bucket brigade' system, with thousands of people passing soil along from one to another using some form of crude container, such as a woven basket, a hide sack or a wooden platter." To complete such a task within 90 days, the study estimates it would require the full attention of some 3,000 laborers. 
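The figures quoted above can be sanity-checked with quick arithmetic. This is only a sketch: the total volume, load count, crew size and 90-day window come from the article, while the derived per-load and per-worker rates are my own back-of-the-envelope calculations, not values from the study.

```python
# Sanity check of the Mound A construction figures quoted in the article.
# Source numbers: 238,500 m^3 of fill, ~31,217 dump-truck loads,
# ~3,000 laborers, and a 90-day upper bound on construction time.
# The two derived rates below are illustrative, not from the study.

TOTAL_VOLUME_M3 = 238_500  # estimated fill volume of Mound A
TRUCK_LOADS = 31_217       # Kidder's modern 10-wheel dump-truck equivalent
WORKERS = 3_000            # estimated labor force
DAYS = 90                  # upper bound on construction time

# Implied capacity of one truck load, in cubic meters.
m3_per_load = TOTAL_VOLUME_M3 / TRUCK_LOADS

# Implied daily output per laborer in a bucket-brigade system.
m3_per_worker_day = TOTAL_VOLUME_M3 / (WORKERS * DAYS)

print(f"{m3_per_load:.1f} m^3 per truck load")
print(f"{m3_per_worker_day:.2f} m^3 per worker-day")
```

The implied rates (roughly 7.6 m³ per truck load and under 1 m³ of soil moved per worker per day) are internally consistent, which is why the 90-day, 3,000-laborer scenario is physically plausible.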
Assuming that each worker may have been accompanied by at least two other family members, say a wife and a child, the community gathered for the build must have included as many as 9,000 people, the study suggests. "Given that a band of 25-30 people is considered quite large for most hunter-gatherer communities, it's truly amazing that this ancient society could bring together a group of nearly 10,000 people, find some way to feed them and get this mound built in a matter of months," Kidder says. Soil testing indicates that the mound is located on top of land that was once low-lying swamp or marsh land - evidence of ancient tree roots and swamp life still exists in undisturbed soils at the base of the mound. Tests confirm that the site was first cleared for construction by burning and quickly covered with a layer of fine silt soil. A mix of other heavier soils was then brought in and dumped in small adjacent piles, gradually building the mound layer upon layer. As Kidder notes, previous theories about the construction of most of the world's ancient earthen mounds have suggested that they were laid down slowly over a period of hundreds of years through small contributions of material from many different people spanning generations of a society. While this may be the case for other earthen structures at Poverty Point, the evidence from Mound A offers a sharp departure from this accretional theory. Kidder's home base in St. Louis is just across the Mississippi River from one of America's best known ancient earthen structures, Monks Mound at Cahokia, Ill. He notes that Monks Mound was built many centuries later than the mounds at Poverty Point by a civilization that was much more reliant on agriculture, a far cry from the hunter-gatherer group that built Poverty Point. Even so, Mound A at Poverty Point is much larger than almost any other mound found in North America; only Monks Mound at Cahokia is larger. 
"We've come to realize that the social fabric of these societies must have been much stronger and more complex than we might previously have given them credit. These results contradict the popular notion that pre-agricultural people were socially, politically, and economically simple and unable to organize themselves into large groups that could build elaborate architecture or engage in so-called complex social behavior," Kidder says. "The prevailing model of hunter-gatherers living a life 'nasty, brutish and short' is contradicted, and our work indicates these people were practicing a sophisticated ritual/religious life that involved building these monumental mounds." Washington University in St. Louis
<urn:uuid:a5058d3c-2691-4aef-862f-88a3935a760d>
CC-MAIN-2013-20
http://www.terradaily.com/reports/Archaic_Native_Americans_built_massive_Louisiana_mound_in_less_than_90_days_999.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.966482
1,459
2.9375
3
[ "climate" ]
{ "climate": [ "drought" ], "nature": [] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
By Pauline Hammerbeck It's been a doozy of a wildfire season (Colorado's most destructive ever), leaving homeowners wondering what safety measures they can put in place to stave off flames in the event of a fire in their own neighborhood. Landscaping, it turns out, can be an important measure in wildfire protection. But fire-wise landscaping isn't just something for those dwelling on remote Western hilltops. Brush, grass and forest fires occur nearly everywhere in the United States, says the National Fire Protection Association. Here's how your landscaping can help keep you safe. Create 'defensible' space Most homes that burn during a wildfire are ignited by embers landing on the roof, gutters, and on decks and porches. So your first point of action should be creating a defensible space, a buffer zone around your home, to reduce sources of fuel. Start by keeping the first 3 to 5 feet around your home free of all flammable materials and vegetation: plants, shrubs, trees and grasses, as well as bark and other organic mulches should all be eliminated (a neat perimeter of rock mulch or a rock garden can be a beautiful thing). Maintenance is also important: - Clear leaves, pine needles and other debris from roofs, gutters and eaves - Cut back tree branches that overhang the roof - Clear debris from under decks, porches and other structures Moving farther from the house, you might consider adding hardscaping - driveways, patios, walkways, gravel paths, etc. These features add visual interest, but they also maintain a break between vegetation and your home in the event of a fire. 
Some additional tasks to consider in the first 100 feet surrounding your home: - Thin out trees and shrubs (particularly evergreens) within 30 feet - Trim low tree branches so they're a minimum of 6 feet off the ground - Mow lawn regularly and dispose of clippings and other debris promptly - Move woodpiles to a space at least 30 feet from your home Use fire-resistant plants Populating your landscape with plants that are resistant to fire can also be an important tactic. Look for low-growing plants that have thick leaves (a sign that they hold water), extensive root systems and the ability to withstand drought. This isn't as limiting as it sounds. Commonly used hostas, butterfly bushes and roses are all good choices. And there are plenty of fire-resistant plant lists to give you ideas on what to pick. Where and how you plant can also have a dramatic effect on fire behavior. The plants nearest your home should be smaller and more widely spaced than those farther away. Be sure to use a variety of plant types, which reduces disease and keeps the landscape healthy and green. Plant in small clusters - create a garden island, for instance, by surrounding a group of plantings with a rock perimeter - and use rock mulch to conserve moisture. Maintain accessible water sources Wildfires present a special challenge to local fire departments, so it's in your interest to be able to access or maintain an emergency water supply - particularly if you're in a remote location. At a minimum, keep 100 feet of garden hose attached to a spigot (if your water comes from a well, consider an emergency generator to operate the pump during a power failure). But better protection can come from the installation of a small pond, cistern or, if budget allows, a swimming pool. Good planning and a bit of elbow grease have a big hand in wildfire safety. In a year with record heat and drought, looking over your landscape with a firefighter's eye can offer significant peace of mind. 
Guest blogger Pauline Hammerbeck is an editor for the Allstate Blog, which helps people prepare for the unpredictability of life. Note: The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinion or position of Zillow.
<urn:uuid:dbe77f52-384c-4c40-a487-84aae16a1d76>
CC-MAIN-2013-20
http://www.gloucestertimes.com/real_estate_news/x2068758245/How-to-Landscape-Your-Home-for-Fire-Safety
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00001-ip-10-60-113-184.ec2.internal.warc.gz
en
0.939375
854
2.578125
3
[ "climate" ]
{ "climate": [ "drought" ], "nature": [] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
Upland Bird Regional Forecast When considering upland game population levels during the fall hunting season, two important factors impact population change. The first is the number of adult birds that survived the previous fall and winter and are considered viable breeders in the spring. The second is the reproductive success of this breeding population. Reproductive success consists of nest success (the number of nests that successfully hatched) and chick survival (the number of chicks recruited into the fall population). For pheasant and quail, annual population turnover is relatively high; therefore, the fall population is more dependent on reproductive success than on breeding population levels. For grouse (prairie chickens), annual population turnover is not as rapid, although reproductive success is still the major population regulator and important for good hunting. In the following forecast, the breeding populations and reproductive success of pheasants, quail, and prairie chickens will be discussed. Breeding population data were gathered during spring breeding surveys for pheasants (crow counts), quail (whistle counts), and prairie chickens (lek counts). Data for reproductive success were collected during late summer roadside surveys for pheasants and quail. Reproductive success of prairie chickens cannot be easily assessed using the same methods because they generally do not associate with roads the way the other game birds do. Kansas experienced extreme drought this past year. Winter weather was mild, but winter precipitation is important for spring vegetation, which can impact reproductive success, and most of Kansas did not get enough winter precipitation. Pheasant breeding populations showed significant reductions in 2012, especially in primary pheasant range in western Kansas. 
Spring came early and hot this year, with fair moisture until early May; then the precipitation stopped, and Kansas experienced record heat and drought through the rest of the reproductive season. Early nesting conditions were generally good for prairie chickens and pheasants. However, the primary nesting habitat for pheasants in western Kansas is winter wheat, and in 2012, Kansas had one of the earliest wheat harvests on record. Wheat harvest can destroy nests and very young broods. The early harvest likely lowered pheasant nest and early brood success. The intense heat and lack of rain in June and July reduced brooding cover and insect populations, causing lower chick survival for all upland game birds. Because of drought, all counties in Kansas were opened to Conservation Reserve Program (CRP) emergency haying or grazing. CRP emergency haying requires hayed fields to leave at least 50 percent of the field in standing grass cover. CRP emergency grazing requires 25 percent of the field (or contiguous fields) to be left ungrazed, or grazing at 75 percent of normal stocking rates across the entire field. Many CRP fields, including Walk-In Hunting Areas (WIHA), may be affected across the state. WIHA property is privately owned land open to the public for hunting access. Kansas has more than one million acres of WIHA. Older stands of CRP grass are often in need of disturbance, and haying and grazing can improve habitat for the upcoming breeding season and may ultimately be beneficial if weather is favorable. Due to continued drought, Kansas will likely experience a below-average upland game season this fall. For those willing to hunt hard, there will still be pockets of decent bird numbers, especially in the northern Flint Hills and northcentral and northwestern parts of the state. Kansas has approximately 1.5 million acres open to public hunting (wildlife areas and WIHA combined). 
The regular opening date for the pheasant and quail seasons will be Nov. 10 for the entire state. The previous weekend will be designated for the special youth pheasant and quail season. Youth participating in the special season must be 16 years old or younger and accompanied by a non-hunting adult who is 18 or older. All public wildlife areas and WIHA tracts will be open for public access during the special youth season. Please consider taking a young person hunting this fall, so they might have the opportunity to develop a passion for the outdoors that we all enjoy. PHEASANT – Drought in 2011 and 2012 has taken its toll on pheasant populations in Kansas. Pheasant breeding populations dropped by nearly 50 percent or more across pheasant range from 2011 to 2012 resulting in fewer adult hens in the population to start the 2012 nesting season. The lack of precipitation has resulted in less cover and insects needed for good pheasant reproduction. Additionally, winter wheat serves as a major nesting habitat for pheasants in western Kansas, and a record early wheat harvest this summer likely destroyed many nests and young broods. Then the hot, dry weather set in from May to August, the primary brood-rearing period for pheasants. Pheasant chicks need good grass and weed cover and robust insect populations to survive. Insufficient precipitation and lack of habitat and insects throughout the state’s primary pheasant range resulted in limited production. This will reduce hunting prospects compared to recent years. However, some good opportunities still exist to harvest roosters in the sunflower state, especially for those willing to work for their birds. Though the drought has taken its toll, Kansas still contains a pheasant population that will produce a harvest in the top three or four major pheasant states this year. The best areas this year will likely be pockets of northwest and northcentral Kansas. 
Populations in southwest Kansas were hit hardest by the 2011-2012 drought (a 72-percent decline in breeding population), and a very limited amount of production occurred this season due to continued drought and limited breeding populations. QUAIL – The bobwhite breeding population in 2012 was generally stable or improved compared to 2011. Areas in the northern Flint Hills and parts of northeast Kansas showed much improved productivity this year. Much of eastern Kansas has seen consistent declines in quail populations in recent decades. After many years of depressed populations, this year’s rebound in quail reproduction in eastern Kansas is welcomed, but overall populations are still below historic averages. The best quail hunting will be found throughout the northern Flint Hills and parts of central Kansas. Prolonged drought undoubtedly impacted production in central and western Kansas. PRAIRIE CHICKEN – Kansas is home to greater and lesser prairie chickens. Both species require a landscape of predominantly native grass. Lesser prairie chickens are found in westcentral and southwestern Kansas in native prairie and nearby stands of native grass within the Conservation Reserve Program (CRP). Greater prairie chickens are found primarily in the tallgrass and mixed-grass prairies in the eastern one-third and northern one-half of the state. The spring prairie chicken lek survey indicated that most populations remained stable or declined from last year. Declines were likely due to extreme drought throughout 2011. Areas of northcentral and northwest Kansas fared the best, while areas in southcentral and southwest Kansas, where drought was most severe, experienced the sharpest declines. Many areas in the Flint Hills were not burned this spring due to drought. This resulted in far more residual grass cover and much improved nesting conditions compared to recent years. 
There have been some reports of prairie chicken broods in these areas, and hunting will likely be somewhat improved compared to recent years. Because of recent increases in prairie chicken (both species) populations in northwest Kansas, regulations have been revised this year. The early prairie chicken season (Sept. 15-Oct. 15) and two-bird bag limit have been extended into northwest Kansas. The northwest unit boundary has also been revised to include areas north of U.S. Highway 96 and west of U.S. Highway 281. Additionally, all prairie chicken hunters are now required to purchase a $2.50 prairie chicken permit. This permit will allow KDWPT to better track hunters and harvest, which will improve management activities. Both species of prairie chicken are of conservation concern, and the lesser prairie chicken is a candidate species for federal listing under the Endangered Species Act. This region has 11,809 acres of public land and 339,729 acres of WIHA open to hunters this fall. Pheasant – Spring breeding populations declined almost 50 percent from 2011 to 2012, reducing fall population potential. Early nesting conditions were decent due to good winter wheat growth, but early wheat harvest and severe heat and drought through the summer reduced populations. While this resulted in a significant drop in pheasant numbers, the area will still have the highest densities of pheasants this fall compared to other areas in the state. Some counties — such as Graham, Rawlins, Decatur, and Sherman — showed the highest relative densities of pheasants during summer brood surveys. Much of the cover will be reduced compared to previous years due to drought and the resulting emergency haying and grazing in CRP fields. Good hunting opportunities will also be reduced compared to recent years, and harvest will likely be below average. Quail – Populations in this region have been increasing in recent years, although the breeding population had a slight decline. 
This area is at the extreme northwestern edge of bobwhite range in Kansas, and densities are relatively low compared to central Kansas. Some counties — such as Graham, Rawlins, and Decatur — will provide hunting opportunities for quail. Prairie Chicken – Prairie chicken populations have expanded in both numbers and range within the region over the past 20 years. The better hunting opportunities will be found in the central and southeastern portions of the region in native prairies and nearby CRP grasslands. Spring lek counts in that portion of the region were slightly depressed from last year and nesting conditions were only fair this year. Extreme drought likely impaired chick survival. This region has 75,576 acres of public land and 311,182 acres of WIHA open to hunters this fall. Pheasant – The Smoky Hills breeding population dropped about 40 percent from 2011 to 2012, reducing overall fall population potential. While nesting conditions were fair due to good winter wheat growth, the drought and early wheat harvest impacted the number of young recruited into the fall population. Certain areas had decent brood production, including portions of Mitchell, Rush, Rice, and Cloud counties. Across the region, hunting opportunities will likely be below average and definitely reduced from recent years. CRP was opened to emergency haying and grazing, reducing available cover. Quail – Breeding populations increased nearly 60 percent from 2011 to 2012, increasing fall population potential. However, drought conditions were severe, likely impairing nesting and brood success. There are reports of fair quail numbers in certain areas throughout the region. Quail populations in northcentral Kansas are naturally spotty due to habitat characteristics. Some areas, such as Cloud County, showed good potential while other areas in the more western edges of the region did not fare as well. 
Prairie Chicken – Greater prairie chickens occur throughout the Smoky Hills in large areas of native rangeland and some CRP. This region includes some of the highest densities and greatest hunting opportunities in the state for greater prairie chickens. Spring counts indicated that numbers were stable or slightly reduced from last year. Much of the rangeland cover is significantly reduced due to drought, which likely impaired production, resulting in reduced fall hunting opportunities. This region has 60,559 acres of public land and 54,170 acres of WIHA open to hunters this fall. Pheasant – Spring crow counts this year showed a significant increase in breeding populations of pheasants. While this increase is welcome, this region was nearing all-time lows in 2011. Pheasant densities across the region are still low, especially compared to other areas in western Kansas. Good hunting opportunities will exist in only a few pockets of good habitat. Quail – Breeding populations stayed relatively the same as last year, and some quail were detected during the summer brood survey. The long-term trend for this region has been declining, largely due to unfavorable weather and degrading habitat. This year saw an increase in populations. Hunting opportunities for quail will be improved this fall compared to recent years in this region. The best areas will likely be in Marshall and Jefferson counties. Prairie Chicken – Very little prairie chicken range occurs in this region, and opportunities are limited. The best areas are in the western edges of the region, in large areas of native rangeland. This region has 80,759 acres of public land and 28,047 acres of WIHA open to hunters this fall. Pheasant – This region is outside the primary pheasant range and has very limited hunting. A few birds can be found in the northwestern portion of the region. Quail – Breeding populations were relatively stable from 2011 to 2012 for this region, although long-term trends have been declining. 
In the last couple of years, the quail populations throughout much of the region have been on the increase. Specific counties that showed relatively higher numbers are Coffey, Osage, and Wilson. However, populations remain far below historic levels across the bulk of the region due to extreme habitat degradation. Prairie Chicken – Greater prairie chickens occur in the central and northwest parts of this region in large areas of native rangeland. Breeding population densities were up nearly 40 percent from last year, and opportunities may increase accordingly. However, populations have been in consistent decline over the long term. Infrequent fire has resulted in woody encroachment of native grasslands in the area, gradually reducing the amount of suitable habitat. This region has 128,371 acres of public land and 63,069 acres of WIHA open to hunters this fall. Pheasant – This region is on the eastern edge of pheasant range in Kansas and well outside the primary range. Pheasant densities have always been relatively low throughout the Flint Hills. Spring breeding populations were down nearly 50 percent, and reproduction was limited this summer. The best pheasant hunting will be in the northwestern edge of this region in Marion and Dickinson counties. Quail – This region contains some of the highest densities of bobwhite in Kansas. The breeding population in this region increased 25 percent compared to 2011, and the long-term trend (since 1998) has been stable due to steadily increasing populations over the last four or five years. High reproductive success was reported in the northern half of this region, and some of the best opportunities for quail hunting will be found in the northern Flint Hills this year. In the south, Cowley County showed good numbers of quail this summer. Prairie Chickens – The Flint Hills is the largest intact tallgrass prairie left in North America. It has served as a core habitat for greater prairie chickens for many years. 
Since the early 1980s, inadequate range burning frequencies have consistently reduced nest success in the area, and prairie chicken numbers have been declining as a result. Because of the drought this spring, many areas that are normally burned annually were left unburned this year. This left more residual grass cover for nesting and brood rearing. There are some good reports of prairie chicken broods, and hunting opportunities will likely increase throughout the region this year. This region has 19,534 acres of public land and 73,341 acres of WIHA open to hunters this fall. Pheasant – The breeding population declined about 40 percent from 2011 to 2012. Prolonged drought over the past two years and very poor vegetation conditions resulted in poor reproductive success this year. All summer indices showed a depressed pheasant population in this region, especially compared to other regions. Some of the relatively better counties in this area will be Reno, Pawnee, and Pratt, although these counties have not been immune to recent declines. There will likely be few good hunting opportunities this fall. Quail – The breeding population dropped over 30 percent this year from 2011, although long-term trends (since 1998) have been stable in this region. This region generally has some of the highest quail densities in Kansas, but prolonged drought and reduced vegetation have caused significant declines in recent years. Counties such as Reno, Pratt, and Stafford will likely have the best opportunities in the region. While populations may be down compared to recent years, this region will continue to provide fair hunting opportunities for quail. Prairie Chicken – This region is almost entirely occupied by lesser prairie chickens. The breeding population declined nearly 50 percent from 2011 to 2012. Reproductive conditions were not good for the region due to extreme drought and heat over the last two years, and production was limited. 
The best hunting opportunities will likely be in the sand prairies south of the Arkansas River. This region has 2,904 acres of public land and 186,943 acres of WIHA open to hunters this fall. Pheasant – The breeding population plummeted more than 70 percent in this region from 2011 to 2012. Last year was one of the worst on record for pheasant reproduction. However, last fall there were some carry-over (second-year) roosters from a record-high season in 2010. Those carry-over birds are mostly gone now, which will hurt hunting opportunities this fall. Although reproduction was slightly improved from 2011, chick recruitment was still fair to below average this summer due to continued extreme drought conditions. Moreover, there were not enough adult hens in the population yet to make a significant rebound. Generally, hunting opportunity will remain well below average in this region. Haskell and Seward counties showed some improved reproductive success, especially compared to other counties in the region. Quail – The breeding population in this region tends to be highly variable depending on available moisture and resulting vegetation. The region experienced an increase in breeding populations from 2011 to 2012, although 2011 was a record low for the region. While drought likely held back production, the weather was better than last year, and some reproduction occurred. Indices are still well below average for the region. There will be some quail hunting opportunities in the region, although good areas will be sparse. Prairie Chicken – While breeding populations in the eastern parts of this region were generally stable or increasing, the extreme western and southwestern portions (Cimarron National Grasslands) saw nearly 30-percent declines last year and 65-percent declines this year. Drought remained extreme in this region, and reproductive success was likely very low. Hunting opportunities in this region will be extremely limited this fall.
Multispectral and Hyperspectral Remote Sensing Techniques for Natural Gas Transmission Infrastructure Systems

The goal is to help maintain the nation's natural gas transmission infrastructure through the timely and effective detection of natural gas leaks via evaluation of geobotanical stress signatures. The remote sensing techniques being developed employ advanced spectrometer systems that produce visible and near-infrared reflected-light images with spatial resolution of 1 to 3 meters in 128 wavelength bands. This allows for the discrimination of individual species of plants as well as geological and man-made objects, and permits the detection of biological impacts of methane leaks or seepages in large, complicated areas. The techniques employed do not require before-and-after imagery because they use the spatial patterns of plant species and health variations present in a single image to distinguish leaks. Also, these techniques should allow discrimination between the effects of small leaks and the damage caused by human incursion or natural factors such as storm runoff, landslides, and earthquakes. Because plants in an area can accumulate doses of leaked materials, species spatial patterns can record time-integrated effects of leaked methane. This can be important in finding leaks that would otherwise be hard to detect by direct observation of methane concentrations in the air. This project is developing remote sensing methods of detecting, discriminating, and mapping the effects of natural gas leaks from underground pipelines. The current focus is on the effects that increased methane soil concentrations, created by the leaks, have on plants: extreme soil CH4 concentrations are associated with plant sickness and even death. 
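As a rough illustration of this single-image approach, the sketch below computes a simple vegetation index per pixel and flags pixels that deviate strongly from the scene-wide baseline, with no before-and-after comparison. This is a minimal stand-in for the far richer species-and-health mapping the project performed in ENVI; the band indices, the choice of NDVI as the stress index, and the 2-sigma threshold are all illustrative assumptions, not details from the project.

```python
import numpy as np

def ndvi(cube, red_band, nir_band):
    """Normalized Difference Vegetation Index per pixel.

    cube: (rows, cols, bands) reflectance array from a hyperspectral scanner.
    red_band / nir_band: indices of the red and near-infrared bands
    (hypothetical indices; real band centers depend on the sensor).
    """
    red = cube[:, :, red_band].astype(float)
    nir = cube[:, :, nir_band].astype(float)
    return (nir - red) / (nir + red + 1e-9)  # small epsilon avoids /0

def stress_anomalies(index, n_sigma=2.0):
    """Flag pixels whose vegetation index falls well below the scene mean,
    using only a single image (no before/after comparison)."""
    mu, sigma = index.mean(), index.std()
    return index < (mu - n_sigma * sigma)

# Toy 4x4 scene with 2 bands: band 0 = red, band 1 = NIR.
cube = np.stack([np.full((4, 4), 0.1), np.full((4, 4), 0.5)], axis=-1)
cube[1, 1, 1] = 0.05          # one "stressed" pixel: NIR reflectance collapses
idx = ndvi(cube, red_band=0, nir_band=1)
print(stress_anomalies(idx).sum())  # -> 1
```

In practice the project exploited spatial patterns within individual plant species, so a per-species baseline, rather than the whole-scene mean used here, would be closer to the described method.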
Similar circumstances have been observed and studied in the effects of excessive CO2 soil concentrations at Mammoth Mountain near Mammoth Lakes, California, USA. At the Mammoth Mountain site, the large CO2 soil concentrations are due to the volcanic rumblings of the magma still active below Mammoth Mountain. At more subtle levels, this research has been able to map, using airborne hyperspectral imagery, tree and plant stress over all of Mammoth Mountain. These plant stress maps match, and greatly extend into surrounding regions, the on-ground CO2 emission mapping done by the USGS in Menlo Park, California. In addition, vegetation health mapping along with altered-mineralization mapping at Mammoth Mountain reveals subtle hidden faults. These hidden faults are pathways for potential CO2 leaks, at least near the surface, over the entire region. The methods being developed use airborne hyperspectral and multispectral high-resolution imagery and very high resolution (0.6 meter) satellite imagery. The team has identified and worked with commercial providers of both airborne hyperspectral imagery acquisitions and high-resolution satellite imagery acquisitions. Both offer competent image data post-processing, so that eventually the ongoing surveillance of pipeline corridors can be contracted for commercially. Current work under this project is focused on detecting and quantifying natural gas pipeline leaks using hyperspectral imagery from airborne or satellite-based platforms through evaluation of plant stress. Lawrence Livermore National Laboratory (LLNL) – project management and research products. NASA Ames – development of the UAV platform used to carry the hyperspectral payload. HyVista Corporation – development and operation of the HyMap hyperspectral sensor. The use of geobotanical plant stress signatures from hyperspectral imagery potentially offers a unique means of detecting and quantifying natural gas leaks from the U.S. 
pipeline infrastructure. The method holds the potential to cover large expanses of pipeline with minimal manual effort, thus reducing the likelihood that a leak would go undetected. By increasing the effectiveness and efficiency of leak detection, the amount of gas leaked from a site can be reduced, resulting in decreased environmental impact from fugitive emissions, increased safety and reliability of gas delivery, and an increase in overall available gas, as less product is lost from the lines. The method chosen for testing these techniques was to image the area surrounding known gas pipeline leaks. After receiving notice and location information for a newly discovered leak from research collaborator Pacific Gas and Electric (PG&E), researchers determined the area above the buried pipeline to be scanned, including some surrounding areas thought to be outside the influence of any methane that might percolate to within root depth of the surface. Flight lines were designed for the airborne acquisition program, and researchers used a global positioning system (GPS) and digital cameras to visually record the soils, plants, minerals, waters, and manmade objects in the area while the airborne imagery was acquired. After the airborne imagery set for all flight lines was received (including raw data, data corrected to reflectance including atmospheric absorptions, and georectification control files), the data was analyzed using commercial computer software (ENVI) by a team of researchers at the University of California, Santa Cruz (UCSC), Lawrence Livermore National Laboratory (LLNL), and one of the acquisition contractors. - Created an advanced Geographic Information System (GIS) that will be able to provide dynamic integration of airborne imagery, satellite imagery, and other GIS information to monitor pipelines for geobotanical leak signatures. 
- Used the software to integrate hyperspectral imagery, high-resolution satellite imagery, and digital elevation models of the area around a known gas leak to determine if evidence of the leak could be resolved. - Helped develop a hyperspectral imagery payload for use on an unmanned aerial vehicle developed by NASA-Ames. - Participated in the DOE-NETL sponsored natural gas pipeline leak detection demonstration in Casper, Wyoming on September 13-17, 2004, using both the UAV hyperspectral payload (~1,000 ft) and the HyVista hyperspectral platform (~5,000 ft) to survey for plant stress. Researchers used several different routines available within the ENVI program suite to produce “maps” of plant species types, plant health within species types, soil types, soil conditions, water bodies, water contents such as algae or sediments, mineralogy of exposed formations, and manmade objects. These maps were then studied for relative plant health patterns, altered mineral distributions, and other categories. The researchers then returned to the field to verify and further understand the mappings, fine-tune the results, and produce more accurate maps. Since the maps are georectified and the pixel size is 3 meters, individual objects can all be located using the maps and a handheld GPS. These detailed maps show areas of existing anomalous conditions such as plant kills, linear species modifications caused by subtle hidden faults, and modifications of the terrain due to pipeline work or encroachment. They are also the “baseline” that can be used to chart any future changes by re-imaging the area routinely to monitor and document any effects caused by significant methane leakage. The sensors used for image acquisition are hyperspectral scanners, one of which provides 126 bands across the reflective solar wavelength region of 0.45 – 2.5 µm with contiguous spectral coverage (except in the atmospheric water vapor bands) and bandwidths between 15 – 20 nm. 
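As a quick arithmetic sanity check on that sensor specification (assuming the 0.45 – 2.5 range is in micrometers, which the 15 – 20 nm bandwidths require): 126 contiguous bands spanning that range imply an average band spacing that falls inside the stated bandwidth range.

```python
# 126 contiguous bands spanning 0.45-2.5 micrometers imply an average
# band spacing consistent with the stated 15-20 nm bandwidths.
span_nm = (2.5 - 0.45) * 1000   # spectral range converted to nanometers
n_bands = 126
spacing = span_nm / n_bands     # average spacing per band
print(round(spacing, 1))        # -> 16.3 (nm), within the 15-20 nm range
```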
This sensor operates on a 3-axis gyro-stabilized platform to minimize image distortion due to aircraft motion and provides a signal-to-noise ratio >500:1. Geo-location and image geo-coding are achieved with an on-board Differential GPS (DGPS) and an integrated IMU (inertial measurement unit). During a DOE-NETL sponsored natural gas leak detection demonstration at the National Petroleum Reserve 3 (NPR3) site of the Rocky Mountain Oilfield Testing Center (RMOTC) outside of Casper, Wyoming, the project utilized hyperspectral imaging of vegetation to sense plant stress related to the presence of natural gas on a simulated pipeline using actual natural gas releases. The spectral signature of sunlight reflected from vegetation was used to determine vegetation health. Two different platforms were used for imaging the virtual pipeline path: a Twin Otter aircraft flying at an altitude of about 5,000 feet above ground level that imaged the entire site in strips, and an unmanned aerial vehicle (UAV) flying at an altitude of approximately 1,000 feet above ground level that imaged an area surrounding the virtual pipeline. The manned hyperspectral imaging took place on two days: Wednesday, September 9, and Wednesday, September 15. The underground leaks were started on August 30 to allow time for the methane from the leaks to saturate the soils and produce plant stress by excluding oxygen from the plant root systems. On both days, the entire NPR3-RMOTC site was successfully imaged. At that time of year, the vegetation at NPR3-RMOTC was largely in hibernation; the exception was in the gullies, where there was some moisture. Therefore, the survey looked for unusually stressed plant “patches” in the gullies as possible leak points. Several spots, each several pixels in diameter, were found in the hyperspectral imagery with the spectral signature typical of sick vegetation, in locations in the gullies or ravines along the virtual pipeline route. 
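The search for stressed plant “patches” several pixels in diameter can be sketched as a connected-component pass over a boolean stress mask: flagged pixels are grouped into 4-connected patches, and isolated flagged pixels are discarded as noise. This is an illustrative reconstruction, not the project's actual ENVI workflow, and the 4-pixel minimum patch size is an assumed threshold.

```python
import numpy as np
from collections import deque

def leak_candidate_patches(stress_mask, min_pixels=4):
    """Group flagged pixels into 4-connected patches and keep those with
    at least `min_pixels` pixels; lone flagged pixels are treated as noise.
    (The size threshold is an assumption, not a value from the report.)"""
    seen = np.zeros_like(stress_mask, dtype=bool)
    rows, cols = stress_mask.shape
    patches = []
    for r in range(rows):
        for c in range(cols):
            if stress_mask[r, c] and not seen[r, c]:
                # Breadth-first flood fill over the 4-neighborhood.
                patch, queue = [], deque([(r, c)])
                seen[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    patch.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and stress_mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(patch) >= min_pixels:
                    patches.append(patch)
    return patches

mask = np.zeros((6, 6), dtype=bool)
mask[1:3, 1:3] = True   # a 2x2 stressed patch (4 pixels) -- a leak candidate
mask[5, 5] = True       # an isolated flagged pixel -- likely noise
print(len(leak_candidate_patches(mask)))  # -> 1
```

Only the 2x2 patch survives the size filter, mirroring how the survey prioritized multi-pixel stressed spots in the gullies over single-pixel anomalies.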
Due to the limited vegetation along the test route, detection of natural gas leaks through imaging of plant stress met with limited success. The technique did demonstrate an ability to show plant stress in areas near leak sites but was less successful in determining general leak severity from those results. In areas with much denser vegetation coverage and less dormant plant life, the method still shows promise. [Figures: airborne hyperspectral imagery unit – close-up; airborne hyperspectral imagery unit mounted on the plane] Overall results from the DOE-NETL sponsored natural gas leak detection demonstration can be found in the demonstration final report [PDF-7370KB]. Current Status and Remaining Tasks: All work under this project has been completed. Project Start: August 13, 2001. Project End: December 31, 2005. DOE Contribution: $966,900. Performer Contribution: $0. NETL – Richard Baker ([email protected] or 304-285-4714). LLNL – Dr. William L. Pickles ([email protected] or 925-422-7812). DOE Leak Detection Technology Demonstration Final Report [PDF-7370KB]. DOE Fossil Energy Techline: National Labs to Strengthen Natural Gas Pipelines' Integrity, Reliability. Status Assessment [PDF-26KB]
Notes on the Bible, by Albert Barnes, at sacred-texts.com

Now the word of the Lord - , literally, "And, ..." This is the way in which the several inspired writers of the Old Testament mark that what it was given them to write was united onto those sacred books which God had given to others to write, and it formed with them one continuous whole. The word, "And," implies this. It would do so in any language, and it does so in Hebrew as much as in any other. As neither we, nor any other people, would, without any meaning, use the word, And, so neither did the Hebrews. It joins the four first books of Moses together; it carries on the history through Joshua, Judges, the Books of Samuel and of the Kings. After the captivity, Ezra and Nehemiah begin again where the histories before left off; the break of the captivity is bridged over; and Ezra, going back in mind to the history of God's people before the captivity, resumes the history, as if it had been of yesterday, "And in the first year of Cyrus." It joins in the story of the Book of Ruth before the captivity, and that of Esther afterward. At times, even prophets employ it, in using the narrative form of themselves, as Ezekiel, "and it was in the thirtieth year, in the fourth month, in the fifth day of the month, and I was in the captivity by the river of Chebar, the heavens opened and I saw." If a prophet or historian wishes to detach his prophecy or his history, he does so; as Ezra probably began the Book of Chronicles anew from Adam, or as Daniel makes his prophecy a whole by itself. But then it is the more obvious that a Hebrew prophet or historian, when he does begin with the word, "And," has an object in so beginning; he uses an universal word of all languages in its uniform meaning in all language, to join things together. 
And yet more precisely; this form, "and the word of the Lord came to - saying," occurs over and over again, stringing together the pearls of great price of God's revelations, and uniting this new revelation to all those which had preceded it. The word, "And," then joins on histories with histories, revelations with revelations, uniting in one the histories of God's works and words, and blending the books of Holy Scripture into one divine book. But the form of words must have suggested to the Jews another thought, which is part of our thankfulness and of our being, Act 11:18, "then to the Gentiles also hath God given repentance unto life." The words are the self-same familiar words with which some fresh revelation of God's will to His people had so often been announced. Now they are prefixed to God's message to the pagan, and so as to join on that message to all the other messages to Israel. Would then God deal thenceforth with the pagan as with the Jews? Would they have their prophets? Would they be included in the one family of God? The mission of Jonah in itself was an earnest that they would, for God, Who does nothing fitfully or capriciously, in that He had begun, gave an earnest that He would carry on what He had begun. And so thereafter, the great prophets, Isaiah, Jeremiah, Ezekiel, were prophets to the nations also; Daniel was a prophet among them, to them as well as to their captives. But the mission of Jonah might, so far, have been something exceptional. The enrolling of his book, as an integral part of the Scriptures, joining on that prophecy to the other prophecies to Israel, was an earnest that they were to be parts of one system. But then it would be significant also, that the records of God's prophecies to the Jews all embodied the accounts of their impenitence. Here is inserted among them an account of God's revelation to the pagan, and their repentance. 
"So many prophets had been sent, so many miracles performed, so often had captivity been foreannounced to them for the multitude of their sins. and they never repented. Not for the reign of one king did they cease from the worship of the calves; not one of the kings of the ten tribes departed from the sins of Jeroboam? Elijah, sent in the Word and Spirit of the Lord, had done many miracles, yet obtained no abandonment of the calves. His miracles effected this only, that the people knew that Baal was no god, and cried out, "the Lord He is the God." Elisha his disciple followed him, who asked for a double portion of the Spirit of Elijah, that he might work more miracles, to bring back the people. He died, and, after his death as before it, the worship of the calves continued in Israel. The Lord marveled and was weary of Israel, knowing that if He sent to the pagan they would bear, as he saith to Ezekiel. To make trial of this, Jonah was chosen, of whom it is recorded in the Book of Kings that he prophesied the restoration of the border of Israel. When then he begins by saying, "And the word of the Lord came to Jonah," prefixing the word "And," he refers us back to those former things, in this meaning. The children have not hearkened to what the Lord commanded, sending to them by His servants the prophets, but have hardened their necks and given themselves up to do evil before the Lord and provoke Him to anger; "and" therefore "the word of the Lord came to Jonah, saying, Arise and go to Nineveh that great city, and preach unto her," that so Israel may be shewn, in comparison with the pagan, to be the more guilty, when the Ninevites should repent, the children of Israel persevered in unrepentance." Jonah the son of Amittai - Both names occur here only in the Old Testament, Jonah signifies "Dove," Amittai, "the truth of God." 
Some of the names of the Hebrew prophets so suit in with their times, that they must either have been given them prophetically, or assumed by themselves, as a sort of watchword, analogous to the prophetic names, given to the sons of Hosea and Isaiah. Such were the names of Elijah and Elisha, "The Lord is my God," "my God is salvation." Such too seems to be that of Jonah. The "dove" is everywhere the symbol of "mourning love." The side of his character which Jonah records is that of his defect, his want of trust in God, and so his unloving zeal against those, who were to be the instruments of God against his people. His name perhaps preserves that character by which he willed to be known among his people, one who moaned or mourned over them. Arise, go to Nineveh, that great city - The Assyrian history, as far as it has yet been discovered, is very bare of events in regard to this period. We have as yet the names of three kings only for 150 years. But Assyria, as far as we know its history, was in its meridian. Just before the time of Jonah, perhaps ending in it, were the victorious reigns of Shalmanubar and Shamasiva; after him was that of Ivalush or Pul, the first aggressor upon Israel. It is clear that this was a time of Assyrian greatness: since God calls it "that great city," not in relation to its extent only, but its power. A large weak city would not have been called a "great city unto God" Jon 3:3. And cry against it - The substance of that cry is recorded afterward, but God told Jonah now what message he was to cry aloud to it. For Jonah relates afterward how he expostulated now with God, and that his expostulation was founded on this, that God was so merciful that He would not fulfill the judgment which He threatened. Faith was strong in Jonah, while, like the Apostles, "the sons of thunder," before the Day of Pentecost, he knew not "what spirit he was of." Zeal for the people and, as he doubtless thought, for the glory of God, narrowed love in him. 
He did not, like Moses, pray Exo 32:32, "or else blot me also out of Thy book," or like Paul, desire even to be "an anathema from Christ" Rom 9:3 for his people's sake, so that there might be more to love his Lord. His zeal was directed, like that of the rebuked Apostles, against others, and so it too was rebuked. But his faith was strong. He shrank back from the office, as believing, not as doubting, the might of God. He thought nothing of preaching, amid that multitude of wild warriors, the stern message of God. He was willing, alone, to confront the violence of a city of 600,000, whose characteristic was violence. He was ready, at God's bidding, to enter what Nahum speaks of as a den of lions Nah 2:11-12; "The dwelling of the lions and the feeding-place of the young lions, where the lion did tear in pieces enough for his whelps, and strangled for his lionesses." He feared not the fierceness of their lion-nature, but God's tenderness, and lest that tenderness should be the destruction of his own people. Their wickedness is come up before Me - So God said to Cain, Gen 4:10. "The voice of thy brother's blood crieth unto Me from the ground:" and of Sodom Gen 18:20-21, "The cry of Sodom and Gomorrah is great, because their sin is very grievous; the cry of it is come up unto Me." The "wickedness" is not the mere mass of human sin, of which it is said Jo1 5:19, "the whole world lieth in wickedness," but evil-doing toward others. This was the cause of the final sentence on Nineveh, with which Nahum closes his prophecy, "upon whom hath not thy wickedness passed continually?" It had been assigned as the ground of the judgment on Israel through Nineveh Hos 10:14-15. "So shall Bethel do unto you, on account of the wickedness of your wickedness." It was the ground of the destruction by the flood Gen 6:5. "God saw that the wickedness of man was great upon the earth."
God represents Himself, the Great Judge, as sitting on His Throne in heaven, Unseen but All-seeing, to whom the wickedness and oppressiveness of man against man "goes up," appealing for His sentence against the oppressor. The cause seems ofttimes long in pleading. God is long-suffering with the oppressor too, that if so be, he may repent. So would a greater good come to the oppressed also, if the wolf became a lamb. But meanwhile, "every iniquity has its own voice at the hidden judgment seat of God." Mercy itself calls for vengeance on the unmerciful. But (And) Jonah rose up to flee ... from the presence of the Lord - literally "from being before the Lord." Jonah knew well, that man could not escape from the presence of God, whom he knew as the Self-existing One, He who alone is, the Maker of heaven, earth and sea. He did not "flee" then "from His presence," knowing well what David said Psa 139:7, Psa 139:9-10, "whither shall I go from Thy Spirit, or whither shall I flee from Thy presence? If I take the wings of the morning, and dwell in the uttermost parts of the sea, even there shall Thy hand lead me and Thy right hand shall hold me." Jonah fled, not from God's presence, but from standing before Him, as His servant and minister. He refused God's service, because, as he himself tells God afterward Jon 4:2, he knew what it would end in, and he misliked it. So he acted, as people often do, who dislike God's commands. He set about removing himself as far as possible from being under the influence of God, and from the place where he "could" fulfill them. God commanded him to go to Nineveh, which lay northeast from his home; and he instantly set himself to flee to the then furthermost west. Holy Scripture sets the rebellion before us in its full nakedness. "The word of the Lord came unto Jonah, go to Nineveh, and Jonah rose up;" he did something instantly, as the consequence of God's command.
He "rose up," not as other prophets, to obey, but to disobey; and that, not slowly nor irresolutely, but "to flee, from" standing "before the Lord." He renounced his office. So when our Lord came in the flesh, those who found what He said to be "hard sayings," went away from Him, "and walked no more with Him" Joh 6:66. So the rich "young man went away sorrowful Mat 19:22, for he had great possessions." They were perhaps afraid of trusting themselves in His presence; or they were ashamed of staying there, and not doing what He said. So men, when God secretly calls them to prayer, go and immerse themselves in business; when, in solitude, He says to their souls something which they do not like, they escape His Voice in a throng. If He calls them to make sacrifices for His poor, they order themselves a new dress or some fresh sumptuousness or self-indulgence; if to celibacy, they engage themselves to marry immediately; or, contrariwise, if He calls them not to do a thing, they do it at once, to make an end of their struggle and their obedience; to put obedience out of their power; to enter themselves on a course of disobedience. Jonah, then, in this part of his history, is the image of those who, when God calls them, disobey His call, and how He deals with them, when He does not abandon them. He lets them have their way for a time, encompasses them with difficulties, so that they shall "flee back from God displeased to God appeased." "The whole wisdom, the whole bliss, the whole of man lies in this, to learn what God wills him to do, in what state of life, calling, duties, profession, employment, He wills him to serve Him." God sent each one of us into the world, to fulfill his own definite duties, and, through His grace, to attain to our own perfection in and through fulfilling them. He did not create us at random, to pass through the world, doing whatever self-will or our own pleasure leads us to, but to fulfill His will.
This will of His, if we obey His earlier calls, and seek Him by prayer, in obedience, self-subdual, humility, thoughtfulness, He makes known to each by His own secret drawings, and, in absence of these, at times by His Providence or human means. And then, "to follow Him is a token of predestination." It is to place ourselves in that order of things, that pathway to our eternal mansion, for which God created us, and which God created for us. So Jesus says Joh 10:27-28, "My sheep hear My voice and I know them, and they follow Me, and I give unto them eternal life, and they shall never perish, neither shall any man pluck them out of My Hand." In these ways, God has foreordained for us all the graces which we need; in these, we shall be free from all temptations which might be too hard for us, in which our own special weakness would be most exposed. Those ways, which people choose out of mere natural taste or fancy, are mostly those which expose them to the greatest peril of sin and damnation. For they choose them, just because such pursuits flatter most their own inclinations, and give scope to their natural strength and their moral weakness. So Jonah, disliking a duty, which God gave him to fulfill, separated himself from His service, forfeited his past calling, lost, as far as in him lay, his place among "the goodly fellowship of the prophets," and, but for God's overtaking grace, would have ended his days among the disobedient. As in Holy Scripture, David stands alone of saints, who had been after their calling, bloodstained; as the penitent robber stands alone converted in death; as Peter stands singly, recalled after denying his Lord; so Jonah stands, the one prophet, who, having obeyed and then rebelled, was constrained by the overpowering providence and love of God, to return and serve Him.
"Being a prophet, Jonah could not be ignorant of the mind of God, that, according to His great Wisdom and His unsearchable judgments and His untraceable and incomprehensible ways, He, through the threat, was providing for the Ninevites that they should not suffer the things threatened. To think that Jonah hoped to hide himself in the sea and elude by flight the great Eye of God, were altogether absurd and ignorant, which should not be believed, I say not of a prophet, but of any other sensible person who had any moderate knowledge of God and His supreme power. Jonah knew all this better than anyone, that, planning his flight, he changed his place, but did not flee God. For this could no man do, either by hiding himself in the bosom of the earth or depths of the sea or ascending (if possible) with wings into the air, or entering the lowest hell, or encircled with thick clouds, or taking any other counsel to secure his flight. This, above all things and alone, can neither be escaped nor resisted - God. When He willeth to hold and grasp in His Hand, He overtaketh the swift, baffleth the intelligent, overthroweth the strong, boweth the lofty, tameth rashness, subdueth might. He who threatened to others the mighty Hand of God, was not himself ignorant of, nor thought to flee, God. Let us not believe this. But since he saw the fall of Israel and perceived that the prophetic grace would pass over to the Gentiles, he withdrew himself from the office of preaching, and put off the command." "The prophet knoweth, the Holy Spirit teaching him, that the repentance of the Gentiles is the ruin of the Jews. A lover then of his country, he does not so much envy the deliverance of Nineveh, as will that his own country should not perish.
- Seeing too that his fellow-prophets are sent to the lost sheep of the house of Israel, to excite the people to repentance, and that Balaam the soothsayer too prophesied of the salvation of Israel, he grieveth that he alone is chosen to be sent to the Assyrians, the enemies of Israel, and to that greatest city of the enemies where was idolatry and ignorance of God. Yet more he feared lest they, on occasion of his preaching, being converted to repentance, Israel should be wholly forsaken. For he knew by the same Spirit whereby the preaching to the Gentiles was entrusted to him, that the house of Israel would then perish; and he feared that what was at one time to be, should take place in his own time." "The flight of the prophet may also be referred to that of man in general who, despising the commands of God, departed from Him and gave himself to the world, where subsequently, through the storms of ill and the wreck of the whole world raging against him, he was compelled to feel the presence of God, and to return to Him whom he had fled. Whence we understand, that those things also which men think for their good, when against the will of God, are turned to destruction; and help not only does not benefit those to whom it is given, but those too who give it, are alike crushed. As we read that Egypt was conquered by the Assyrians, because it helped Israel against the will of God. The ship is imperiled which had received the imperiled; a tempest arises in a calm; nothing is secure, when God is against us." Tarshish - named after one of the sons of Javan, Gen 10:4 - was an ancient merchant city of Spain, once proverbial for its wealth (Psa 72:10; Strabo iii. 2. 14), which supplied Judaea with silver Jer 10:9, Tyre with "all manner of riches," with iron also, tin, lead. Eze 27:12, Eze 27:25. It was known to the Greeks and Romans, as (with a harder pronunciation) Tartessus; but in our first century, it had either ceased to be, or was known under some other name.
Ships destined for a voyage, at that time, so long, and built for carrying merchandise, were naturally among the largest then constructed. "Ships of Tarshish" corresponded to the "East-Indiamen" which some of us remember. The breaking of "ships of Tarshish by the East Wind" Psa 48:7 is, on account of their size and general safety, instanced as a special token of the interposition of God. And went down to Joppa - Joppa, now Jaffa, was the one well-known port of Israel on the Mediterranean. There the cedars were brought from Lebanon for both the first and second temple Ch2 2:16; Ezr 3:7. Simon the Maccabee (1 Macc. 14:5) "took it again for a haven, and made an entrance to the isles of the sea." It was subsequently destroyed by the Romans, as a pirate-haven. (Josephus, B. J. iii. 9. 3, and Strabo xvi. 2. 28.) At a later time, all describe it as an unsafe haven. Perhaps the shore changed, since the rings, to which Andromeda was fabled to have been fastened, and which probably were once used to moor vessels, were high above the sea. Perhaps, like the Channel Islands, the navigation was safe to those who knew the coast, unsafe to others. To this port Jonah "went down" from his native country, the mountain district of Zabulon. Perhaps it was not at this time in the hands of Israel. At least, the sailors were pagan. He "went down," as the man who fell among the thieves, is said to "have gone down from Jerusalem to Jericho." Luk 10:30. He "went down" from the place which God honored by His presence and protection. And he paid the fare thereof - Jonah describes circumstantially, how he took every step to his end. He went down, found a strongly built ship going where he wished, paid his fare, embarked. He seemed now to have done all. He had severed himself from the country where his office lay. He had no further step to take. Winds and waves would do the rest. He had but to be still. He went, only to be brought back again. "Sin brings our soul into much senselessness.
For as those overtaken by heaviness of head and drunkenness, are borne on simply and at random, and, be there pit or precipice or whatever else below them, they fall into it unawares; so too, they who fall into sin, intoxicated by their desire of the object, know not what they do, see nothing before them, present or future. Tell me, Fleest thou the Lord? Wait then a little, and thou shalt learn from the event, that thou canst not escape the hands of His servant, the sea. For as soon as he embarked, it too roused its waves and raised them up on high; and as a faithful servant, finding her fellow-slave stealing some of his master's property, ceases not from giving endless trouble to those who take him in, until she recover him, so too the sea, finding and recognizing her fellow-servant, harasses the sailors unceasingly, raging, roaring, not dragging them to a tribunal but threatening to sink the vessel with all its crew unless they restore to her, her fellow-servant." "The sinner "arises," because, will he, nill he, toil he must. If he shrinks from the way of God, because it is hard, he may not yet be idle. There is the way of ambition, of covetousness, of pleasure, to be trodden, which certainly are far harder. 'We wearied ourselves (Wisdom 5:7),' say the wicked, 'in the way of wickedness and destruction, yea, we have gone through deserts where there lay no way; but the way of the Lord we have not known.' Jonah would not arise, to go to Nineveh at God's command; yet he must needs arise, to flee to Tarshish from before the presence of God. What good can he have who fleeth the Good? what light, who willingly forsaketh the Light? "He goes down to Joppa." Wherever thou turnest, if thou depart from the will of God, thou goest down. Whatever glory, riches, power, honors, thou gainest, thou risest not a whit; the more thou advancest, while turned from God, the deeper and deeper thou goest down. Yet all these things are not had, without paying the price.
At a price and with toil, he obtains what he desires; he receives nothing gratis, but, at great price purchases to himself storms, griefs, peril. There arises a great tempest in the sea, when various contradictory passions arise in the heart of the sinner, which take from him all tranquility and joy. There is a tempest in the sea, when God sends strong and dangerous disease, whereby the frame is in peril of being broken. There is a tempest in the sea, when, through rivals or competitors for the same pleasures, or the injured, or the civil magistrate, his guilt is discovered, he is laden with infamy and odium, punished, withheld from his wonted pleasures. Psa 107:23-27. "They who go down to the sea of this world, and do business in mighty waters - their soul melteth away because of trouble; they reel to and fro and stagger like a drunken man, and all their wisdom is swallowed up." But (And) the Lord sent out - (literally 'cast along'). Jonah had done his all. Now God's part began. This He expresses by the word, "And." Jonah took "his" measures, "and" now God takes "His." He had let him have his way, as He often deals with those who rebel against Him. He lets them have their way up to a certain point. He waits, in the tranquility of His Almightiness, until they have completed their preparations; and then, when man has ended, He begins, that man may see the more that it is His doing. "He takes those who flee from Him in their flight, the wise in their counsels, sinners in their conceits and sins, and draws them back to Himself and compels them to return. Jonah thought to find rest in the sea, and lo! a tempest." Probably, God summoned back Jonah, as soon as he had completed all on his part, and sent the tempest, soon after he left the shore. At least, such tempests often swept along that shore, and were known by their own special name, like the Euroclydon off Crete.
Jonah too alone had gone down below deck to sleep, and, when the storm came, the mariners thought it possible to put back. Josephus says of that shore, "Joppa having by nature no haven, for it ends in a rough shore, mostly abrupt, but for a short space having projections, i. e., deep rocks and cliffs advancing into the sea, inclining on either side toward each other (where the traces of the chains of Andromeda yet shown accredit the antiquity of the fable,) and the north wind beating right on the shore, and dashing the high waves against the rocks which receive them, makes the station there a harborless sea. As those from Joppa were tossing here, a strong wind (called by those who sail here, the black north wind) falls upon them at daybreak, dashing straightway some of the ships against each other, some against the rocks, and some, forcing their way against the waves to the open sea, (for they fear the rocky shore ...) the breakers towering above them, sank." The ship was like to be broken - (literally 'thought' to be broken). Perhaps Jonah means by this very vivid image to exhibit the more his own dullness. He ascribes, as it were, to the ship a sense of its own danger, as she heaved and rolled and creaked and quivered under the weight of the storm which lay on her, and her masts groaned, and her yard-arms shivered. To the awakened conscience everything seems to have been alive to God's displeasure, except itself. And cried, every man unto his God - They did what they could. "Not knowing the truth, they yet know of a Providence, and, amid religious error, know that there is an Object of reverence." In ignorance they had received one who offended God. And now God, "whom they ignorantly worshiped" Act 17:23, while they cried to the gods, who, they thought, disposed of them, heard them. They escaped with the loss of their wares, but God saved their lives and revealed Himself to them. God hears ignorant prayer, when ignorance is not willful and sin.
To lighten it of them - literally "to lighten from against them, to lighten" what was so much "against them," what so oppressed them. "They thought that the ship was weighed down by its wonted lading, and they knew not that the whole weight was that of the fugitive prophet." "The sailors cast forth their wares," but the ship was not lightened. For the whole weight still remained, the body of the prophet, that heavy burden, not from the nature of the body, but from the burden of sin. For nothing is so onerous and heavy as sin and disobedience. Whence also Zechariah Zac 5:7 represented it under the image of lead. And David, describing its nature, said Psa 38:4, "my wickednesses are gone over my head; as a heavy burden they are too heavy for me." And Christ cried aloud to those who lived in many sins, Mat 11:28. "Come unto Me, all ye that labor and are heavy-laden, and I will refresh you." Jonah was gone down - probably before the beginning of the storm, not simply before the lightening of the vessel. He could hardly have fallen asleep "then." A pagan ship was a strange place for a prophet of God, not as a prophet, but as a fugitive; and so, probably, ashamed of what he had done, he had withdrawn from sight and notice. He did not embolden himself in his sin, but shrank into himself. The conscience most commonly awakes, when the sin is done. It stands aghast at itself; but Satan, if he can, cuts off its retreat. Jonah had no retreat now, unless God had made one. And was fast asleep - The journey to Joppa had been long and hurried; he had "fled." Sorrow and remorse completed what fatigue began. Perhaps he had given himself up to sleep, to dull his conscience. For it is said, "he lay down and was fast asleep." Grief produces sleep; from where it is said of the apostles in the night before the Lord's Passion, when Jesus "rose up from prayer and was come to His disciples, He found them sleeping for sorrow" Luk 22:45. "Jonah slept heavily.
Deep was the sleep, but it was not of pleasure but of grief; not of heartlessness, but of heavy-heartedness. For well-disposed servants soon feel their sins, as did he. For when the sin has been done, then he knows its frightfulness. For such is sin. When born, it awakens pangs in the soul which bare it, contrary to the law of our nature. For so soon as we are born, we end the travail-pangs; but sin, so soon as born, rends with pangs the thoughts which conceived it." Jonah was in a deep sleep, a sleep by which he was fast held and bound; a sleep as deep as that from which Sisera never woke. Had God allowed the ship to sink, the memory of Jonah would have been that of the fugitive prophet. As it is, his deep sleep stands as an image of the lethargy of sin. "This most deep sleep of Jonah signifies a man torpid and slumbering in error, to whom it sufficed not to flee from the face of God, but his mind, drowned in a stupor and not knowing the displeasure of God, lies asleep, steeped in security." What meanest thou? - or rather, "what aileth thee?" (literally "what is to thee?") The shipmaster speaks of it (as it was) as a sort of disease, that he should be thus asleep in the common peril. "The shipmaster," charged, as he by office was, with the common weal of those on board, would, in the common peril, have one common prayer. It was the prophet's office to call the pagan to prayers and to calling upon God. God reproved the Scribes and Pharisees by the mouth of the children who "cried Hosanna" Mat 21:15; Jonah by the shipmaster; David by Abigail, Sa1 25:32-34; Naaman by his servants. Now too he reproves worldly priests by the devotion of laymen, sceptic intellect by the simplicity of faith. If so be that God will think upon us - (literally "for us") i. e., for good; as David says, Psa 40:17. "I am poor and needy, the Lord thinketh upon" (literally "for") "me." Their calling upon their own gods had failed them.
Perhaps the shipmaster had seen something special about Jonah, his manner, or his prophet's garb. He does not only call Jonah's God, "thy" God, as Darius says to Daniel "thy God" Dan 6:20, but also "the God," acknowledging the God whom Jonah worshiped, to be "the God." It is not any pagan prayer which he asks Jonah to offer. It is the prayer of the creature in its need to God who can help; but knowing its own ill-desert, and the separation between itself and God, it knows not whether He will help it. So David says Psa 25:7, "Remember not the sins of my youth nor my transgressions; according to Thy mercy remember Thou me for Thy goodness' sake, O Lord." "The shipmaster knew from experience, that it was no common storm, that the surges were an infliction borne down from God, and above human skill, and that there was no good in the master's skill. For the state of things needed another Master who ordereth the heavens, and craved the guidance from on high. So then they too left oars, sails, cables, gave their hands rest from rowing, and stretched them to heaven and called on God." Come, and let us cast lots - Jonah too had probably prayed, and his prayers too were not heard. Probably, too, the storm had some unusual character about it, the suddenness with which it burst upon them, its violence, the quarter from where it came, its whirlwind force. "They knew the nature of the sea, and, as experienced sailors, were acquainted with the character of wind and storm, and had these waves been such as they had known before, they would never have sought by lot for the author of the threatened wreck, or, by a thing uncertain, sought to escape certain peril." God, who sent the storm to arrest Jonah and to cause him to be cast into the sea, provided that its character should set the mariners on divining, why it came. Even when working great miracles, God brings about, through man, all the forerunning events, all but the last act, in which He puts forth His might.
As, in His people, he directed the lot to fall upon Achan or upon Jonathan, so here He overruled the lots of the pagan sailors to accomplish His end. "We must not, on this precedent, immediately trust in lots, or unite with this testimony that from the Acts of the Apostles, when Matthias was by lot elected to the apostolate, since the privileges of individuals cannot form a common law." "Lots," according to the ends for which they were cast, were of three kinds: for dividing, for consulting, and for divining. i) The lot for dividing is not wrong if not used, 1) "without any necessity, for this would be to tempt God:" 2) "if in case of necessity, not without reverence of God, as if Holy Scripture were used for an earthly end," as in determining any secular matter by opening the Bible: 3) for objects which ought to be decided otherwise, (as, an office ought to be given to the fittest:) 4) in dependence upon any other than God Pro 16:33. "The lot is cast into the lap, but the whole disposing of it is the Lord's." So then they are lawful "in secular things which cannot otherwise be conveniently distributed," or when there is no apparent reason why, in any advantage or disadvantage, one should be preferred to another. Augustine even allows that, in a time of plague or persecution, the lot might be cast to decide who should remain to administer the sacraments to the people, lest, on the one side, all should be taken away, or, on the other, the Church be deserted. ii) The lot for consulting, i. e., to decide what one should do, is wrong, unless in a matter of mere indifference, or under inspiration of God, or in some extreme necessity where all human means fail. iii) The lot for divining, i. e., to learn truth, whether of things present or future, of which we can have no human knowledge, is wrong, except by direct inspiration of God. For it is either to tempt God who has not promised so to reveal things, or, against God, to seek superhuman knowledge by ways unsanctioned by Him.
Satan may readily mix himself unknown in such inquiries, as in mesmerism. Forbidden ground is his own province. God overruled the lot in the case of Jonah, as He did the sign which the Philistines sought. "He made the heifers take the way to Bethshemesh, that the Philistines might know that the plague came to them, not by chance, but from Himself." "The fugitive (Jonah) was taken by lot, not by any virtue of the lots, especially the lots of pagans, but by the will of Him who guided the uncertain lots." "The lot betrayed the culprit. Yet not even thus did they cast him over; but, even while such a tumult and storm lay on them, they held, as it were, a court in the vessel, as though in entire peace, and allowed him a hearing and defense, and sifted everything accurately, as men who were to give account of their judgment. Hear them sifting all as in a court - The roaring sea accused him; the lot convicted and witnessed against him, yet not even thus did they pronounce against him - until the accused should be the accuser of his own sin. The sailors, uneducated, untaught, imitated the good order of courts. When the sea scarcely allowed them to breathe, whence such forethought about the prophet? By the disposal of God. For God by all this instructed the prophet to be humane and mild, all but saying aloud to him; 'Imitate these uninstructed sailors. They think not lightly of one soul, nor are unsparing as to one body, thine own. But thou, for thy part, gavest up a whole city with so many myriads. They, discovering thee to be the cause of the evils which befell them, did not even thus hurry to condemn thee. Thou, having nothing whereof to accuse the Ninevites, didst sink and destroy them. Thou, when I bade thee go and by thy preaching call them to repentance, obeyedst not; these, untaught, do all, compass all, in order to recover thee, already condemned, from punishment.'" Tell us, for whose cause - Literally "for what to whom."
It may be that they thought that Jonah had been guilty toward some other. The lot had pointed him out. The mariners, still fearing to do wrong, ask him thronged questions, to know why the anger of God followed him; "what" hast thou done "to whom?" "what thine occupation?" i. e., either his ordinary occupation, whether it was displeasing to God? or this particular business in which he was engaged, and for which he had come on board. Questions so thronged have been admired in human poetry, Jerome says. For it is true to nature. They think that some one of them will draw forth the answer which they wish. It may be that they thought that his country, or people, or parents, were under the displeasure of God. But perhaps, more naturally, they wished to "know all about him," as people say. These questions must have gone home to Jonah's conscience. "What is thy business?" The office of prophet which he had left. "Whence comest thou?" From standing before God, as His minister. "What thy country? of what people art thou?" The people of God, whom he had quitted for pagan; not to win them to God, as He commanded; but, not knowing what they did, to abet him in his flight. What is thine occupation? - They should ask themselves, who have Jonah's office to speak in the name of God, and preach repentance. "What should be thy business, who hast consecrated thyself wholly to God, whom God has loaded with daily benefits? who approachest to Him as to a Friend? "What is thy business?" To live for God, to despise the things of earth, to behold the things of heaven," to lead others heavenward. Jonah answers simply the central point to which all these questions tended: I am an Hebrew - This was the name by which Israel was known to foreigners. It is used in the Old Testament, only when they are spoken of by foreigners, or speak of themselves to foreigners, or when the sacred writers mention them in contrast with foreigners.
So Joseph spoke of his land Gen 40:15, and the Hebrew midwives Exo 1:19, and Moses' sister Exo 2:7, and God in His commission to Moses Exo 3:18; Exo 7:16; Exo 9:1 as to Pharaoh, and Moses in fulfilling it Exo 5:3. They had the name, as having passed the River Euphrates, "emigrants." The title might serve to remind themselves, that they were "strangers" and "pilgrims," Heb 11:13, whose fathers had left their home at God's command and for God, "passers by, through this world to death, and through death to immortality." And I fear the Lord - i. e., I am a worshiper of Him, most commonly, one who habitually stands in awe of Him, and so one who stands in awe of sin too. For none really fear God, none fear Him as sons, who do not fear Him in act. To be afraid of God is not to fear Him. To be afraid of God keeps men away from God; to fear God draws them to Him. Here, however, Jonah probably meant to tell them, that the Object of his fear and worship was the One Self-existing God, He who alone is, who made all things, in whose hands are all things. He had told them before, that he had fled "from being before Yahweh." They had not thought anything of this, for they thought of Yahweh, only as the God of the Jews. Now he adds, that He, Whose service he had thus forsaken, was "the God of heaven, Who made the sea and dry land," that sea, whose raging terrified them and threatened their lives. The title, "the God of heaven," asserts the doctrine of the creation of the heavens by God, and His supremacy. Hence, Abraham uses it to his servant Gen 24:7, and Jonah to the pagan mariners, and Daniel to Nebuchadnezzar Dan 2:37, Dan 2:44; and Cyrus in acknowledging God in his proclamation Ch2 36:23; Ezr 1:2. After his example, it is used in the decrees of Darius Ezr 6:9-10 and Artaxerxes Ezr 7:12, Ezr 7:21, Ezr 7:23, and the returned exiles use it in giving account of their building the temple to the Governor Ezr 5:11-12.
Perhaps, from the habit of contact with the pagan, it is used once by Daniel Dan 2:18 and by Nehemiah Neh 1:4-5; Neh 2:4, Neh 2:20. Melchizedek, not perhaps being acquainted with the special name, Yahweh, blessed Abraham in the name of "God, the Possessor" or "Creator of heaven and earth" Gen 14:19, i. e., of all that is. Jonah, by using it, at once taught the sailors that there is One Lord of all, and why this evil had fallen on them, because they had with them himself, the renegade servant of God. "When Jonah said this, he indeed feared God and repented of his sin. If he lost filial fear by fleeing and disobeying, he recovered it by repentance." Then were the men exceedingly afraid - Before, they had feared the tempest and the loss of their lives. Now they feared God. They feared, not the creature but the Creator. They knew that what they had feared was the doing of His Almightiness. They felt how awesome a thing it was to be in His Hands. Such fear is the beginning of conversion, when people turn from dwelling on the distresses which surround them, to God who sent them. Why hast thou done this? - They are words of amazement and wonder. Why hast thou not obeyed so great a God, and how thoughtest thou to escape the hand of the Creator? "What is the mystery of thy flight? Why did one, who feared God and had revelations from God, flee, sooner than go to fulfill them? Why did the worshiper of the One true God depart from his God?" "A servant flee from his Lord, a son from his father, man from his God!" The inconsistency of believers is the marvel of the young Christian, the repulsion of those without, the hardening of the unbeliever. If people really believed in eternity, how could they be thus immersed in things of time? If they believed in hell, how could they so hurry there? If they believed that God died for them, how could they so requite Him? 
Faith without love, knowledge without obedience, conscious dependence and rebellion, to be favored by God yet to despise His favor, are the strangest marvels of this mysterious world. All nature seems to cry out to and against the unfaithful Christian, "why hast thou done this?" And what a why it is! A scoffer has recently said so truthfully : "Avowed scepticism cannot do a tenth part of the injury to practical faith, that the constant spectacle of the huge mass of worldly unreal belief does." It is nothing strange, that the world or unsanctified intellect should reject the Gospel. It is a thing of course, unless it be converted. But, to know, to believe, and to DISOBEY! To disobey God, in the name of God. To propose to halve the living Gospel, as the woman who had killed her child Kg1 3:26, and to think that the poor quivering remnants would be the living Gospel anymore! As though the will of God might, like those lower forms of His animal creation, be divided endlessly, and, keep what fragments we will, it would still be a living whole, a vessel of His Spirit! Such unrealities and inconsistencies would be a sore trial of faith, had not Jesus, who (cf. Joh 2:25), "knew what is in man," forewarned us that it should be so. The scandals against the Gospel, so contrary to all human opinion, are only all the more a testimony to the divine knowledge of the Redeemer. What shall we do unto thee? - They knew him to be a prophet; they ask him the mind of his God. The lots had marked out Jonah as the cause of the storm; Jonah had himself admitted it, and that the storm was for "his" cause, and came from "his" God . "Great was he who fled, greater He who required him. They dare not give him up; they cannot conceal him. They blame the fault; they confess their fear; they ask "him" the remedy, who was the author of the sin. If it was faulty to receive thee, what can we do, that God should not be angered? It is thine to direct; ours, to obey." 
The sea wrought and was tempestuous - , literally "was going and whirling." It was not only increasingly tempestuous, but, like a thing alive and obeying its Master's will, it was holding on its course, its wild waves tossing themselves, and marching on like battalions, marshalled, arrayed for the end for which they were sent, pursuing and demanding the runaway slave of God. "It was going, as it was bidden; it was going to avenge its Lord; it was going, pursuing the fugitive prophet. It was swelling every moment, and, as though the sailors were too tardy, was rising in yet greater surges, shewing that the vengeance of the Creator admitted not of delay." Take me up, and cast me into the sea - Neither might Jonah have said this, nor might the sailors have obeyed it, without the command of God. Jonah might will alone to perish, who had alone offended; but, without the command of God, the Giver of life, neither Jonah nor the sailors might dispose of the life of Jonah. But God willed that Jonah should be cast into the sea - where he had gone for refuge - that (Wisdom 11:16) wherewithal he had "sinned, by the same also he might be punished" as a man; and, as a prophet, that he might, in his three days' burial, prefigure Him who, after His Resurrection, should convert, not Nineveh, but the world, the cry of whose wickedness went up to God. For I know that for my sake - "In that he says, "I know," he marks that he had a revelation; in that he says, "this great storm," he marks the need which lay on those who cast him into the sea." The men rowed hard - , literally "dug." The word, like our "plowed the main," describes the great efforts which they made. Amid the violence of the storm, they had furled their sails. These were worse than useless. The wind was off shore, since by rowing they hoped to get back to it. They put their oars well and firmly in the sea, and turned up the water, as men turn up earth by digging. But in vain! God willed it not. 
The sea went on its way, as before. In the description of the deluge, it is repeated Gen 7:17-18, "the waters increased and bare up the ark, and it was lifted up above the earth; the waters increased greatly upon the earth; and the ark went upon the face of the waters." The waters raged and swelled, drowned the whole world, yet only bore up the ark, as a steed bears its rider: man was still, the waters obeyed. In this tempest, on the contrary, man strove, but, instead of the peace of the ark, the burden is, the violence of the tempest; "the sea wrought and was tempestuous against them." "The prophet had pronounced sentence against himself, but they would not lay hands upon him, striving hard to get back to land, and escape the risk of bloodshed, willing to lose life rather than cause its loss. O what a change was there. The people who had served God, said, Crucify Him, Crucify Him! These are bidden to put to death; the sea rageth; the tempest commandeth; and they are careless as to their own safety, while anxious about another's." Wherefore (And) they cried unto the Lord - "They cried" no more "each man to his god," but to the one God, whom Jonah had made known to them; and to Him they cried with an earnest, submissive cry, repeating the words of beseeching, as men do in great earnestness; "we beseech Thee, O Lord, let us not, we beseech Thee, perish for the life of this man" (i. e., as a penalty for taking it, as it is said, Sa2 14:7. "we will slay him for the life of his brother," and, Deu 19:21. "life for life.") They seem to have known what is said, Gen 9:5-6. "your blood of your lives will I require; at the hand of every beast will I require it and at the hand of man; at the hand of every man's brother will I require the life of man. 
Whoso sheddeth man's blood, by man shall his blood be shed, for in the image of God made He man," "Do not these words of the sailors seem to us to be the confession of Pilate, who washed his hands, and said, 'I am clean from the blood of this Man?' The Gentiles would not that Christ should perish; they protest that His Blood is innocent." And lay not upon us innocent blood - innocent as to them, although, as to this thing, guilty before God, and yet, as to God also, more innocent, they would think, than they. For, strange as this one disobedience was, their whole life, they now knew, was disobedience to God; his life was but one act in a life of obedience. If God so punishes one sin of the holy Pe1 4:18, "where shall the ungodly and sinner appear?" Terrible to the awakened conscience are God's chastenings on some (as it seems) single offence of those whom He loves. For Thou, Lord, (Who knowest the hearts of all men,) hast done, as it pleased Thee - Wonderful, concise confession of faith in these new converts! Psalmists said it, Psa 135:6; Psa 115:3. "Whatsoever God willeth, that doeth He in heaven and in earth, in the sea and in all deep places." But these had but just known God, and they resolve the whole mystery of man's agency and God's Providence into the three simple words, as (Thou) "willedst" (Thou) "didst." "That we took him aboard, that the storm ariseth, that the winds rage, that the billows lift themselves, that the fugitive is betrayed by the lot, that he points out what is to be done, it is of Thy will, O Lord." "The tempest itself speaketh, that 'Thou, Lord, hast done as Thou willedst.' Thy will is fulfilled by our hands." "Observe the counsel of God, that, of his own will, not by violence or by necessity, should he be cast into the sea. For the casting of Jonah into the sea signified the entrance of Christ into the bitterness of the Passion, which He took upon Himself of His own will, not of necessity. Isa 53:7. 
"He was offered up, and He willingly submitted Himself." And as those who sailed with Jonah were delivered, so the faithful in the Passion of Christ. Joh 18:8-9. "If ye seek Me, let these go their way, that the saying might be fulfilled which" Jesus spake, 'Of them which Thou gavest Me, I have lost none.'" They took up Jonah - "He does not say, 'laid hold on him', nor 'came upon him' but 'lifted' him; as it were, bearing him with respect and honor, they cast him into the sea, not resisting, but yielding himself to their will." The sea ceased (literally "stood") from his raging - Ordinarily, the waves still swell, when the wind has ceased. The sea, when it had received Jonah, was hushed at once, to show that God alone raised and quelled it. It "stood" still, like a servant, when it had accomplished its mission. God, who at all times saith to it Job 38:11, "Hitherto shalt thou come and no further, and here shall thy proud waves be stayed," now unseen, as afterward in the flesh Mat 8:26, "rebuked the winds and the sea, and there was a great calm." "If we consider the errors of the world before the Passion of Christ, and the conflicting blasts of diverse doctrines, and the vessel, and the whole race of man, i. e., the creature of the Lord, imperiled, and, after His Passion, the tranquility of faith and the peace of the world and the security of all things and the conversion to God, we shall see how, after Jonah was cast in, the sea stood from its raging." "Jonah, in the sea, a fugitive, shipwrecked, dead, saveth the tempest-tossed vessel; he saveth the pagan, aforetime tossed to and fro by the error of the world into divers opinions. And Hosea, Amos, Isaiah, Joel, who prophesied at the same time, could not amend the people in Judaea; whence it appeared that the breakers could not be calmed, save by the death of (Him typified by) the fugitive." 
And the men feared the Lord with a great fear - because, from the tranquility of the sea and the ceasing of the tempest, they saw that the prophet's words were true. This great miracle completed the conversion of the mariners. God had removed all human cause of fear; and yet, in the same words as before, he says, "they feared a great fear;" but he adds, "the Lord." It was the great fear, with which even the disciples of Jesus feared, when they saw the miracles which He did, which made even Peter say, Luk 5:8. "Depart from me, for I am a sinful man, O Lord." Events full of wonder had thronged upon them; things beyond nature, and contrary to nature; tidings which betokened His presence, Who had all things in His hands. They had seen "wind and storm fulfilling His word" Psa 148:8, and, forerunners of the fishermen of Galilee, knowing full well from their own experience that this was above nature, they felt a great awe of God. So He commanded His people, "Thou shalt fear the Lord thy God Deu 6:13, for thy good always" Deu 6:24. And offered a sacrifice - Doubtless, as it was a large decked vessel and bound on a long voyage, they had live creatures on board, which they could offer in sacrifice. But this was not enough for their thankfulness; "they vowed vows." They promised that they would do thereafter what they could not do then ; "that they would never depart from Him whom they had begun to worship." This was true love, not to be content with aught which they could do, but to stretch forward in thought to an abiding and enlarged obedience, as God should enable them. And so they were doubtless enrolled among the people of God, firstfruits from among the pagan, won to God Who overrules all things, through the disobedience and repentance of His prophet. Perhaps, they were the first preachers among the pagan, and their account of their own wonderful deliverance prepared the way for Jonah's mission to Nineveh. 
Now the Lord had (literally "And the Lord") prepared - Jonah (as appears from his thanksgiving) was not swallowed at once, but sank to the bottom of the sea, God preserving him in life there by miracle, as he did in the fish's belly. Then, when the seaweed was twined around his head, and he seemed to be already buried until the sea should give up her dead, "God prepared the fish to swallow Jonah." "God could as easily have kept Jonah alive in the sea as in the fish's belly, but, in order to prefigure the burial of the Lord, He willed him to be within the fish whose belly was as a grave." Jonah does not say what fish it was; and our Lord too used a name, signifying only one of the very largest fish. Yet it was no greater miracle to create a fish which should swallow Jonah, than to preserve him alive when swallowed. "The infant is buried, as it were, in the womb of its mother; it cannot breathe, and yet, thus too, it liveth and is preserved, wonderfully nurtured by the will of God." He who preserves the embryo in its living grave can maintain the life of man as easily without the outward air as with it. The same Divine Will preserves in being the whole creation, or creates it. The same will of God keeps us in life by breathing this outward air, which preserved Jonah without it. How long will men think of God, as if He were man, of the Creator as if He were a creature, as though creation were but one intricate piece of machinery, which is to go on, ringing its regular changes until it shall be worn out, and God were shut up, as a sort of mainspring within it, who might be allowed to be a primal Force, to set it in motion, but must not be allowed to vary what He has once made? "We must admit of the agency of God," say these men when they would not in name be atheists, "once in the beginning of things, but must allow of His interference as sparingly as may be." Most wise arrangement of the creature, if it were indeed the god of its God! 
Most considerate provision for the non-interference of its Maker, if it could but secure that He would not interfere with it for ever! Acute physical philosophy, which, by its omnipotent word, would undo the acts of God! Heartless, senseless, sightless world, which exists in God, is upheld by God, whose every breath is an effluence of God's love, and which yet sees Him not, thanks Him not, thinks it a greater thing to hold its own frail existence from some imagined law, than to be the object of the tender personal care of the Infinite God who is Love! Poor hoodwinked souls, which would extinguish for themselves the Light of the world, in order that it may not eclipse the rushlight of their own theory! And Jonah was in the belly of the fish - The time that Jonah was in the fish's belly was a hidden prophecy. Jonah does not explain nor point it. He tells the fact, as Scripture is accustomed to do. Then he singles out one, the turning point in it. Doubtless in those three days and nights of darkness, Jonah (like him who after his conversion became Paul), meditated much, repented much, sorrowed much, for the love of God, that he had ever offended God, purposed future obedience, adored God with wondering awe for His judgment and mercy. It was a narrow home, in which Jonah, by miracle, was not consumed; by miracle, breathed; by miracle, retained his senses in that fetid place. Jonah doubtless repented, marveled, adored, loved God. But, of all, God has singled out this one point, how, out of such a place, Jonah thanked God. As He delivered Paul and Silas from the prison, when they prayed with a loud voice to Him, so when Jonah, by inspiration of His Spirit, thanked Him, He delivered him. To thank God, only in order to obtain fresh gifts from Him, would be but a refined, hypocritical form of selfishness. Such a formal act would not be thanks at all. We thank God, because we love Him, because He is so infinitely good, and so good to us, unworthy. 
Thanklessness shuts the door to His personal mercies to us, because it makes them the occasion of fresh sins of ours. Thankfulness sets God's essential goodness free (so to speak) to be good to us. He can do what He delights in doing, be good to us, without our making His Goodness a source of harm to us. Thanking Him through His grace, we become fit vessels for larger graces. "Blessed he who, at every gift of grace, returns to Him in whom is all fullness of graces; to whom when we show ourselves not ungrateful for gifts received, we make room in ourselves for grace, and become meet for receiving yet more." But Jonah's was that special character of thankfulness, which thanks God in the midst of calamities from which there was no human exit; and God set His seal on this sort of thankfulness, by annexing this deliverance, which has consecrated Jonah as an image of our Lord, to his wonderful act of thanksgiving.
On September 19, the Cary Institute hosted a one-day conference on the impacts of tropical storms Irene and Lee on the Hudson River. Organized by the Hudson River Environmental Society, with leadership from Cary's Stuart Findlay, the forum examined how the river and estuary responded to the storms, which dropped an estimated 12-18 inches of rainfall throughout the Hudson Valley and Catskill regions. Topics included dredging, sediment transport, water quality, impacts to fish, and future management practices.

In late October, Gary Lovett will present his assessment of the health of the Catskill Forest at the second Catskill Environmental Research & Monitoring Conference (CERM). The forum brings together research on the region, to better understand the effects of extreme weather, air pollution, invasive species, biodiversity loss, and habitat fragmentation. The Catskills provide the majority of New York City's drinking water supply; CERM forums help coordinate research and identify research agendas to protect these resources.

In November, Cary Institute will hold a two-day conference examining the effects of climate change on plant, animal, and microbial species. The invitation-only event is being organized by Richard Ostfeld, Shannon LaDeau, and Amy Angert (University of British Columbia). With more than 50 invited experts, the conference's goal is to identify tools that will help lessen the negative effects of climate change on biodiversity, disease risk, extinction, and ecosystem function.
Pronounced: AL-lee-um ka-ra-tah-vee-EN-see.
Sunset zones: 1-24. USDA zones: 5-9. Heat zones: 9-5.
Height: 4-10 inches (10-25 cm).
Flowers: a two to three-inch diameter spherical umbel of 50 or more flowers on a 6-inch stem.
Foliage: thick, leathery, pleated, 6-inch long leaves held just above ground level. 'Ivory White' has a pale, pewter tone.
Exposure: full sun to partial shade.
Soil: dry, well-drained, ordinary garden soil.
Propagation: remove offsets in autumn and plant.
Pests and Diseases: bulb rot can occur during our damp conditions of fall through spring. Onion fly and thrips may be a problem.
Rainy Side Notes
Allium karataviense is a spunky plant with a large flower globe. It is a dwarf ornamental onion compared to the giant species in the Melanocrommyum group of alliums, to which it belongs. It grows exceptionally well in our Mediterranean climate, as the plants go dormant by the time our annual drought comes around. Some gardeners grow the species and its cultivars because it has the most attractive foliage in the entire genus. Its horizontal, long, leathery, pleated foliage is green with a striking purple cast. As the blossom first opens, it is nestled on top of the attractive foliage; the flowering stems continue to grow, pushing the large, spherical umbels up and away from the leaves as they begin to look shabby. When you plant bulbs en masse in a garden bed or container, use companion perennials that are late in filling out, or annuals to fill in any bare soil the bulb's dormant state leaves behind. Sensitive to excess moisture, these alliums are prone to rot, so grow them in well-drained soil in the ground or in deep containers with excellent drainage. Some A. karataviense cultivars available are 'Ivory Queen', 'Lucky' and 'Red Globe'. Photographed in author's garden.
This article, published by Yongyut Trisurat, Anak Pattanavibool, George A. Gale and David H. Reed in Wildlife Research (CSIRO Publishing), 2010, 37, 401-412, demonstrates how the CAP principles helped assess wildlife population viability for multiple species in the Western Forest Complex in Thailand. If you wish to request a copy of the article, please contact Yongyut Trisurat ([email protected]).

Context. Assessing the viability of animal populations in the wild is difficult or impossible, primarily because of limited data. However, there is an urgent need to develop methods for estimating population sizes and improving the viability of target species.

Aims. To define suitable habitat for sambar (Cervus unicolor), banteng (Bos javanicus), gaur (Bos gaurus), Asian elephant (Elephas maximus) and tiger (Panthera tigris) in the Western Forest Complex, Thailand, and to assess their current status as well as estimate how the landscape needs to be managed to maintain viable populations.

Methods. The present paper demonstrates a method for combining a rapid ecological assessment, landscape indices, GIS-based wildlife-habitat models, and knowledge of minimum viable population sizes to guide landscape-management decisions and improve conservation outcomes through habitat restoration.

Key results. The current viabilities for gaur and elephant are fair, whereas they are poor for tiger and banteng. However, landscape quality outside the current distributions was relatively intact for all species, ranging from moderate to high levels of connectivity. In addition, the population viability for sambar is very good under the current and desired conditions.

Conclusions. If managers in this complex wish to upgrade the viabilities of gaur, elephant, tiger and banteng within the next 10 years, park rangers and stakeholders should aim to increase the amount of usable habitat by ~2170 km2 or 17% of existing suitable habitats. 
The key strategies are to reduce human pressures, enhance ungulate habitats and increase connectivity of suitable habitats outside the current distributions.

Implications. The present paper provides a particularly useful method for managers and forest-policy planners for assessing and managing habitat suitability for target wildlife and their population viability in protected-area networks where knowledge of the demographic attributes (e.g. birth and death rates) of wildlife populations is too limited to perform population viability analysis.
Philippa Ryan, Scientist, British Museum Archaeobotany is the study of ancient plant remains, and I joined the field team at Amara West in Sudan earlier this year to collect samples for archaeobotanical analysis. Charred plant materials were retrieved on-site from sediments through dry-sieving and flotation. These samples were subsequently brought back to the British Museum for further sorting and identification. At the moment I am analysing the charred seeds and fruits with Caroline Cartwright, who is also analysing the wood charcoal. The macroscopic plant remains are analysed using both a stereo microscope and a SEM (scanning electron microscope). Charred remains found so far include cereal grains (wheat and barley) and crop-processing waste, fruits such as figs and a wide range of wild plants. I am also processing sediment samples to extract phytoliths (microscopic plant remains), which are formed when soluble silica taken up in groundwater by plants is deposited within and between certain plant cells. These silicified cells are found within many different plant families such as grasses (which include cereals), sedges and palms. Phytoliths are difficult to identify, but have the advantage of surviving in both charred and non-charred contexts, so we can learn about the presence and use of plants in areas where seeds and grains don’t survive. At the moment, I am processing sediments to extract phytoliths, which includes the removal of carbonates, clays, organics, other remaining non-siliceous material through heavy liquid flotation, and finally mounting dried phytoliths onto slides. Phytoliths are then identified and counted using an optical (light) microscope. Analysis of these different types of plant remains helps us learn about the past uses of plants at Amara West in day-to-day life, such as for food, fuel and animal fodder. 
I am also looking at the distributions of seeds and phytoliths across the site to examine locations of plant based activities such as food processing, as well as whether there are any differences in diet between poorer and richer households, or across the history of the site. Plant remains can also help to provide information about the nearby vegetation, for instance the types of grasses, wetland plants and trees that grew near the ancient town.
Fracking is short for 'hydraulic fracturing,' a term used to describe the process of pumping millions of gallons of pressurized water, sand and chemicals down a newly drilled well to blast out the surrounding shale rock and gas. It's a relatively new technique that's made shale gas more popular in recent years. For a long time, shale gas — a natural gas that's embedded in ancient rocks known as shale — was deemed not worth the trouble by drilling companies because it was so difficult to recover. The gas is embedded in rocks and the best way to get it out is to drill in sideways, which only became possible in the 1980s and 1990s as the gas industry improved its directional drilling technology. Later, technological advances that let drillers use more water pressure made fracking into an economically viable option for obtaining shale gas from the rocks. Shale is scattered throughout the United States. The two hottest shale sites in America right now are the Barnett Shale in Texas and the Marcellus shale, which is buried beneath seven states and part of Lake Erie. Other large shale deposits are located in Arkansas, Louisiana, New Mexico, Oklahoma and Wyoming. Despite its potential, though, a movement has welled up lately to block the shale gas boom. Some critics say embracing natural gas so heartily will slow the rise of renewable energy, but the biggest beef with shale isn't as much about its gas — it's about how we get it out of the ground. Shale gas would likely still be a novelty fuel without modern advances in hydraulic fracturing, yet the need for fracking is also starting to seem like it could be shale's fatal flaw. The practice has sparked major environmental and public health concerns near U.S. gas fields, from diesel fuel and unidentified chemicals in groundwater to methane seeping out of sink faucets and even blowing up houses.
Some unknown terrible person shot a defenseless pilot whale last month, leaving it to swim the Atlantic in agony for weeks before it finally beached itself on the New Jersey shore and died. Authorities are still looking for the shooter. The bullet wound caused a fulminant infection in the whale's jaw that prevented it from eating, so it basically starved to death. This was determined during a necropsy, an autopsy for animals. Along with sympathy for the poor creature, this debacle aroused an interesting question: How does one autopsy a whale? With four-ton meat hooks, whaling knives and bone saws, actually. Michael Moore, a veterinarian and whale biologist at the Woods Hole Oceanographic Institution, does it all the time. Moore spends much of his time studying North Atlantic right whales, an endangered species whose name derives from whalers' adage that these were the "right whales" to hunt, because they're easy to spot and float when they die. They're no longer hunted for their oil, but they are entangled in fishing lines and injured in ship collisions, often suffering for a great while and also succumbing to starvation. "It's the most egregious animal welfare issue globally at this time," in Moore's words. But protecting them requires understanding how they died, and to do this Moore must take them apart, studying their broken bones and lobster net-tangled flukes to determine their exact causes of death.

In partnership with the National Oceanic and Atmospheric Administration, Moore deploys on-call, toting a case full of knives to examine right whales that have beached or are floating in the open ocean. Right whales are baleen whales and at least two orders of magnitude bigger than the toothed pilot whale that was shot, so in most cases, they must be examined right where they're found — that means on the beach. They either beach themselves and die there, or they're towed to shore once they have been located at sea. 
Moore and other rubber-suited biologists work amid 120,000 pounds of slick black-and-red whale flesh, clambering over and through the carcass to find out what went wrong. Time is of the essence: the longer they wait, the more the animal's internal organs break down, making it difficult to determine how it died. Moore uses a Japanese whaling hook, which is useful for pulling back sheets of blubber to get at the animal's internal organs. He carries a bone saw, formerly his mother's, to get through jaws and vertebrae to find the location of a fatal injury. He's even visited indigenous Alaskan tribes to study their ancestral whale-processing techniques.

The pilot whale that died was small, so it was trucked to a necropsy facility at the Marine Mammal Stranding Center in Brigantine, N.J., down the shore north of Atlantic City. It weighed about 740 pounds when it beached, quite gaunt for an animal that should normally weigh more than a ton. Researchers knew something was seriously wrong, but they had to perform a necropsy to determine what it was.

The creatures are brought in on trucks and hoisted into the facility on chains rigged to the ceiling, attached to four-ton-rated meat hooks. They lie on negative-pressure steel tables, the same type used in human autopsy procedures, which suck out odors and pathogens as the biologists get to work. The lab also contains deep freezers for storing deceased animals; it harbors an overwhelming odor of chemical and organic substances. (It's somewhat legendary at WHOI that Moore lost his sense of smell while in veterinary school, which he says enables him to get literally inside a rotting animal carcass without losing his lunch or his cool.)

The 11-foot-long pilot whale died shortly after authorities reached its side on the beach on Sept. 24. But it wasn't until a necropsy a couple of weeks later that they knew what had happened.
The bullet entered near its blowhole, but the wound had closed and faded a bit, suggesting the whale had been shot about a month prior. The .30-caliber round lodged in its jaw, causing the infection. "This poor animal literally starved to death," said Bob Schoelkopf, co-director of the Marine Mammal Stranding Center, in an interview with the AP. "It was wandering around and slowly starving to death because of the infection. Who would do that to an innocent animal?" That question is now in the hands of the authorities. For biologists like Moore and Schoelkopf, necropsies can at least answer the question of how. Why, of course, is something else entirely.
The Atlas of Climate Change: Mapping the World's Greatest Challenge. University of California Press, 2007. Science. 112 pages.

Today's headlines and recent events reflect the gravity of climate change. Heat waves, droughts, and floods are bringing death to vulnerable populations, destroying livelihoods, and driving people from their homes. Rigorous in its science and insightful in its message, this atlas examines the causes of climate change and considers its possible impact on subsistence, water resources, ecosystems, biodiversity, health, coastal megacities, and cultural treasures. It reviews historical contributions to greenhouse gas levels, progress in meeting international commitments, and local efforts to meet the challenge of climate change. With more than 50 full-color maps and graphics, this is an essential resource for policy makers, environmentalists, students, and everyone concerned with this pressing subject.

The Atlas covers a wide range of topics, including:
- Warning signs
- Future scenarios
- Vulnerable populations
- Renewable energy
- Emissions reduction
- Personal and public action

Copublished with Myriad Editions.
Sustainability and Renewable Merino

Wool runs on grass. The primary inputs for wool production are water, sunshine and fertile soils to grow grass, the primary diet of sheep. As such, merino production inherently has the potential to be fully sustainable, provided it is operated within the carrying capacity of the ecosystem. Producers must operate, or be in the process of implementing, a grazing management strategy based on long-term sustainability. The management system must embrace holistic farming principles, with the goal of developing a resilient ecosystem that can regenerate, withstand stress and rebuild itself when necessary. The aim should be 100% ground cover all year round, with a focus on perennial grasses to regenerate the ecology and increase biodiversity.

Holistic farming principles include:
- Clear goals for the people involved
- Clear goals for the land in use
- Clear goals for the business
- Clear goals for the management of livestock

The management process should involve:
- Reviewing these strategies regularly
- Making decisions with these goals in mind
- Monitoring the health of your land, your business and the people involved
OFF THE CHARTS Oil supply rising, but demand may more than keep pace Published: Saturday, November 24, 2012 at 4:36 p.m. Last Modified: Saturday, November 24, 2012 at 4:36 p.m. It used to be taken for granted that as economies grew, they would use more oil. That was a major reason cited in warnings that the world would run out of oil, particularly if standards of living rose in developing countries. Well, standards of living are improving in developing countries, but the dire forecasts now appear to be wrong. In part that is because new discoveries and improving technologies have increased the amount of oil that can be produced. It also reflects conservation, in part, as cars become more efficient and as other steps are taken to reduce oil use. The International Energy Agency, in its 2012 World Energy Outlook, released last week, forecast that U.S. oil production, which began to rise in 2009 after decades of decline, would continue rising through at least 2020, when it could be about as high as it was in 1970, the year of peak production. At the same time it forecast that by 2035, U.S. oil consumption, which peaked in 2005, could decline to levels not seen since the 1960s, depending on how much conservation is encouraged. The IEA report also forecast that by around 2020, the United States could surpass Saudi Arabia as the world's largest oil producer, and that while the country was not likely to become a net exporter of oil, the North American continent as a whole could be by around 2030. But despite declining demand in some countries that historically were heavy users of oil, the world demand for oil seems likely to continue to rise. The IEA forecast that global energy demand – including demand for energy produced by other sources – is likely to rise by 35 percent by 2035, with a large part of the increase coming from China and India. In 1969, the United States consumed a third of the oil used in the world, while China used less than 1 percent. Last year the U.S. 
share was less than 22 percent, while the Chinese accounted for 11 percent. The IEA forecasts that by 2030, the U.S. share could be less than the Chinese one. By 2035, U.S. consumption of oil is expected to be as much as one-third less than it was last year. In China, oil consumption is expected to be up as much as two-thirds from the 2011 level, and India's is predicted to more than double.
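The IEA projections quoted above are relative growth factors, which can be sketched as simple arithmetic. Indexing each country's 2011 consumption to 100 is our own simplification for comparison purposes; only the growth factors come from the article.

```python
# Illustrative arithmetic for the quoted IEA projections: by 2035, U.S.
# oil consumption "as much as one-third less" than 2011, China's "up as
# much as two-thirds", and India's "more than double". The 2011 = 100
# indexing is an assumption for illustration, not an IEA figure.

def project(index_2011: float, growth_factor: float) -> float:
    """Apply a relative growth factor to an indexed 2011 consumption level."""
    return index_2011 * growth_factor

us_2035 = project(100, 1 - 1/3)      # one-third less than 2011
china_2035 = project(100, 1 + 2/3)   # up two-thirds from 2011
india_2035 = project(100, 2)         # "more than double": at least 2x

print(round(us_2035), round(china_2035), round(india_2035))  # 67 167 200
```

On this indexed scale, Chinese consumption in 2035 would be roughly two and a half times the U.S. level, consistent with the article's point that the U.S. share of world demand keeps shrinking.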
Feral cats occur right across the continent in every habitat type, including deserts, forests and grasslands. Total population estimates vary from 5 million to 18 million feral cats. Each feral cat kills between 5 and 30 animals per day. Taking the lower figure in that range (five) and multiplying it by a conservative population estimate of 15 million cats gives a minimum estimate of 75 million native animals killed daily by feral cats.

It is clear that cats are playing a critical role in the decline of our native fauna. They are recognised as a primary cause of several early mammal extinctions and are identified as a factor in the current declines of at least 80 threatened species. The Australian Wildlife Conservancy (AWC) has developed a practical strategy designed to minimise their impacts and facilitate the development of a long-term solution. This includes:

- GROUND COVER: impairing the hunting efficiency of cats in grasslands and woodlands by manipulating ground cover through:
  - minimising the frequency and extent of late-season wildfires; and
  - reducing the density of feral herbivores.
- DINGOES AS A BIOLOGICAL CONTROL: reducing the density of cats and affecting their hunting behaviour by promoting a stable Dingo population.
- FERAL CAT-FREE AREAS: establishing feral cat-free areas to protect core populations of species most vulnerable to cats. AWC's Scotia contains the largest cat-free area on the mainland; in total, AWC manages more feral cat-free land on mainland Australia than any other organisation.
- STRATEGIC CONTROL: strategic implementation of control measures such as shooting and baiting to protect highly threatened species.
- RESEARCH: generating scientific knowledge that will help design a long-term solution enabling the control of cats and their impacts across landscapes and, ideally, the eradication of feral cats.

We need your help in the battle to save our wildlife from feral cats. Please make a tax-deductible donation to support practical land management that will limit the impact of cats.
Your donation will help protect native animals at risk from feral cats, such as the Bilby, the Mala and a host of our small northern mammals. To donate, please click here. To learn more about this project, please read pages 4-7 of the Summer 2012-2013 issue of Wildlife Matters here. Find out more at Australian Wildlife Conservancy
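The minimum-kill estimate above is simple multiplication, and can be checked directly. The kill-rate range (5 to 30 per cat per day) and the population estimates (5 to 18 million) come from the text; the function name below is ours.

```python
# Back-of-the-envelope check of the figure of 75 million native animals
# killed daily: the lower bound of the per-cat kill rate multiplied by a
# conservative population estimate.

def daily_kill_estimate(cat_population: int, kills_per_cat_per_day: int) -> int:
    """Minimum daily kills: population multiplied by the per-cat kill rate."""
    return cat_population * kills_per_cat_per_day

# Lower bound of the kill-rate range (5) and a conservative 15M cats:
print(daily_kill_estimate(15_000_000, 5))  # 75000000, the stated minimum
```

Even at the very bottom of both ranges (5 million cats, 5 kills per day), the estimate is 25 million native animals per day, which is why the figure is described as a minimum.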
Vanity Top Finishes

Vanity Top Composition

Geologically speaking, rocks are classified into three main categories: igneous, sedimentary, and metamorphic. Our sinks are made of all-natural stones including granite, travertine, and cream marfil. Granite is a type of igneous rock, travertine is sedimentary, while cream marfil is metamorphic.

Cream marfil is a type of marble, which is a metamorphic rock resulting from the metamorphism of limestone. Limestone is a sedimentary rock, formed mainly from the accumulation of organic remains (bones and shells of marine microorganisms and coral) from millions of years ago. The calcium in the marine remains combines with carbon dioxide in the water to form calcium carbonate, which is the basic mineral structure of all limestone. When subjected to heat and pressure, the original limestone undergoes a process of complete recrystallization (metamorphism), forming what we know as marble. The characteristic swirls and veins of many colored marble varieties, such as cream marfil, are usually due to various mineral impurities. Cream marfil forms with a medium density and visible pores.

Travertine is a variety of limestone, a kind of sedimentary rock, formed of massive calcium carbonate deposited by rivers and springs, especially hot, bubbly, mineral-rich springs. When hot water passes through limestone beds in springs or rivers, it dissolves the limestone, carrying the calcium carbonate into suspension and transporting that solution to the surface. Given enough time, the water evaporates and the calcium carbonate crystallizes, forming what we know as travertine stone. Travertine is characterized by pores and pitted holes in the surface and takes a good polish. It is usually hard and semicrystalline. It is often beautifully colored (from ivory to golden brown) and banded as a result of the presence of iron compounds or other (e.g., organic) impurities.
Travertine is mined extensively in Italy; in the U.S., Yellowstone's Mammoth Hot Springs are actively depositing travertine. It also occurs in limestone caves.

Granite is a very common type of intrusive igneous rock, mainly composed of three minerals: feldspar, quartz, and mica, with the first being the major ingredient. Granite is formed when liquid magma (molten rock material) cools beneath the earth's crust. Due to the extreme pressure within the center of the earth and the absence of atmosphere, granite forms very densely, with no pores, and has a coarse-grained structure. It is hard, firm and durable.
Green Power is electricity generated from renewable energy sources that are environmentally friendly, such as solar, wind, biomass, and hydro power. New York State and the Public Service Commission have made a commitment to promote the use of Green Power and foster the development of renewable energy generation resources.

GREEN POWER IN NEW YORK

Electricity comes from a variety of sources such as natural gas, oil, coal, nuclear, hydro power, biomass, wind, solar, and solid waste. Green Power is electricity generated from renewable energy sources such as:

Solar: Solar energy systems convert sunlight directly into electricity.
Biomass: Organic wastes such as wood, other plant materials and landfill gases are used to generate electricity.
Wind: Modern wind turbines use large blades to catch the wind, spin turbines, and generate electricity.
Hydropower: Small installations on rivers and streams use running or falling water to drive turbines that generate electricity.

NY'S ENERGY MIX

[Pie chart: the mix of energy sources used to generate New York's electricity in 2003.] Buying Green Power will help to increase the percentage of electricity that is produced using cleaner energy sources.

You have the power to make a difference. For only a few pennies more a day, you can choose Green Power and make a world of difference for generations to come. Green Power:
- Produces fewer environmental impacts than fossil fuel energy.
- Helps to diversify the fuel supply, increasing the reliability of the NY State electric system and contributing to more stable energy prices.
- Reduces use of imported fossil fuels, keeping dollars spent on energy in the State's economy.
- Creates jobs and helps the economy by spurring investments in environmentally friendly facilities.
- Creates healthier air quality and helps to reduce respiratory illness.
If just 10% of New York's households choose Green Power for their electricity supply, it would prevent nearly 3 billion pounds of carbon dioxide, 10 million pounds of sulfur dioxide, and nearly 4 million pounds of nitrogen oxides from getting into our air each year. Green Power helps us all breathe a little easier.

Your Energy…Your Choice

Your electric service is made up of two parts: supply and delivery. In New York's competitive electric market, you can now shop for your electric supply. You can support cleaner, sustainable energy solutions by selecting Green Power for some or all of your supply. No matter what electric supply you choose, your utility is still responsible for delivering your electricity safely and reliably, and will provide you with customer service and respond to emergencies.

What happens when you choose to buy Green Power? The Green Power you buy is supplied to the power grid that delivers the electricity to all customers in your region. Your Green Power purchase supports the development of more environmentally friendly electricity generation. You are helping to create a cleaner, brighter New York for future generations. You will continue to receive the safe, reliable power you've come to depend on.

Switching to Green Power is as easy as:
1. Use the list below to contact the Green Power service providers in your area.
2. Compare the Green Power programs.
3. Choose the Green Energy Service Company program that is right for you.

Using New York's power to change the future

Energy conservation, energy efficiency and renewable energy are critical elements in New York's economic, security and energy policies. New York State is committed to ensuring that we all have access to reliable electricity by helping consumers use and choose energy wisely. Recently, the state launched two initiatives: one designed to educate the public about the environmental impacts of energy production, and one to encourage the development of Green Power programs.
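The avoided-emissions figures quoted earlier are given in pounds; converting them to metric tonnes puts them on the scale commonly used in emissions reporting. The conversion factor is the standard pounds-per-tonne value; the emissions figures themselves are the page's.

```python
# Unit sanity check: convert the quoted avoided emissions from pounds
# into metric tonnes. Only the conversion factor is ours.

LBS_PER_TONNE = 2204.62  # pounds in one metric tonne

def lbs_to_tonnes(pounds: float) -> float:
    return pounds / LBS_PER_TONNE

co2_avoided = lbs_to_tonnes(3e9)    # "nearly 3 billion pounds" of CO2
so2_avoided = lbs_to_tonnes(10e6)   # 10 million pounds of SO2
nox_avoided = lbs_to_tonnes(4e6)    # "nearly 4 million pounds" of NOx

# The CO2 figure works out to roughly 1.36 million tonnes per year.
print(f"{co2_avoided:,.0f} tonnes CO2")
```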
The Environmental Disclosure Label

NY RENEWABLE ENERGY SERVICE INITIATIVES

The New York State Public Service Commission is supporting development of renewable energy service programs in utility service territories across the state. These programs are spurring the development of new sources of renewable energy and the sale of Green Power to New York consumers. As a result, Green Power service providers are now offering a variety of renewable energy service options. Most New York consumers now have the opportunity to choose Green Power.

Suppliers Offering Green Energy Products

Green Power can be arranged through the following suppliers (may not operate in all utility territories). The PSC has created this list of providers and does not recommend particular companies or products.

| Agway Energy Services | 1-888-982-4929 | www.agwayenergy.com |
| Amerada Hess (Commercial and Industrial only) | 1-800-HessUSA (437-7872) | www.hess.com |
| Community Energy, Inc. | 1-866-Wind-123 | |
| Constellation New Energy (Commercial and Industrial only) | 1-866-237-7693 | www.integrysenergy.com |
| Energy Cooperative of New York | 1-800-422-1475 | www.ecny.org |
| Green Mountain Energy Company | 1-800-810-7300 | www.greenmountain.com |
| Integrys Energy NY | 1-518-482-4615 | |
| Juice Energy, Inc. | 1-888-925-8423 | www.juice-inc.com |
| NYSEG Solutions, Inc. | 1-800-567-6520 | www.nysegsolutions.com |
| Pepco Energy Services, Inc. (NYC commercial and industrial only) | | |
| Just Energy (GeoPower, Con Ed territory) | 1-866-587-8674 | www.justenergy.com |
| Just Energy (GeoGas, Con Ed, KeySpan, NFG territories) | 1-866-587-8674 | www.justenergy.com |
| Central Hudson Gas and Electric | 1-800-527-2714 | www.centralhudson.com |
| National Grid | 1-800-642-4272 (upstate), 1-800-930-5003 (Long Island) | |
| New York State Electric and Gas | 1-800-356-9734 | www.nyseg.com |
| Orange and Rockland | 1-877-434-4100 | www.oru.com |
| Rochester Gas and Electric | 1-877-743-9463 | www.rge.com |
| Long Island Power Authority (LIPA) | 1-800-490-0025 | www.lipower.org |
Earth System Science Partnership (ESSP)

The ESSP is a partnership for the integrated study of the Earth System, the ways that it is changing, and the implications for global and regional sustainability. The urgency of the challenge is great: in the present era, global environmental changes are both accelerating and moving the Earth System into a state with no analogue in previous history. To learn more about the ESSP, click on the links to access the Strategy Paper, brochure and a video presentation by the Chair of the ESSP Scientific Committee, Prof. Dr. Rik Leemans of Wageningen University, The Netherlands.

The Earth System is the unified set of physical, chemical, biological and social components, processes and interactions that together determine the state and dynamics of Planet Earth, including its biota and its human occupants. Earth System Science is the study of the Earth System, with an emphasis on observing, understanding and predicting global environmental changes involving interactions between land, atmosphere, water, ice, biosphere, societies, technologies and economies.

ESSP Transitions into 'Future Earth' (31/12/2012)

On 31st December 2012, the ESSP will close and transition into 'Future Earth' as it develops over the next few years. During this period, the four global environmental change research programmes (DIVERSITAS, IGBP, IHDP, WCRP) will continue close collaboration with each other. 'Future Earth' is currently being planned as a ten-year international research initiative for global sustainability (www.icsu.org/future-earth) that will build on decades of scientific excellence of the four GEC research programmes and their scientific partnership. Click here to read more.

Global Carbon Budget 2012

Carbon dioxide emissions from fossil fuel burning and cement production increased by 3 percent in 2011, with a total of 34.7 billion tonnes of carbon dioxide emitted to the atmosphere.
These emissions were the highest in human history and 54 percent higher than in 1990 (the Kyoto Protocol reference year). In 2011, coal burning was responsible for 43 percent of the total emissions, 34 percent for oil, 18 percent for gas and 5 percent for cement. For the complete 2012 carbon budget and trends, access the Global Carbon Project website. GWSP International Conference - CALL for ABSTRACTS The GWSP Conference on "Water in the Anthropocene: Challenges for Science and Governance" will convene in Bonn, Germany, 21 - 24 May 2014. The focus of the conference is to address the global dimensions of water system changes due to anthropogenic as well as natural influences. The Conference will provide a platform to present global and regional perspectives on the responses of water management to global change in order to address issues such as variability in supply, increasing demands for water, environmental flows, and land use change. The Conference will help build links between science and policy and practice in the area of water resources management and governance, related institutional and technological innovations and identify ways that research can support policy and practice in the field of sustainable freshwater management. Learn more about the Conference here. Global Carbon Project (GCP) Employment Opportunity - Executive Director The Global Carbon Project (GCP) is seeking to employ a highly motivated and independent person as Executive Director of the International Project Office (IPO) in Tsukuba, Japan, located at the Centre for Global Environmental Research at the National Institute for Environmental Studies (NIES). The successful candidate will work with the GCP Scientific Steering Committee (SSC) and other GCP offices to implement the science framework of the GCP. 
The GCP is seeking a person with excellent working knowledge of the policy-relevant objectives of the GCP and a keen interest in devising methods to integrate social and policy sciences into the understanding of the carbon-climate system as a coupled human/natural system. Read More. Inclusive Wealth Report The International Human Dimensions Programme on Global Environmental Change (IHDP) announces the launch of the Inclusive Wealth Report 2012 (IWR 2012) at the Rio +20 Conference in Brazil. The report presents a framework that offers a long-term perspective on human well-being and sustainability, based on a comprehensive analysis of nations' productive base and their link to economic development. The IWR 2012 was developed on the notion that current economic indicators such as Gross Domestic Product (GDP) and the Human Development Index (HDI) are insufficient, as they fail to reflect the state of natural resources or ecological conditions, and focus exclusively on the short-term, without indicating whether national policies are sustainable. Future Earth: Global platform for sustainability research launched at Rio +20 Rio de Janeiro, Brazil (14 June 2012) - An alliance of international partners from global science, research funding and UN bodies launched a new 10-year initiative on global environmental change research for sustainability at the Forum on Science and Technology and Innovation for Sustainable Development. Future Earth - research for global sustainability, will provide a cutting-edge platform to coordinate scientific research which is designed and produced in partnership with governments, business and, more broadly, society. More details. APN's 2012 Call for Proposals The Asia-Pacific Network for Global Change Research (APN) announces the call for proposals for funding from April 2013. The proposals can be submitted under two separate programmes: regional global change research and scientific capacity development. More details. 
State of the Planet Declaration Planet Under Pressure 2012 was the largest gathering of global change scientists leading up to the United Nations Conference on Sustainable Development (Rio +20) with over 3,000 delegates at the conference venue and over 3,500 that attended virtually via live web streaming. The plenary sessions and the Daily Planet news show continue to draw audiences worldwide as they are available On Demand. An additional number of organisations, including 150 Science and Technology Centres worldwide streamed the plenary sessions at Planet Under Pressure-related events reaching an additional 12,000 viewers. The first State of the Planet Declaration was issued at the conference. Global Carbon Budget 2010 Global carbon dioxide emissions increased by a record 5.9 per cent in 2010 following the dampening effect of the 2008-2009 Global Financial Crisis (GFC), according to scientists working with the Global Carbon Project (GCP). The GCP annual analysis reports that the impact of the GFC on emissions has been short-lived owing to strong emissions growth in emerging economies and a return to emissions growth in developed economies. Planet Under Pressure 2012 Debategraph Debategraph and Planet Under Pressure Conference participants and organisers are collaborating to distill the main arguments and evidence, risks and policy options facing humanity into a dynamic knowledge map to help convey and inform the global deliberation at United Nations Rio +20 and beyond. Join the debate! (http://debategraph.org/planet) Integrated Global Change Research The ESSP and partners - the German National Committee on Global Change Research (NKGCF), International Council for Science (ICSU) and the International Social Science Council (ISSC) is conducting a new study on 'Integrated Global Change Research: Co-designing knowledge across scientific fields, national borders and user groups'. 
An international workshop (funded by the German Research Foundation) convened in Berlin, 7 - 9 March 2012, designed to elucidate the dimensions of integration, to identify and analyse best practice examples, to exchange ideas about new concepts of integration, to discuss emerging challenges for science, and to begin discussions about balancing academic research and stakeholder involvement. The Future of the World's Climate The Future of the World's Climate (edited by Ann Henderson-Sellers and Kendal McGuffie) offers a state-of-the-art overview - based on the latest climate science modelling data and projections available - of our understanding of future climates. The book is dedicated to Stephen H Schneider, a world leader in climate interpretation and communication. The Future of the World's Climate summarizes our current understanding of climatic prediction and examines how that understanding depends on a keen grasp of integrated Earth system models and human interaction with climate. This book brings climate science up to date beyond the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report. More details. Social Scientists Call for More Research on Human Dimensions of Global Change Scientists across all disciplines share great concern that our planet is in the process of crossing dangerous biophysical tipping points. The results of a new large-scale global survey among 1,276 scholars from the social sciences and the humanities demonstrates that the human dimensions of the problem are equally important but severely under-addressed. 
The survey conducted by the International Human Dimensions Programme on Global Environmental Change (IHDP-UNU) Secretariat in collaboration with the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the International Social Science Council (ISSC), identifies the following as highest research priority areas: 1) Equity/equality and wealth/resource distribution; 2) Policy, political systems/governance, and political economy; 3) Economic systems, economic costs and incentives; 4) Globalization, social and cultural transitions. Food Security and Global Environmental Change Food security and global environmental change, a synthesis book edited by John Ingram, Polly Ericksen and Diana Liverman of GECAFS has just been published. The book provides a major, accessible synthesis of the current state of knowledge and thinking on the relationship between GEC and food security. Click here for further information. GECAFS is featured in the latest UNESCO-SCOPE-UNEP Policy Brief - No. 12 entitled Global Environmental Change and Food Security. The brief reviews current knowledge, highlights trends and controversies, and is a useful reference for policy planners, decision makers and stakeholders in the community. GWSP Digital Water Atlas The Global Water System Project (GWSP) has launched its Digital Water Atlas. The purpose and intent of the Digital Water Atlas is to describe the basic elements of the Global Water System, the interlinkages of the elements and changes in the state of the Global Water System by creating a consistent set of annotated maps. The project will especially promote the collection, analysis and consideration of social science data on the global basis. Click here to access the GWSP Digital Water Atlas. The ESSP office was carbon neutral in its office operations and travel in 2011. The ESSP supported the Gujarat wind project in India. More details. 
The Global Carbon Project has published an ESSP-commissioned report, "Carbon reductions and offsets", with a number of recommendations for individuals and institutions who want to participate in this voluntary market. Click here to learn more and to download the report from the GCP website. The ESSP is a joint initiative of four global environmental change programmes:
<urn:uuid:120b0d29-fee9-445b-a798-b67b2cbeb131>
CC-MAIN-2013-20
http://www.essp.org/index.php?id=10&L=0%252F%252Fassets%252Fsnipp%20%E2%80%A6%2F%2Fassets%2Fsnippets%2Freflect%2Fsnippet.reflect.php%3Freflect_base%3D
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.894252
2,191
2.984375
3
[ "climate", "nature" ]
{ "climate": [ "carbon budget", "carbon dioxide", "climate change", "climate system", "food security", "ipcc" ], "nature": [ "ecological" ] }
{ "strong": 4, "weak": 3, "total": 7, "decision": "accepted_strong" }
To preserve our planet, scientists tell us we must reduce the amount of CO2 in the atmosphere from its current level of 392 parts per million ("ppm") to below 350 ppm. But 350 is more than a number—it's a symbol of where we need to head as a planet. At 350.org, we're building a global grassroots movement to solve the climate crisis and push for policies that will put the world on track to get to 350 ppm. Scientists say that 350 parts per million CO2 in the atmosphere is the safe limit for humanity. Learn more about 350—what it means, where it came from, and how to get there. Read More » Submit your success story from your work in the climate movement and we'll share the best ones on our blog and social networks! Stories from people like you are crucial tools in growing the climate movement. Help spread the word and look good while doing it—check out the 350 Store for t-shirts, buttons, stickers, and more.
<urn:uuid:ca6b4c4b-78a7-46de-b5b0-5091760e0798>
CC-MAIN-2013-20
http://350.org/en/about/blogs/story-endfossilfuelsubsidies
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.928506
206
3.078125
3
[ "climate" ]
{ "climate": [ "co2" ], "nature": [] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
Respiratory distress syndrome (RDS) occurs most often in infants who are born too early. RDS can cause breathing difficulty in newborns. If it is not properly treated, RDS can result in complications, including pneumonia, respiratory failure, chronic lung problems, and possibly asthma. In severe cases, RDS can lead to convulsions and death. RDS occurs when an infant's lungs have not developed enough. Immature lungs lack a fluid called surfactant. This is a foamy liquid that helps the lungs open wide and take in air. When there is not enough surfactant, the lungs do not open well. This makes it difficult for the infant to breathe. The chance of developing RDS decreases as the fetus grows. Babies born after 36 weeks rarely develop this condition. A risk factor is something that increases your chance of getting a disease or condition. Factors that increase your baby's risk of RDS include: - Birth before 37 weeks; increased risk and severity of condition with earlier prematurity - Mother with insulin-dependent diabetes - Multiple birth - Cesarean section delivery - Cold stress - Precipitous delivery - Previously affected infant - Being male - Hypertension (high blood pressure) during pregnancy The following symptoms usually start immediately or within a few hours after birth and include: - Difficulty breathing, apnea - Rapid, shallow breathing - Delayed or weak cry - Grunting noise with every breath - Flaring of the nostrils - Frothing at the lips - Blue color around the lips - Swelling of the extremities - Decreased urine output The doctor will ask about the mother's medical history and pregnancy. The baby will also be evaluated, as outlined here: Amniotic fluid is fluid that surrounds the fetus.
It may be tested for indicators of well-developed lungs such as: - Lecithin:sphingomyelin ratio - Phosphatidyl glycerol - Laboratory studies—done to rule out infection - Physical exam—includes checking the baby's breathing and looking for bluish color around the lips or on trunk - Testing for blood gases—to check the levels of oxygen and carbon dioxide in the blood - Chest x-ray—a test that uses radiation to take a picture of structures inside the body, in this case the heart and lungs Treatment for a baby with RDS usually includes oxygen therapy and may also include: A mechanical respirator is a breathing machine. It is used to keep the lungs from collapsing and support the baby's breathing. The respirator also improves the exchange of oxygen and other gases in the lungs. A respirator is almost always needed for infants with severe RDS. Surfactant can be given to help the lungs open. Wider lungs will allow the infant to take in more oxygen and breathe normally. One type of surfactant comes from cows and the other is synthetic. Both options are delivered directly into the infant's windpipe. Inhaled Nitric Oxide Nitric oxide is a gas that is inhaled. It can make it easier for oxygen to pass into the blood. The gas is often delivered during mechanical ventilation. Newborns with RDS may be given food and water by the following means: - Tube feeding—a tube is inserted through the baby's mouth and into the stomach - Parenteral feeding—nutrients are delivered directly into a vein Preventing a premature birth is the best way to avoid RDS. To reduce your chance of having a premature baby: - Get good prenatal care. Start as early as possible in pregnancy. - Eat a healthful diet. Take vitamins as suggested by your doctor. - Do not smoke. Avoid alcohol or drug use. - Only take medicines that your doctor has approved. If you are at high risk of giving birth to a premature baby: - You may be given steroids about 24 hours before delivery.
Steroids can help your baby's lungs develop faster. - Your doctor may do an amniocentesis. This test will check the maturity of your baby's lungs. The results will help determine the best time for delivery. Reviewer: Michael Woods - Review Date: 09/2012 - Update Date: 00/91/2012
<urn:uuid:7d8d5dec-597e-4805-b78e-8e623e0c5fb5>
CC-MAIN-2013-20
http://medtropolis.com/your-health/?/11599/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.926661
904
3.4375
3
[ "climate" ]
{ "climate": [ "carbon dioxide" ], "nature": [] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
According to Wikipedia, the definition of Earth Day is “a day early each year on which events are held worldwide to increase awareness and appreciation of the Earth’s natural environment.” Earth Day started in 1970 and is celebrated nationally and in over 175 countries worldwide. Do you celebrate Earth Day in your home? My family and I love to celebrate our mother earth not only on Earth Day, but every day. Each step we make in our home to become a greener family means we are doing that much more to reduce our carbon footprint. Cox Communications, my local cable company, wanted a way to reduce their carbon footprint as well, so they started Cox Conserves in 2007. Launched in 2007, Cox Conserves is the company’s national sustainability program that is designed to reduce Cox Enterprises’ carbon footprint by 20 percent. Cox Conserves seeks to reduce Cox Enterprises’ energy consumption by embracing alternative energy, conserving natural resources and inspiring eco-friendly behavior. The program engages each of the company’s major subsidiaries (Cox Communications, Manheim, Cox Media Group and AutoTrader.com) and encourages Cox Enterprises’ more than 50,000 employees and their families to engage in eco-friendly practices. For more information, visit: http://www.coxconserves.com/ How does Cox Conserves make a difference right here in Las Vegas? How does Cox Conserves make a difference nationally? Alternative Energy – Cox actively identifies opportunities to harness solar energy and employ fuel cell technology, annually preventing more than 17,400 tons of greenhouse gases from entering the environment through its alternative energy projects. Energy Conservation – Across Cox Enterprises, programs are in place to conserve energy, including producing alternative energy, constructing eco-friendly buildings, greening the company fleet, conserving resources and recycling materials.
Waste Management – Cox Enterprises employs a holistic approach to waste management including waste reduction, strategic partnerships for e-waste and customer engagement. Water Conservation – Cox’s water conservation efforts save more than 20 million gallons of water annually and return high-quality reusable water to the community. How can you celebrate Earth Day every day? 5 Tips for a Green Home! - Reduce your junk mail stream by joining a no-send list. This is where my family is listed. - Use compact fluorescent light (CFL) bulbs. They use 2/3 less energy than standard incandescent light bulbs and last up to 10 times longer. If every household in the U.S. replaced one light bulb with a CFL, it would prevent pollution equal to taking more than 800,000 cars off the road for an entire year. This was one of the first things we did when we moved into a new home. - Buy bulk items. A family of four can save $2,000/year in the supermarket by choosing large sizes instead of individual serving sizes. This also reduces production of plastic wrappers and boxes. We love shopping at our local super store. - Avoid using non-native plants that may be invasive in your area. We have desert landscaping to help reduce our use of water. - Skip pre-packaged items for school lunches and cut up your own cheese, meats and veggies to cut down on wasteful packaging. “Treat the Earth well. It was not given to you by your parents, it was loaned to you by your children.” ~ Kenyan proverb Disclosure: This post was written on behalf of Cox Communications. All my thoughts and opinions are my own.
<urn:uuid:a914b035-07ca-495a-bfbf-c1284c8a7e78>
CC-MAIN-2013-20
http://www.crazyaboutmybaybah.com/celebrate-earth-day-cox-communications/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.931487
802
3.109375
3
[ "nature" ]
{ "climate": [], "nature": [ "conservation" ] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
Darryl D’Monte continues his reportage of the Climate Action Network International meet in Bangkok where, he says, two NGOs put forward blueprints that could be templates on which the new climate treaty is based. Two NGOs -- the World Wide Fund for Nature (WWF) and a consortium led by Greenpeace -- have put forward blueprints that could be templates on which the new climate treaty, due to be negotiated in Copenhagen in December, is based. WWF employs the concept of greenhouse development rights (GDRs), which have earlier also been propagated by the Stockholm Environment Institute and others. This August, it released a report titled ‘Sharing the effort under a global carbon budget’. WWF says: “A strict global carbon budget between now and 2050 based on a fair distribution between rich and poor nations has the potential to prevent dangerous climate change and keep temperature rise well below 2 degrees Celsius.” The report is based on research and shows different ways to cut global emissions by at least 80% by 2050, and by 30% by 2030, compared to 1990 levels. Both the EU and US have agreed to this 2050 target but differ drastically on the intermediate goals, which have a vital bearing on keeping global temperatures from rising above 2 degrees C, beyond which there will be catastrophic climate changes. “In order to avoid the worst and most dramatic consequences of climate change, governments need to apply the strictest measures to stay within a tight and total long-term global carbon budget,” said Stephan Singer, director of global energy policy at WWF. “Ultimately, a global carbon budget is equal to a full global cap on emissions.” According to the analysis, the total carbon budget -- the amount of tolerable global emissions over a period of time -- has to be set roughly at 1,600 Gt CO2 eq (gigatonnes of carbon dioxide equivalent) between 1990 and 2050.
As the world has already emitted a large part of this, the budget from today until 2050 is reduced to 970 Gt CO2 eq, excluding land use changes. The report evaluates different pathways to reduce emissions, all in line with the budget. It describes three different methodologies which could be applied to distribute the burden and the benefits of a global carbon budget in a fair and equitable way. - Greenhouse development rights (GDRs), where all countries need to reduce emissions below business-as-usual based on their per capita emissions, poverty thresholds, and GDP per capita. - Contraction and convergence (C&C), where per capita allowances converge from a country’s current level to a level equal for all countries within a given period. - Common but differentiated convergence (CDC), where developed countries’ per capita emissions converge to an equal level for all countries and others converge to the same level once their per capita emissions reach a global average. The report says that by 2050, the GDR methodology requires developed nations as a group to reduce emissions by 157% (twice what they are contemplating). “Given that they cannot cut domestic emissions by more than 100%, they will need to finance emission reductions in other countries to reach their total.” While the greenhouse development rights method allows an increase for most developing countries, at least for the initial period, the two other methods give less room for emissions increase. Under the C&C and CDC methodology, China, for example, would be required to reduce by at least 70% and India by 2-7% by 2050, compared to 1990. The poorest countries will be allowed to continue to grow emissions until at least 2050 under the GDR methodology, but will be required to reduce them after 2025 under the two remaining allocation options. The Greenpeace proposal, which has WWF and other partners, was released at an earlier UN climate meet in Bonn this year. It also talks of a global carbon budget. 
Industrial countries would have to phase out their fossil fuel energy consumption by 2050. The reduction trajectory would be as follows: 23% between 2013 and 2017, 40% by 2020 (twice the EU commitment), and 95% by 2050. Globally, deforestation emissions would need to be reduced by three-quarters by 2020, and fossil fuel consumption by developing countries would have to peak by 2020 and then decline. The proposal envisages that industrial countries will provide at least $160 billion a year from 2013 to 2017, “with each country assuming national responsibility for an assessed portion of this amount as part of its binding national obligation for the same period”. The main source of this funding, which could prove controversial, would be auctioning 10% of industrial countries’ emissions allocations. There would also be levies on aviation and shipping, since both add to global warming. Greenpeace proposes a Copenhagen climate facility which would apportion $160 billion as follows: - $56 billion for developing countries to adapt to climate change. - $7 billion a year as insurance against such risks. - $42 billion in reducing forest destruction and degradation. - $56 billion on mitigation and technology diffusion. Talks at Bangkok are deadlocked between the G77 and China, which want to continue with the Kyoto Protocol, and the US, which wants a new treaty. The EU is open to a continuation of the old treaty with a new track to include the US (which has not ratified Kyoto), as well as emerging developing countries. Where and how such proposals will dovetail with the document now being negotiated is by no means clear, and it will be nothing less than a catastrophe for the entire planet if Copenhagen ends in a stalemate. Infochange News & Features, October 2009
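All three burden-sharing methodologies described above turn on per capita convergence. As a minimal illustration, contraction and convergence (C&C) can be sketched as a linear interpolation of per capita allowances; every figure in the sketch below is invented for illustration and none is taken from the WWF report.

```python
# Minimal linear "contraction and convergence" (C&C) sketch: per capita
# allowances move from each country's current level to a single shared
# level by a target year. All figures are illustrative, not from the report.

def cc_allowance(current_per_capita, equal_level, start_year, target_year, year):
    """Per capita emission allowance (t CO2 eq) in a given year under C&C."""
    if year >= target_year:
        return equal_level
    frac = (year - start_year) / (target_year - start_year)
    return current_per_capita + frac * (equal_level - current_per_capita)

# Hypothetical industrialised (16 t/capita) and developing (2 t/capita)
# countries converging to 2 t/capita over 2010-2050.
for year in (2010, 2030, 2050):
    print(year,
          round(cc_allowance(16.0, 2.0, 2010, 2050, year), 1),
          round(cc_allowance(2.0, 2.0, 2010, 2050, year), 1))
```

Common but differentiated convergence (CDC) differs mainly in when a developing country starts converging; the same helper could be gated on per capita emissions reaching the global average.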
<urn:uuid:6325c38b-7f5b-4153-9540-46ea973e92c7>
CC-MAIN-2013-20
http://infochangeindia.org/environment/news/greenhouse-development-rights-and-global-carbon-budgets.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704392896/warc/CC-MAIN-20130516113952-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.950729
1,164
3.234375
3
[ "climate", "nature" ]
{ "climate": [ "carbon budget", "carbon dioxide", "climate change", "co2", "global warming", "temperature rise" ], "nature": [ "deforestation" ] }
{ "strong": 6, "weak": 1, "total": 7, "decision": "accepted_strong" }
The Ancient Forests of North America are extremely diverse. They include the boreal forest belt stretching between Newfoundland and Alaska, the coastal temperate rainforest of Alaska and Western Canada, and the myriad of residual pockets of temperate forest surviving in more remote regions. Together, these forests store huge amounts of carbon, helping to stabilise the climate. They also provide a refuge for large mammals such as the grizzly bear, puma and grey wolf, which once ranged widely across the continent. In Canada it is estimated that ancient forest provides habitat for about two-thirds of the country's 140,000 species of plants, animals and microorganisms. Many of these species are yet to be studied by science. The Ancient Forests of North America also provide livelihoods for thousands of indigenous people, such as the Eyak and Chugach people of Southcentral Alaska, and the Hupa and Yurok of Northern California. Of Canada's one million indigenous people (First Nation, Inuit and Métis), almost 80 percent live in reserves and communities in boreal or temperate forests, where historically the forest provided their food and shelter, and shaped their way of life. Through the Trees - The truth behind logging in Canada (PDF) On the Greenpeace Canada website: Interactive map of Canada's Boreal forest (Flash) Fun animation that graphically illustrates the problem (Flash) Defending America's Ancient Forests
<urn:uuid:8b41dac0-bc58-4cef-9f95-8333e7c91598>
CC-MAIN-2013-20
http://www.greenpeace.org/international/en/campaigns/forests/north-america/?tab=0
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704392896/warc/CC-MAIN-20130516113952-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.906006
298
3.9375
4
[ "climate", "nature" ]
{ "climate": [ "climate change" ], "nature": [ "habitat" ] }
{ "strong": 2, "weak": 0, "total": 2, "decision": "accepted_strong" }
Blue Ocean Research & Conservation Discovering the Ocean Our scientific research has focused recently on an often overlooked yet ecologically important part of many ecosystems: sponges. Numbering over 8,000 species, they are common to many aquatic habitats, from the Ross Shelf in Antarctica and Lake Baikal in Russia to rocky reefs in South Africa and coral reefs throughout the Caribbean. Sponges are critical to coral reef survival. Sponges filter and clean the water, provide shelter for commercially important species like juvenile lobsters, and are eaten by many fish and turtles. On many coral reefs, sponges outnumber corals, providing three-dimensional structure that creates habitat and refuge for thousands of species. Because of their large size and kaleidoscopic colors, sponges help fuel the popular and lucrative diving and tourism industries. As climate change results in warmer, more acidic oceans, all marine life is potentially affected. It is well known that coral health declines under these conditions, but the effect on sponge growth and survival is unknown. Significant declines in sponge health and biomass would be catastrophic to coral reefs, reducing water quality and severely impacting thousands of species from symbiotic microbes to foraging hawksbill turtles. A major loss of sponges would not only negatively impact marine life, but also local communities that depend on reefs for coastal protection and food. Blue Ocean’s research scientist, Dr. Alan Duckworth, studied the effects of warmer, more acidic waters on the sponge Cliona celata, which bores into the shells of scallops and oysters, weakening and eventually killing them. Alan hypothesized that because climate change will result in shellfish having weaker shells, these sponges could cause greater losses of shellfish. This study has been done in collaboration with Dr. Bradley Peterson of Stony Brook University.
Alan’s other area of study was the first climate change experiment focused on tropical sponges. It investigated the effects of warmer, more acidic water on the growth, survival, and chemistry of several Caribbean coral reef sponges. This study was based at the Discovery Bay Marine Lab in Jamaica and chemical analysis of sponge samples was completed by Dr. Lyndon West from Florida Atlantic University. Putting Teeth in Shark Conservation The goal of this fellowship is to help small, island nations by strengthening their ability to identify illegal shark fishing and enforce recently established shark sanctuaries. It will help provide much needed scientific research, training, outreach and DNA-testing tools which can then be used to help protect valuable marine sanctuaries worldwide.
<urn:uuid:eeb90433-e247-4f67-89ff-aa11c0db8add>
CC-MAIN-2013-20
http://blueocean.org/programs/blue-ocean-research/?imgpage=1&showimg=491
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.939833
525
3.703125
4
[ "climate", "nature" ]
{ "climate": [ "climate change" ], "nature": [ "conservation", "ecosystems", "habitat" ] }
{ "strong": 4, "weak": 0, "total": 4, "decision": "accepted_strong" }
Robin Yapp, Contributor December 10, 2012 | 5 Comments Saudi Arabia's newly announced commitment to introducing solar-powered desalination plants marks a welcome and significant step in advancing the technology. In October 2012, Abdul Rahman Al-Ibrahim, governor of the country's Saline Water Conversion Corporation (SWCC), which procures the majority of its extensive municipal desalination assets, announced plans to establish three new solar-powered desalination plants in Haqel, Dhuba and Farasan. SWCC is the biggest producer of desalinated water worldwide, accounting for 18% of global output. Energy-intensive desalination plants have traditionally run on fossil fuels, but renewables, particularly solar power, are now beginning to play a part. Around half the operating cost of a desalination plant comes from energy use, and on current trends Saudi Arabia and many other countries in the region would consume most of the oil they produce on desalination by 2050. The dominant desalination technology at present, with around 60% of global capacity, is Reverse Osmosis (RO), which pushes brine water through a membrane that retains the salt and other impurities. Thermal desalination uses heat as well as electricity in distillation processes with saline feedwater heated to vaporise, so fresh water evaporates and the brine is left behind. Cooling and condensation are then used to obtain fresh water for consumption. Multi Stage Flash (MSF), the most common thermal technique accounting for around 27% of global desalination capacity, typically consumes 80.6 kWh of heat energy plus 2.5-3.5 kWh of electricity per m3 of water. Large scale RO requires only around 3.5-5 kWh/m3 of electricity. According to the International Renewable Energy Agency (IRENA), desalination with renewable energy can already compete cost-wise with conventional systems in remote regions where the cost of energy transmission is high. 
Elsewhere, it is still generally more expensive than desalination plants using fossil fuels, but IRENA states that it is 'expected to become economically attractive as the costs of renewable technologies continue to decline and the prices of fossil fuels continue to increase.' Solar Reducing Costs SWCC has taken a long view and aims to gradually convert all its desalination plants to run on solar power as part of a drive unveiled by the Saudi government earlier this year to install 41 GW of solar power by 2032. The Al-Khafji solar desalination project, near the border with Kuwait, will become the first large-scale solar-powered seawater reverse osmosis (SWRO) plant in the world, producing 30,000 m3 of water per day for the town's 100,000 inhabitants. Due for completion at the end of 2012, it has been constructed by King Abdulaziz City for Science and Technology (KACST), the Saudi national science agency, using technology developed in conjunction with IBM. Innovations include a new polymer membrane to make RO more energy efficient and protect the membrane from chlorine - which is used to pretreat seawater - and clogging with oil and marine organisms. The use of solar power will bring huge cuts to the facility's contribution to global warming and smog compared to use of RO or MSF with fossil fuels, according to the developers. Al-Khafji is the first step in KACST's solar energy programme to reduce desalination costs. For phase two, construction of a new plant to produce 300,000 m3 of water per day is planned by 2015, and phase three will involve several more plants by 2018. Historically, desalination plants have been concentrated in the Persian Gulf region, where there is no alternative for maintaining the public water supply. The region has excellent solar power prospects, suggesting that coupling of the two technologies may become commonplace. 
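For scale, the per-cubic-metre figures quoted above can be turned into a rough daily energy comparison at Al-Khafji's stated capacity of 30,000 m3/day. This is a back-of-envelope sketch: the capacity and per-m3 energy figures come from the article, while the helper functions and the 4.25 kWh/m3 midpoint assumed for RO are our own.

```python
# Rough daily energy demand for MSF vs RO desalination at a 30,000 m3/day
# plant. Per-m3 figures quoted in the text: MSF ~80.6 kWh thermal plus
# ~3 kWh electric (midpoint of 2.5-3.5); large-scale RO ~3.5-5 kWh
# electric (we assume the 4.25 midpoint). Function names are illustrative.

CAPACITY_M3_PER_DAY = 30_000  # Al-Khafji SWRO plant output

def msf_daily_energy_kwh(capacity_m3, heat_kwh_per_m3=80.6, elec_kwh_per_m3=3.0):
    """Total daily thermal + electric energy for an MSF plant, in kWh."""
    return capacity_m3 * (heat_kwh_per_m3 + elec_kwh_per_m3)

def ro_daily_energy_kwh(capacity_m3, elec_kwh_per_m3=4.25):
    """Daily electricity for a reverse-osmosis plant, in kWh."""
    return capacity_m3 * elec_kwh_per_m3

print(f"MSF: {msf_daily_energy_kwh(CAPACITY_M3_PER_DAY) / 1e6:.2f} GWh/day")
print(f"RO:  {ro_daily_energy_kwh(CAPACITY_M3_PER_DAY) / 1e6:.2f} GWh/day")
```

On these numbers, RO needs roughly one-twentieth of the total energy of MSF per cubic metre, which is one reason RO dominates new capacity.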
A pilot project to construct 30 small-scale solar desalination plants by the Environment Agency Abu Dhabi has already seen 22 plants in operation, each producing 25 m3 of potable water per day. But population increases and looming water scarcity have also prompted widespread investment in desalination. It is now practised in some 150 countries, including the US, Australia, China and Japan, and across Europe, and it is becoming an increasingly attractive option both financially and for supply security. Over the past five years the capacity of operational desalination plants has increased by 57% to 78.4 million m3 per day, according to the International Desalination Association. Sharply falling technology costs have been a key driver of the trend and an EU-funded project is examining the case for expanding solar-powered desalination. Solar power may even offer a solution to an impending crisis in Yemen, where water availability per capita is less than 130 m3/year. Yemen's capital Sana'a, with a population of two million, faces running out of groundwater before 2025. It is estimated that a solar plant powered by a 1250 MW parabolic trough to desalinate water from the Red Sea and pump it 250 km to Sana'a could be constructed for around $6 billion. Around 700 million people in 43 countries are classified by the UN as suffering from water scarcity today - but by 2025 the figure is forecast to rise to 1.8 billion. With the global population expected to reach nine billion by 2050 and the US secretary of state openly discussing the threat of water shortages leading to wars, desalinated water has never been more important.
Demand for desalinated water is projected to grow by 9% per year until 2016 due to increased consumption in the Middle East and North Africa (MENA) and in energy-importing countries such as the US, India and China. Population growth and depletion of surface and groundwater mean desalination capacity in the MENA region is expected to grow from 21 million m3/day in 2007 to 110 million m3/day in 2030, according to the International Energy Agency. US President John F. Kennedy, speaking in 1962, said: 'If we could produce fresh water from salt water at a low cost, that would indeed be a great service to humanity, and would dwarf any other scientific accomplishment.' In the half century since, the need for innovation to satisfy humanity's demand for clean water has become ever more urgent. While technological advances continue to improve the efficiency of desalination methods, it is vital that the sources of power used by desalination plants also continue to evolve.
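The MENA capacity projection above (21 million m3/day in 2007 to 110 million m3/day in 2030) implies a compound annual growth rate that can be checked directly. A quick sketch, where the function name is ours and the figures come from the article:

```python
# Compound annual growth rate implied by a start/end capacity projection.
# Figures: MENA desalination capacity of 21 million m3/day (2007) growing
# to 110 million m3/day (2030), as quoted in the text.

def implied_annual_growth(start, end, years):
    """Return the compound annual growth rate implied by start -> end."""
    return (end / start) ** (1.0 / years) - 1.0

rate = implied_annual_growth(21.0, 110.0, 2030 - 2007)
print(f"{rate:.1%} per year")  # roughly 7.5% per year
```

That works out slightly below the 9% annual demand growth projected for the nearer term, which is plausible for a projection spanning more than two decades.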
<urn:uuid:fdd0471b-1300-4783-b88d-a84868202b69>
CC-MAIN-2013-20
http://www.renewableenergyworld.com/rea/news/article/2012/12/solar-energy-and-water-solar-powering-desalination?cmpid=rss
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00002-ip-10-60-113-184.ec2.internal.warc.gz
en
0.937564
1,390
3.53125
4
[ "climate" ]
{ "climate": [ "global warming", "renewable energy" ], "nature": [] }
{ "strong": 2, "weak": 0, "total": 2, "decision": "accepted_strong" }
S7 Technical Assistance Eastern prairie fringed orchid (Platanthera leucophaea) Determining whether eastern prairie fringed orchid (Platanthera leucophaea) may be present in a proposed project area in the following northeastern Illinois counties: Cook, DuPage, Kane, Lake, McHenry and Will. Photo by Mike Redmer As part of Step 1 of the S7 process, you checked the Illinois species list and found that eastern prairie fringed orchid is present in the county where your proposed project is located. The next step is to determine whether eastern prairie fringed orchid "may be present" in the action area of your proposed project. Below is guidance to help you make that determination. Habitat: The eastern prairie fringed orchid (orchid) occurs in a wide variety of habitats, from wet to mesic prairie or wetland communities, including, but not limited to, sedge meadow, fen, marsh, or marsh edge. It can occupy a very wide moisture gradient of prairie and wetland vegetation. It requires full sun for optimal growth and flowering, which ideally would restrict it to grass- and sedge-dominated plant communities. However, in some plant communities where there are encroaching species such as cattail and/or dogwood, the orchid may be interspersed or within the edge zones of these communities and thus can sometimes occur in partially shaded areas. The substrates of the sites where this orchid occurs include glacial soils, lake plain deposits, muck, or peat, which could range from more or less neutral to mildly calcareous (Bowles et al. 2005, USFWS 1999). In some cases, the species may also occur along ditches or roadways where this type of habitat is present. Processes that maintain habitats in early or mid-successional phases may be important in providing the sunny, open conditions required by the orchid (USFWS 1999). Sedge meadow and marsh habitats that support this orchid are usually early- or mid-successional because of past grazing, drainage, or soil disturbance.
Patch disturbances that expose the soil to this orchid’s seeds, and reduce competition from established plants, may be needed for seedling establishment. Hawkmoths are the pollinators of this orchid species. In Illinois the hawkmoth, Sphinx eremitus is a confirmed pollinator although there may be others. Eumorpha pandorus and Eumorpha achemon have been confirmed as pollinators in other states. Host plants for the caterpillars of Sphinx eremitus include various species of beebalm (Monarda spp.), mints (Mentha spp.), bugleweed (Lycopus spp.) and sage (Salvia spp.). Follow the steps below to determine whether the eastern prairie fringed orchid may be present in the action area of your proposed project. This guidance is specific to Cook, Lake, McHenry, DuPage, Kane, and Will counties in northeastern Illinois. 1) Define the action area – all areas to be affected directly or indirectly by the Federal action and not just in the immediate area involved in the action. (For example: downstream areas, adjacent off-site wetlands, etc.) 2) Does the action area support any wet to mesic prairie or wetland communities including, but not limited to sedge meadow, fen, or marsh edges? If the answer is yes, go to number 3 (below). If the answer is no, conclude that “the eastern prairie fringed orchid is not present,” document your finding for your records or provide this information to the federal action agency. No further consultation is required. 3) Conduct a floristic quality assessment for the proposed project site during the growing season or use a previous assessment that is not more than three years old and was conducted during the growing season. 4) If any wetland in the action area is determined to be high quality, (a Floristic Quality Index of 20 or greater and/or a Native Mean C of 3.5 or greater) proceed to number 5 (below) or contact the Chicago Field Office for further consultation. Wetlands that are not high quality will not support eastern prairie fringed orchid. 
For those wetlands, conclude that "the eastern prairie fringed orchid is not present," and document your finding for your records or provide this information to the federal action agency. No further consultation is required for those wetlands.

5) Compare the plant species list generated for each high-quality wetland with our Associate Plant Species List for the Eastern Prairie Fringed Orchid. If four or more associates are listed, proceed to number 6. High-quality wetlands that support three or fewer eastern prairie fringed orchid associate plant species are unlikely to support eastern prairie fringed orchids. For those wetlands, conclude that "the eastern prairie fringed orchid is not present," and document your finding for your records or provide this information to the federal action agency. No further consultation is necessary for those wetlands.

6) The eastern prairie fringed orchid may be present in your action area. You may either assume that the orchid is present and proceed to Step 2 of the Section 7 Consultation Process, or conduct a field search during the bloom period of the orchid, June 28 through July 11. Because northeastern Illinois orchid populations bloom sporadically rather than all plants blooming at the same time, searches should be conducted on a minimum of three non-consecutive days within this period. Please notify the Chicago Field Office before conducting your survey.

7) If you assume that the eastern prairie fringed orchid may be present in the action area, or a field search shows that the orchid is present, the next step in the S7 Consultation Process is to determine whether the proposed action may affect any eastern prairie fringed orchids. Go to Step 2 of the S7 Consultation Process to begin that determination.

Please contact the U.S. Fish and Wildlife Service's Chicago Illinois Field Office if you need more information or have any questions.
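The numbered screening steps above amount to a simple decision procedure. As an illustrative sketch only (the function name and inputs are hypothetical; the real determination rests on professional judgment and, where indicated, bloom-period field surveys):

```python
def orchid_may_be_present(has_wet_or_mesic_habitat, fqi, native_mean_c,
                          n_associate_species):
    """Illustrative screening logic mirroring the numbered guidance.

    A "not present" conclusion is supported when the action area lacks a
    suitable wetland community (step 2), when the wetland is not high
    quality (step 4: FQI >= 20 and/or Native Mean C >= 3.5), or when it
    supports fewer than four associate plant species (step 5). Otherwise
    the orchid may be present (step 6) and Step 2 of the Section 7
    process, or a survey, follows.
    """
    if not has_wet_or_mesic_habitat:
        return False  # step 2: no suitable community
    high_quality = fqi >= 20 or native_mean_c >= 3.5
    if not high_quality:
        return False  # step 4: wetland not high quality
    if n_associate_species < 4:
        return False  # step 5: too few associate species
    return True       # step 6: may be present

# A degraded wetland (FQI 12, Native Mean C 2.8) screens out:
assert orchid_may_be_present(True, 12, 2.8, 6) is False
# A high-quality sedge meadow with five associates screens in:
assert orchid_may_be_present(True, 24, 3.9, 5) is True
```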
Chicago Illinois Field Office
1250 South Grove, Suite 103
Barrington, Illinois 60010
Phone: 847/381-2253, ext. 20

1 Swink, F. and G. Wilhelm. 1994. Plants of the Chicago Region, 4th ed. Indiana Academy of Science, Indianapolis. 921 pp.
A species on the brink – Freshwater Pearl Mussel

They can live for over a century, they have one of the most bizarre life-cycles of any species you're likely to find, and they're one of the reasons why the Romans invaded Britain. What's more, their future is in our hands.

The freshwater pearl mussel might not have the glamour of some other iconic 'Scottish' species like the golden eagle or red squirrel, but they're incredibly important. Scotland holds around half of the world's population of this fascinating creature, and they are currently balancing on a knife-edge. Populations are found in some of the rivers flowing into Loch Ness.

Freshwater pearl mussels are very slow-growing and live at the bottom of clean, generally fast-flowing rivers. These animals, which spend the early part of their life harmlessly attached to the gills of trout and salmon before settling onto a suitable substrate, are now extinct across most of their former range. Highland rivers are a stronghold for the species.

As their name suggests, they very occasionally bear a pearl, and this has in many ways led to their downfall. The taking of mussels by 'pearl-fishers' has been the main reason for the massive declines in these populations, but they have also been affected by pollution and river-engineering works. The freshwater pearl mussel was given full legal protection in 1998, but unfortunately illegal activity still continues. Every year we still come across significant 'kills' where piles of hundreds of empty shells mark the scene of a few hours' illegal fishing – a whole population of this globally threatened species can be wiped out in a couple of hours. These threats have made the freshwater pearl mussel a UK Wildlife Crime priority.
This means that the police work closely with Scottish Natural Heritage, anglers, bailiffs, river users and a wide range of other organisations to help tackle these crimes by improving awareness, collecting intelligence and strengthening enforcement to safeguard the species. However, whilst the police and other organisations do their best to tackle wildlife crime in this way, they can't do it alone, and the help of the public can be absolutely crucial.

Pearl fishing is often carried out in remote locations, or very early in the morning when there is less chance of being detected, and often during the summer when daylight hours are long and the rivers are low. Fishing is often carried out by wading out into rivers, using glass-bottomed buckets to find the mussels and a cleft stick to recover them. If anyone sees or suspects that pearl fishing is taking place, we urge them to report it to their local police station and Wildlife Crime Officer as soon as possible.

At the same time there are numerous projects aimed at active conservation of the species, including a recent £3.5 million project funded by the European Commission's LIFE+ fund and secured by Scottish Natural Heritage and 14 other organisations. The project will improve habitats for the species, encourage simple and effective positive management of rivers where they are present, and improve awareness and understanding of the species, as well as helping address wildlife crime issues.

So it's not all doom and gloom for this remarkable species, and everybody has a role in ensuring its survival – let's hope that we can bring the freshwater pearl mussel back from the brink. If you would like more information on the species, visit SNH's website at http://www.snh.gov.uk/about-scotlands-nature/species/invertebrates/freshwater-invertebrates/freshwater-pearl-mussel/
The Chesapeake Bay TMDL, Maryland's Watershed Implementation Plan and Maryland's 2012-2013 Milestone Goals

The Chesapeake Bay TMDL: A Pollution Diet for the Chesapeake Watershed

The Chesapeake Bay is a national treasure: the largest estuary in the United States and one of the largest and most biologically productive estuaries in the world. Despite significant efforts by federal, state, and local governments and other interested parties, pollution in the Chesapeake Bay prevents the attainment of existing water quality standards. The pollutants largely responsible for impairment of the Bay are nutrients, in the form of nitrogen and phosphorus, and sediment.

The United States Environmental Protection Agency (EPA), in coordination with the Bay watershed jurisdictions of Maryland, Virginia, Pennsylvania, Delaware, West Virginia, New York, and the District of Columbia (DC), developed and, on December 29, 2010, established a nutrient and sediment pollution diet for the Bay, consistent with Clean Water Act requirements, to guide and assist Chesapeake Bay restoration efforts. This pollution diet is known as the Chesapeake Bay Total Maximum Daily Load (TMDL), or Bay TMDL.

MDE took part in an ongoing, high-level decision-making process to create the essential framework for this complex, multi-jurisdictional TMDL, which will address nutrient and sediment impairments throughout the entire 64,000-square-mile Chesapeake Bay watershed. MDE participated in numerous inter-jurisdictional and inter-agency workgroups and committees over the last three years to provide technical expertise and guidance for developing the Bay TMDL in a manner consistent with the State's water quality goals and responsibilities. In particular, MDE worked to ensure that the Bay TMDL addressed the nutrient and sediment impairments in all of Maryland's tidal waters listed as impaired by those pollutants on the State's Integrated Report of Surface Water Quality.
MDE took the lead on developing an allocation process that will enable the State to meet a key requirement for the Bay TMDL and Maryland's Watershed Implementation Plan: the sub-allocation of major basin loading caps of nutrient and sediment to each of 58 "segment-sheds" in Maryland – the land areas that drain to each impaired Bay water quality segment – and to each pollutant source sector in those areas.

Maryland's Watershed Implementation Plan for the Bay TMDL

Concurrent with the development of the Bay TMDL, EPA charged the Bay watershed states and DC with developing watershed implementation plans to provide adequate "reasonable assurance" that the jurisdictions can and will achieve the nutrient and sediment reductions necessary to implement the TMDL within their respective boundaries. Maryland's Phase I Plan provides a series of proposed strategies that will collectively meet the 2017 target (70% of the total nutrient and sediment reductions needed to meet the final 2020 goals). After more than a year of cooperative work, MDE and the Departments of Natural Resources, Agriculture, and Planning released a Draft Phase I Plan for public review in October 2010 and, following extensive consideration of hundreds of public comments, submitted Maryland's Final Phase I Watershed Implementation Plan to EPA on December 3, 2010.

Maryland's Phase II Plan provides a series of proposed strategies that will collectively meet the 2017 target (60% of the total nutrient and sediment reductions needed to meet the final 2025 goals). This was changed from Phase I due to concerns that implementation was not achievable within the original timeframe. Maryland worked with many partners in local jurisdictions to develop Phase II Watershed Implementation Plans with more detailed reduction targets and specific strategies to further ensure that the water quality goals of the Bay TMDL will be met. See Maryland's Development Support for the Chesapeake Bay Phase II WIP webpage.
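The segment-shed sub-allocation described above can be pictured as dividing a basin-wide loading cap among drainage areas. A minimal sketch, assuming a purely proportional split by a per-shed weight such as a modeled baseline load (the function, numbers, and shed names are illustrative; the actual allocation process used far more detailed watershed modeling):

```python
def sub_allocate(basin_cap_lbs, weights):
    """Split a basin nutrient loading cap across segment-sheds in
    proportion to each shed's weight (e.g., modeled baseline load).
    Returns a dict mapping segment-shed -> allocated cap (lbs/year)."""
    total = sum(weights.values())
    return {shed: basin_cap_lbs * w / total for shed, w in weights.items()}

# Hypothetical example: a 1,000,000 lbs/year nitrogen cap split among
# three sheds whose baseline loads are in a 3:1:1 ratio.
caps = sub_allocate(1_000_000, {"shed_A": 3.0, "shed_B": 1.0, "shed_C": 1.0})
assert round(caps["shed_A"]) == 600_000
assert round(sum(caps.values())) == 1_000_000  # allocations preserve the cap
```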
MDE is continuing to work with its partners in local jurisdictions to implement the Phase II WIP. See the Implementing Maryland's WIP: Making Progress toward Bay Restoration Goals webpage.

Please direct questions or comments concerning this project to Tom Thornton with Maryland's TMDL Program at (410) 537-3656 or email at [email protected].
Points of interest: Ancient Rakatan ruins

Tandun III was originally surveyed approximately 12,293 BBY by Doctor Beramsh, who led an expedition from Ord Mantell. What Beramsh and his team discovered was a lush, beautiful world with close to standard gravity, suitable for both oxygen-breathing humans and humanoids. Further exploration of the planet also revealed ruins of ancient population centers, which Beramsh believed were constructed by the Rakata. Despite the survey report of ideal atmospheric conditions, its distance from the Hydian Way prevented Tandun III from ever being colonized. Historical records indicate that a possible second survey was conducted generations later during Chancellor Finis Valorum's second term.

At some point between 29 BBY and 19 BBY, the Republic Group took an Insignia of Unity that once stood on the podium of the Galactic Senate Rotunda. The group believed the insignia could be used as a symbol to restore Republic honor to the galaxy and brought it to Tandun III for safe-keeping. Enlisting the help of the Antarian Rangers, the Republic Group built a secret storehouse in one of the planet's ancient ruins and made the Stellar Envoy the key to accessing the facility. In 19 BBY the Republic Group and the Antarian Rangers were considered enemies of the Empire, and thus the secret facility and the Insignia of Unity were left abandoned and forgotten for over six decades.

During the Yuuzhan Vong's invasion of the galaxy, the Yuuzhan Vong settled the world and began Vongforming it to meet their needs. The land was transformed into cliffs of yorik coral and tampasis of s'teeni, with populations of scherkil hla, sparkbees, and other Yuuzhan Vong biots. Over the course of twenty years the Vongforming took its toll on the planet.
Intact forests abundant with life still covered the southern hemisphere, while surface temperatures had rendered landmasses in the northern hemisphere unsuitable for all but the most extremophilic sentient species. Extensive regions of extreme volcanic and tectonic activity gripped the planet in catastrophic forces that were likely to destroy it. The sky was filled with powerful winds and icy clouds that produced storms with sheets of rain, lightning, and hailstones. And while the atmosphere was still breathable, dweebit beetles filled it with high concentrations of carbon dioxide, methane, and sulfur.

In 43 ABY the Solos, accompanied by Tobb Jadak and Flitcher Poste, came to Tandun III as part of their quest to uncover the history of the Millennium Falcon. They landed the Falcon inside the abandoned Republic Group warehouse and discovered the Insignia of Unity. It was at that time that Lestra Oxic appeared and revealed that he had followed the Solos to Tandun III in order to claim the emblem for himself. He told all present the history of how the emblem ended up on Tandun III and inspected it, only to realize that the emblem before him was a fake. During the meeting, groundquakes increased in severity and volatility, culminating in the planet's strongest quake. Oxic and his associates, now accompanied by Jadak and Poste, left to continue the search for the true Insignia of Unity, as the Solos themselves fled the planet. Shortly after the Millennium Falcon's return to space, Tandun III flared and erupted in a shock wave that hurled enormous chunks of itself into the vacuum of space.

Behind the scenes

C-3PO stated the first survey of the world took place around 12,293 BBY and that the expedition was launched from Ord Mantell. This is odd, given that Ord Mantell was not colonized until 12,000 BBY. Additionally, C-3PO said Dr. Beramsh believed the ancient population centers were Rakatan in origin.
Yet the Rakata were unknown to the galaxy at large until the end of the Jedi Civil War in 3,956 BBY. He continued to claim that the world was most likely not colonized due to its distance from the Hydian Way, which was pioneered in 3,705 BBY. Thus the reason the world was never colonized in the thousands of years between its exploration and the founding of the Hydian Way remains a mystery.
CURRENT CZECH NAME: Ostroh (also Ostroh Uhersky, Uhersky Ostroh, Ungarisch-Ostra, Ungarisch-Ostroh)

LOCATION: Ostroh is a small town in Moravia, located at latitude 48.59, longitude 17.23, 75 km E of Brno and 9 km ENE of Bzenec.

HISTORY: The earliest known Jewish community in this town dates from 1592. In 1635 there were 22 Jewish houses. In 1671 there were 16 Jewish houses with more than 30 Jewish families, including Isak Schulklopper, Salamon Lateiner, Israel Isak, Mandl, Salamon Chaska, Benesch, Friedrich Kojeteiner, Schmidl, Jekl Fleischhacker, Salamon Mojses, Rabiner, Mojses Stanjetz, Jakob Gutman, Israel Strimpfstricker, and Loebl Isak. From 1798-1848 there were 89 Jewish families. In 1848 the community numbered 478 members, but it dropped to 220 after the First World War. The Jewish population was 70 in 1930. The present town population is 5,000-25,000, with fewer than 10 Jews. Noteworthy historical events involving or affecting the Jewish community were the separation of a Jewish quarter in 1727 and the existence of a self-standing political community from 1890-1920. The old Jewish cemetery was established in the 17th century, with the last known Jewish burial in 1862. The Jewish congregation was Conservative. Birth, death and marriage record books for Ostroh may be located at the Czech State Archives in Prague: Statni ustredni archiv, tr. Milady Horokove 133, CZ-166 21 Praha 6, Czech Republic, tel/fax: +42 (2) 333-20274. Search JewishGen/Internet resources for Ostroh.

NOTABLE RESIDENTS AND DESCENDANTS: According to the entry in the International Association of Jewish Genealogical Societies Cemetery Project database, Chaim Weizmann, president of the State of Israel, once lived in Ostroh. The following rabbis worked in Ostroh: Salomo Mose (1592); David b. Samuel Halevy (1659); Mose b. Hakadosch Elchanan from Fuerth (1655); Jesaja b. Sabatai Scheftel (1659); Joel b.
Samuel from Krakau (1668); Mhrr Pinchas (after 1700); Mhrr Salomo (before 1719); Kolonimos b. mhrr Baruch (from 1720); Loeb Steiniz (d. 1760); Mhrr Pinchas b. mhrr Aaron (1766); Jakob Hirsch b. Mose Loeb (Biach) Feilbogen (1790-1853); Mose Loeb b. mhrr JA ha-Kohen Mueller (d. 1853); Dr. Joel Mueller; Dr. S. Wolfsohn (1876-1878); Dr. Israel Taglicht (1883-1893); Dr. Emanuel Lenke; Dr. D. Herzog (1897-1900); Dr. Simon Friedmann; Dr. Michael Halberstamm (from 1919). Other notables include: Mordechai b. Schalom (community elder and author of the statutes of the chevra kadisha in 1650); Schalom b. Jecheskel (landowner 1668); Mandl Salamon Steinitzer (land deputy 1732); Michl b. R. Sch. David (judge); Mose Singer (judge 1835); Mandl Duschak (judge 1858); Loeb Winter (judge 1860); Jesajas Braun (judge 1864); Sal. Kihn (judge 1876); Salamon Winter (judge 1880-1888); Jonatan Lamberg (judge); Max Kihn (judge 1898); Dr. Eduard Stern (judge 1902); Jehuda Diamand (judge 1903); Sigmund Klein (judge 1909); Loeb Nussbaum; Samuel Kornblueh; Sal. Sommer; Jakob Hahn; Jakob Strauss; Jechiel Gruenbaum.

Rabbi Dr. Moritz Grünwald was born 29 March 1853 in Ostroh. He studied at the Universities of Vienna and Leipzig. He founded the Jüdische Centralblatt in Belovar. In 1883 he became rabbi of Pisek and later Jungbunzlau. He was the chief rabbi of Sofia from 1893 until his death in London on 10 June 1895. The great-great-grandson of Amalie Reif (b. Ostroh), E. Randol Schoenberg, is a frequent contributor to JewishGen's Austria-Czech SIG and the submitter of this page. Tom Beer has submitted an interesting story about his great-grandfather, Beer (b. Ostroh).

CEMETERIES: There are two Jewish cemeteries in Ostroh. The older cemetery location is urban, on flat land, separate but near other cemeteries, and not identified by any sign or marker. It is reached by turning directly off a public road. It is open to all. The cemetery has no wall or fence and there is no gate.
The approximate size of the cemetery, both before WWII and now, is 0.1277 hectares. The cemetery contains no special memorial monuments and no known mass graves. Within the limits of the cemetery there are no structures. The municipality is the present owner of the cemetery property, which is now used for recreation (park, playground, sports field). Adjacent properties are commercial or industrial. The cemetery boundaries have not changed since 1939. Private visitors come rarely to the cemetery. The cemetery was vandalized during World War II and between 1945 and 1981. No maintenance has been done, though there is now occasional clearing or cleaning by authorities. There is a slight threat posed by pollution and proposed nearby development.

The new cemetery is located 1.5 km to the E, on Veselska-Str. This Jewish cemetery was established in 1862. The last known Jewish burials were in the 1950s and 1960s. The cemetery location is urban, on flat land, separate but near other cemeteries, and not identified by any sign or marker. It is reached by crossing the public property of the town cemetery. It is open to all. A continuous masonry wall surrounds the cemetery. There is a gate that does not lock. The approximate size of the cemetery is now 0.2777 hectares; before WWII it was about 0.47 hectares. There are 100-500 stones. The cemetery has no special sections. The tombstones and memorial markers are made of marble, granite and sandstone. The tombstones vary among flat shaped stones, finely smoothed and inscribed stones, flat stones with carved relief decoration, and obelisks. The cemetery has tombstones with bronze decorations or lettering and metal fences around graves. Inscriptions on tombstones are in Hebrew, German and Czech. The cemetery contains special memorial monuments to Holocaust victims. There are no known mass graves. Within the limits of the cemetery are no structures. The present owner of the cemetery property is the local Jewish community of Brno.
The adjacent properties are other cemeteries. The current Jewish cemetery boundaries are smaller than in 1939 because of the town cemetery. Private visitors come occasionally to the cemetery. The cemetery has been vandalized occasionally, mostly between 1981-91. No maintenance had been done until local/municipal authorities and Jewish groups from within the country completed restoration work in 1991; now there is occasional clearing or cleaning by authorities. There is a moderate threat posed by pollution, vegetation and vandalism, and slight threats are posed by uncontrolled access, weather erosion, and existing and proposed nearby development. These surveys were completed on 1.3.1992 by Ing. Arch. Jaroslav Klenovsky, Zebetinska 13, 623 00 Brno.

CONTACTS: Town officials: Magistrate Jiri Chmelar, Mestsky urad, Hradistska 305, 687 24 Uhersky Ostroh, tel. 0632/91116. Regional officials: PhDr. Jana Spathova, Okresni urad, Referat Kultury, 686 01 Uherske Hradiste, tel. 0632/432. Interested parties: Slovacke muzeum, dir. PhDr. Ivo Frolec, Smetanovy sady, 686 01 Uherske Hradiste, tel. 0632/2262. Other sources: Bohumil Gelbkopf, Rybare 198, 687 24 Uhersky Ostroh, Tel. 0.

SOURCES: Gedenkbuch der Untergegangenen Judengemeinden Mährens, Hugo Gold ed. (1974), pp. 116-117; Die Juden und Judengemeinden Mährens in Vergangenheit und Gegenwart, Hugo Gold ed. (1929), pp. 563-570 (pictures); Jiri Fiedler, Jewish Sights of Bohemia and Moravia (1991), pp. 53-54; International Association of Jewish Genealogical Societies Cemetery Project, Czech Republic, Ostroh.

SUBMITTER: E. Randol Schoenberg, 3436 Mandeville Canyon Road, Los Angeles, California 90049-1020 USA. Tel: 1-310-472-3122 (h), 1-213-473-2045 (w). Fax: 1-213-473-2222. Web Page: http://www.schoenberglaw.com
Hop into roo before it's too late

A call for Australians to eat kangaroos to combat climate change might be a case of tuck in now before it's too late, research by an Australian biologist suggests.

Writing in the December edition of Physiological and Biochemical Zoology, Dr Euan Ritchie, of James Cook University in Queensland, says population numbers of the iconic Australian marsupial are under threat from climate change. Ritchie, from the School of Marine and Tropical Biology, says a rise in average temperatures in northern Australia of just 2°C could reduce suitable habitat for kangaroo populations by as much as 50%.

His findings follow a recent call by economist Professor Ross Garnaut, of the Australian National University, for Australians to help fight climate change by swapping their beef-eating habits for a taste of Skippy. Ritchie, who admits to being a committed roo eater, says his findings should not deter people from kangaroo steaks and may even help the animal survive. "The species [of kangaroo] currently being harvested are very well monitored," he says. "So it means we will pick up differences [in range and population] very quickly and will be in a position to respond to that."

According to the study, the kangaroo species under greatest threat is the antilopine wallaroo. Ritchie says it is more vulnerable because it has a very defined range across the tropical savannas of far northern Australia, from Cape York in Queensland across to the Kimberleys of Western Australia. Using climate change computer modelling, Ritchie and co-author Elizabeth Bolitho, also of James Cook University, found the 2°C temperature increase predicted by 2030 would shrink the antilopine's range by 89%. A 6°C increase, the upper end of temperature increase predictions to 2070, may lead to their extinction if they are unable to adapt to the resulting arid environment, Ritchie says.
He says the main threat of climate change is not to the kangaroo itself, but to the habitat that sustains its populations. Among the impacts that will affect their geographic range are the increased prevalence of fires and changes to vegetation and the availability of water. He says a 0.4°C increase would reduce the distribution of all species of kangaroos and wallaroos by 9%, while an increase of 2°C would reduce the geographic range of the kangaroos by as much as 50%.

Weathering the changes

However, the news is not all bad. In contrast to the antilopine, Ritchie says the eastern grey kangaroo is in a strong position to weather climate changes because of its predominance on the cooler eastern seaboard of Australia. And he says the red kangaroo and common wallaroo are better adapted to sustain hotter climates.

Professor Lesley Hughes, of the Climate Change Ecology Group at Macquarie University in Sydney, backs Ritchie's findings. "Virtually every time we do bioclimatic modelling you get this result [of species under threat]," she says. However, she says few studies "go up to 6°C" because "the more you extrapolate into the future the more doubt you have". Hughes adds, however, that a 6°C rise in temperature would "wipe out" most native Australian species.
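The reported results relate warming scenarios to habitat loss at only a few discrete points (0.4°C → 9% and 2°C → as much as 50% across all species, 89% for the antilopine wallaroo). As an illustrative sketch of how one might interpolate between such published points (the function and the linearity assumption are mine, not the study's actual bioclimatic model):

```python
def interp_range_loss(delta_t, points):
    """Piecewise-linear interpolation of percent habitat loss between
    published (warming in °C, loss in %) data points; values outside
    the published range are clamped to the nearest endpoint."""
    points = sorted(points)
    if delta_t <= points[0][0]:
        return points[0][1]
    for (t0, l0), (t1, l1) in zip(points, points[1:]):
        if delta_t <= t1:
            return l0 + (l1 - l0) * (delta_t - t0) / (t1 - t0)
    return points[-1][1]

# The two all-species data points reported in the article:
all_species = [(0.4, 9.0), (2.0, 50.0)]
assert interp_range_loss(0.4, all_species) == 9.0
assert interp_range_loss(2.0, all_species) == 50.0
# A 1.2°C rise falls midway between the two reported losses:
assert abs(interp_range_loss(1.2, all_species) - 29.5) < 1e-9
```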
Deep in a North Carolina marsh, a lone swamp sparrow sits on his perch in the middle of the water. He's singing his usual song. But he's also aggressively flapping one wing, trying to incite a nearby male into action. Onlookers are watching – just to see what happens.

However, this is no ordinary territorial scuffle. This is bird research. The sparrow on the perch is a robot, and the chief, hip wader-clad onlooker – who is also in control of the robot's movements – is Steve Nowicki, Ph.D., a biology, psychology, and neurobiology professor at Duke University. He's testing whether the wing flap will actually prompt a fight.

According to Nowicki, birdsong and signaling have a surprisingly close relationship with human speech. "It's an unexpected and remarkable model for human speech control, development and perception," he said. "Birds also learn their songs in much the same way humans learn to speak, and that's an unusual trait. They have to learn their language from their parents."

His research, though, isn't about merely studying how birds behave and communicate. He and his team watch signals and behaviors; they run simulations and analyze hormones; they record neurons and assemble protein sets. They're deciphering how birds promote their survival and reproductive success. In short – they're studying evolution, past and present.

Nowicki, who is also dean and vice provost of undergraduate education, was almost the bird researcher who wasn't. As a student at Tufts University in Boston, he was a declared music major. Late in his collegiate career, he discovered a love of biology – particularly the brain and behavior – and raced to complete a major in the subject. He then pursued his graduate degree in neurobiology at Cornell University. It was there he was first introduced to the siren song of birds.

When it comes to communicating, birds have far less to say than humans. But they express themselves in equally complex ways, Nowicki said.
"Humans use complicated signal communication, and we use an array of sounds to create words that have rich meanings," he said. "When you look at sparrow songs – the number of notes per second and the frequency – it's just as complicated as human speech. They're just not saying much." All the same, they're getting their points across.

Songs, signals and responses

In addition to the aggressive response the swamp sparrow's wing flap provokes, the absence or introduction of a song or even a physical attribute can prompt birds to behave differently, Nowicki said. Birds, like most animals, are territorial and will, in most cases, defend their turf. But how will neighboring birds respond if a battle ensues? Will they come to help or avoid the fight? Will they treat the male differently if he loses to the interloper?

Researchers can test this reaction, Nowicki said, by removing a bird from its environment, playing a recording of another male's song, and then reintroducing the bird to see how the others respond. "It's interesting to see what happens, because no one wants a floating male in the neighborhood," he said. "Research has shown that with some birds, peer birds are more wary of the winner, but they might also try to encroach on a loser's territory."

And, just as with other species, birds can use their physical attributes to signal to and communicate with each other. For example, a trait such as the bright red neck and throat commonly seen in the male house finch can broadcast a bird's prowess or superior qualities. The red-throated male finch does attract more females, Nowicki said, but it isn't because of the color itself. The pigment comes from a carotenoid-rich diet that gives these males a stronger immune system, making them better mates. Male song sparrows use their song repertoire in much the same way. The more songs they learn and exhibit, the more attractive they are to females.
The reason, Nowicki said, is that birds with larger song selections appear to be smarter. They simply learn songs faster. "Males who sing better have better developed brains, and in theory that makes them better mates," he said. "We're still working out why having a better brain for learning song is better for the female, but it's clear females prefer these males as their mates."

Impact on human activity

Understanding the role and importance of birdsong and signaling doesn't shed much light on the evolution of human communication, but knowing what songs and signals mean to birds can directly affect human choices and behavior. For example, researchers have evidence that stress directly affects a bird's ability to develop song, which can ultimately impact pair bonding and mating. If scientists study the way birds living in both polluted and pristine environments sing, the data could play a role in accurately evaluating ecosystem health.

This knowledge also can impact wildlife preservation efforts. It isn't enough to allocate a certain amount of space to a population based only on the number of animals surveyed; there are often other factors at work, Nowicki said. In the case of the small warbler known as the ovenbird, it's important to know that females won't be settling in an area with fewer than 10 males. This type of information can significantly alter conservation efforts, he said.

Regardless of how the research of birdsong is used, Nowicki said, his work constantly reminds him of how intertwined birds and music are with our surroundings. "I keep coming back to birdsong not simply because it's a good model," Nowicki said. "When I wake up in the morning and hear birds singing, it's part of the wonderful aesthetic world we live in, and my job to learn more about it is a privilege."
The American Meteorological Society (AMS) promotes the development and dissemination of information and education on the atmospheric and related oceanic and hydrologic sciences and the advancement of their professional applications. Founded in 1919, the American Meteorological Society has a membership of more than 14,000 professionals, professors, students, and weather enthusiasts. Some members have attained the designation "Certified Consulting Meteorologist" (CCM), and many of these have expertise in the applied meteorology discipline of atmospheric dispersion modeling. To the general public, however, the AMS is best known for the "Seal of Approval" it grants to television and radio meteorologists. The AMS publishes nine atmospheric and related oceanic and hydrologic journals (in print and online), issues position statements on scientific topics that fall within the scope of its expertise, sponsors more than 12 conferences annually, and offers numerous programs and services. There is also an extensive network of local chapters. The AMS headquarters are located in Boston, Massachusetts.
The headquarters building was built by the famous Boston architect Charles Bulfinch as the third Harrison Gray Otis House in 1806; it was purchased and renovated by the AMS in 1958, with staff moving into the building in 1960. The AMS also maintains an office in Washington, D.C., at 1120 G Street NW.

Seal of Approval

The AMS Seal of Approval program was established in 1957 as a means of recognizing television and radio weather forecasters who deliver informative, well-communicated, and scientifically sound weather broadcasts. The awarding of a Seal of Approval is based on a demonstration tape submitted by the applicant, after payment of an application fee, to six members of a review panel. Although a formal degree in meteorology is not a requirement for the original Seal of Approval, applicants must first have completed at least 20 core college credits of meteorological coursework, including hydrology, basic meteorology, and thermodynamic meteorology, ensuring that the forecaster has at least a minimal education in the field. There is no minimum amount of experience required, but previous experience in weather forecasting and broadcasting is suggested before applying.
It is worth noting that many broadcasters who have obtained the Seal of Approval do in fact have formal degrees in meteorology or related sciences and/or certifications from accredited university programs. Upon meeting the core requirements, holding the seal, and working in the field for three years, a broadcaster may be referred to as a Meteorologist in the broadcast community. As of February 2007, more than 1,600 Seals of Approval had been granted, of which more than 700 are considered "active." Seals become inactive when a sealholder's membership renewal and annual seal fees are not paid. The original Seal of Approval program will be phased out at the end of 2008. Current applicants may apply for either the original Seal of Approval or the Certified Broadcast Meteorologist (CBM) Seal until December 31, 2008; after that date, only the CBM Seal will be offered. Current sealholders retain the right to use their seal in 2009 and onward, but new applications for the original Seal of Approval will not be accepted after December 31, 2008. Note: the NWA Seal of Approval is issued by the National Weather Association and is independent of the AMS.

Certified Broadcast Meteorologist (CBM) Seal

The original Seal of Approval program was revamped in January 2005 with the introduction of the Certified Broadcast Meteorologist (CBM) Seal. This seal introduced a 100-question, multiple-choice, closed-book examination as part of the evaluation process. The questions on the exam cover many aspects of the science of meteorology, forecasting, and related principles. Applicants must answer at least 75 of the questions correctly before being awarded the CBM Seal.
Persons who obtained or applied for the original Seal of Approval before December 31, 2004 and were not rejected are eligible to upgrade their Seal of Approval to the CBM Seal upon successful completion of the CBM exam and payment of applicable fees. Upgrading from the original Seal of Approval is not required. New applicants for the CBM Seal must pay the application fee, pass the exam, and then submit demonstration tapes to the review board before being considered. While original sealholders do not need a degree in meteorology or a related field of study to be upgraded, brand-new applicants for the CBM Seal must have such a degree to be considered. In order to keep either the CBM Seal or the original Seal of Approval, sealholders must pay all annual dues and show proof of completing certain professional development activities every five years (such as educational presentations at schools, involvement in local AMS chapter events, attendance at weather conferences, and the like). As of February 2007, nearly 200 CBM Seals have been awarded to broadcast weather forecasters, either upgraded from the original Seal of Approval or granted to new applicants.

The American Meteorological Society offers several awards in the fields of meteorology and oceanography.

Atmospheric Research Awards Committee
- The Carl-Gustaf Rossby Research Medal (the society's highest award for atmospheric science)
- The Jule G. Charney Award
- The Verner E. Suomi Award
- The Remote Sensing Prize
- The Clarence Leroy Meisinger Award
- The Henry G. Houghton Award

Oceanographic Research Awards Committee
- The Sverdrup Gold Medal (for outstanding contributions to the scientific knowledge of interactions between the oceans and the atmosphere)
- The Henry Stommel Research Award (for outstanding contributions to understanding the dynamics and physics of the ocean)
- The Verner E. Suomi Award
- The Nicholas P. Fofonoff Award

The American Meteorological Society publishes the following scientific journals:
- Bulletin of the American Meteorological Society (the official organ of the society)
- Journal of the Atmospheric Sciences
- Journal of Applied Meteorology and Climatology
- Journal of Physical Oceanography (established in January 1971 and available on the web since 1996)
- Monthly Weather Review
- Journal of Atmospheric and Oceanic Technology
- Weather and Forecasting
- Journal of Climate
- Journal of Hydrometeorology
- Weather, Climate, and Society (new journal, to start in 2009)
- Earth Interactions (published jointly with the American Geophysical Union and the Association of American Geographers)
- Meteorological Monographs
The American Meteorological Society produces the following scientific databases:
- Meteorological and Geoastrophysical Abstracts

As a means of promoting "the development and dissemination of information and education on the atmospheric and related oceanic and hydrologic sciences and the advancement of their professional applications", the AMS periodically publishes policy statements on issues within its competence, on subjects such as drought, ozone depletion, and acid deposition. In 2003, the AMS issued the position statement Climate Change Research: Issues for the Atmospheric and Related Sciences, which states: "Human activities have become a major source of environmental change. Of great urgency are the climate consequences of the increasing atmospheric abundance of greenhouse gases... Because greenhouse gases continue to increase, we are, in effect, conducting a global climate experiment, neither planned nor controlled, the results of which may present unprecedented challenges to our wisdom and foresight as well as have significant impacts on our natural and societal systems."
- The Maury Project (a comprehensive national program of teacher enhancement based on studies of the physical foundations of oceanography)
To be sustainable, old cities need new, smarter infrastructures, says HP Labs sustainability visionary Chandrakant Patel.

Since arriving at HP Labs in 1991, HP Fellow and director of HP’s Sustainable IT Ecosystem Lab Chandrakant Patel has worked to make IT systems more energy efficient. His early research in microprocessor system design led Patel to pioneer the concept of ‘smart data centers’ – data centers in which compute, power, and cooling resources are provisioned based on need. He now extends his vision of energy efficiency beyond the data center to what he calls ‘City 2.0.’ As nations look to rebuild their aging infrastructures and at the same time take on the challenge of global climate change, Patel argues that resource usage needs to be at the heart of their thinking, and that we must take a fundamental perspective in examining “available energy” in building and operating the infrastructure. Only if we use fewer resources to both build and run our infrastructures, he says, will we create cities that can thrive for generations to come. And we can only build in that way, he suggests, if we seamlessly integrate IT into the physical infrastructure to provision resources – power, water, waste, and so on – at city scale, based on need. Chandrakant Patel recently described his vision of building City 2.0, enabled by a sustainable IT ecosystem.

So you started out by addressing energy use in the data center?

That’s right. When we created the Thermal Technology Research Program at HP Labs in the early 90s, our industry was not addressing power and cooling in the data center at all. But we thought the data center should be looked at as a system. And if you look at it that way, there are three key components to the data center: computing, power, and cooling. We felt all of these should be provisioned based on need. Just as you dedicate the right computing instrument to the workload, you supply the power and cooling on an as-needed basis.
You use sensors and controls, so that when workload comes in, you decide what kind of workload it is and give it the right level of compute, power, and cooling.

What kind of impact does this have on energy use?

Well, we built a “smart” data center in Palo Alto and a large data center in Southern India as a proof of concept. In the data center in Southern India, we used 7,500 sensors to record the temperature of its various parts, which feed back to a system that automatically controls all the air conditioners. In addition to saving 40% of the energy used by the cooling system, the fine-grained sensing allowed us to dynamically place workloads and shut down machines that were not being used. Furthermore, with 7,500 sensors polling every few seconds, we are able to mine sensor data to detect “anomalies” so we can extend the life of large-scale physical systems such as compressors in the cooling plant. This type of sensing and control is critical for large-scale physical installations. One wouldn’t run a house without a thermostat, so why should one run a multi-megawatt data center without fine-grained measurement and control? A ceiling fan in a house uses a few hundred watts, and it has a knob so one can change its speed based on need. The blowers in air handling units inside a data center use 10 kilowatts, and are often running at full speed all the time regardless of the data center’s needs!

How do you apply this kind of approach over the entire IT ecosystem?

First, you need to ask: what is the ecosystem? The world has billions of service-oriented client devices, like our laptops and handhelds. Then it has thousands of data centers, and thousands of print factories. That’s the ecosystem. Then you need to ask if that ecosystem is as energy efficient as it can be. To do that, we take a life-cycle approach. We look at the energy it takes to build and operate IT products over their life cycle.
If you do that, you can see that you might design, build, and operate them in completely different ways – through appropriate choices of energy conversion means and materials – ultimately leading to least-energy, least-material designs. Indeed, we believe that taking such an “end to end” view in design and management is required to reduce the cost of the IT services that will enable billions of people to use the IT ecosystem to meet their needs.

Can you give an example?

Take a laptop. How much energy is required to build a laptop – to extract the materials, to manufacture it, operate it, and ultimately reclaim it? Using joules of available energy consumed as the currency, one can examine the supply chain and design the laptop with an appropriate choice of materials to minimize the consumption of available energy. Such a technique also allows one to examine the carbon emissions across a product life cycle. This type of proactive approach is good for the environment and good for business. Good for business because, in our opinion, such an approach will lead to the lowest-cost products and services.

Is there an impact on IT services too?

Absolutely. Today, I can reserve train tickets online for rail travel in India from my home in the US. But most of the 700 million people in India must take a motorized rickshaw to the train station, and spend half a day, to get a ticket. They can ill afford to spend the time. Couldn't we give them appropriately priced IT services so they can do it online? That's what Web 2.0 is about for me – meeting the fundamental needs of a society. Furthermore, these kinds of services would reduce congestion and reduce consumption of available energy. We can ask – and we need to ask – the same kinds of questions when we are talking about bringing people all kinds of resources more effectively.

How do you get the information you need to make decisions based on energy used over the life of a product?
Firstly, at design time, the IT ecosystem enables us to create analysis tools based on scientific principles rather than anecdotes and rules of thumb. Secondly, the IT ecosystem gives us access to energy and material data for life-cycle analysis in the design phase – for example, the available energy used in extracting aluminum from bauxite. Next, during operation, you use sensors and controls to manage your resources. Take traffic flow in a city. All you need to manage it is a backbone, the sensors, the data center, and a panel where you can collect all that information and manage it. With that we can manage the flow so that available energy is being provisioned based on need. You can do the same with electricity, water, waste, and so on. Thus, you are using the IT ecosystem to have a net positive impact by deconstructing conventional business models – you're creating a sustainable ecosystem using IT.

Is that what you mean by City 2.0?

Yes. We started the Sustainable IT Ecosystem Lab at HP Labs because we wanted to integrate the IT ecosystem into the next generation of cities – what I've called City 2.0. If you had to build a city all over again, how would you build it? Are you going to just build a city with more roads, more bridges? Or are you going to use the IT ecosystem so that more people can use less of those physical resources more effectively? Wouldn't you think it would be better if a data center were there, and it managed all the resources? Wouldn't it be better to harvest the rain that falls in the area and have a lot of local reservoirs? Wouldn't it be good to have a local power grid instead of bringing power in from somewhere else? Those are the kinds of questions we are wrestling with.

How can HP contribute to building City 2.0?

HP has the breadth and the depth – the billions of service-oriented client devices, the thousands of data centers, and the thousands of print factories. HP covers all aspects of the IT ecosystem.
And we have a great history in measurement, communication, and computation. What I’d like to see us do is leverage the past to create the future: a future where we address the fundamental needs of society by right-provisioning resources so that future generations can have the same quality of life we do.

The US and many other countries are in recession. Building City 2.0 is an expensive proposition, so why is it worth doing?

First of all, I think building a smart infrastructure could revitalize our economy by giving businesses the opportunity to apply their new technologies to solving age-old problems like water distribution and energy management. And secondly, if governments around the world are going to spend on infrastructure, we probably want to do it in a smart way: not just building things for the sake of building them. We can – and should – do it in a planned, sustainable way where we also create new, high-paying, and long-lasting jobs.

More information about HP Labs is available at: www.hpl.hp.com/about/
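Patel's idea of provisioning cooling "based on the need" can be illustrated with a toy control loop. This is a minimal sketch, not HP's actual system: the zone names, setpoint, and the linear fan-speed mapping are all invented for illustration.

```python
# Toy sketch of need-based cooling control: each zone's fan speed is set
# from its sensed temperature instead of running every blower flat out.
# All zone names and thresholds here are illustrative assumptions.

SETPOINT_C = 25.0  # target inlet temperature (illustrative)
MIN_FAN = 0.2      # idle fraction of full speed
MAX_FAN = 1.0      # full speed

def fan_speed(temp_c: float) -> float:
    """Proportional control: speed rises with overshoot above the
    setpoint, mapping a 0-10 degree C overshoot onto MIN_FAN..MAX_FAN."""
    overshoot = temp_c - SETPOINT_C
    if overshoot <= 0:
        return MIN_FAN
    return min(MAX_FAN, MIN_FAN + (MAX_FAN - MIN_FAN) * overshoot / 10.0)

# Hypothetical sensor readings, standing in for thousands of real sensors.
readings = {"rack-a": 24.0, "rack-b": 31.5, "rack-c": 36.0}
speeds = {zone: fan_speed(t) for zone, t in readings.items()}
print(speeds)  # rack-a idles, rack-b runs partially, rack-c at full speed
```

A real deployment would add hysteresis, anomaly detection over the sensor stream, and coordination across air handlers, but the principle is the same: supply cooling in proportion to measured need.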
In 1929, the wild financial speculation of the Roaring Twenties came to a sudden halt in October when the stock market began to slide.

Banker's Committee Stops Panic of '29
A new story by Jeff Provine

Worries spread through the economic community about the passing of the Smoot-Hawley Tariff Act. Tariffs had always been a point of contention among Americans, even spurring South Carolina to threaten secession over the Tariff Act of 1828. Producers such as farmers and manufacturers called for protective tariffs while merchants and consumers demanded low prices. The American economy soared while post-war Europe rebuilt in the '20s, and the Tariff Act of 1922 skimmed valuable revenue from the nation's income that would otherwise have been needed as taxes. The country barely noticed, and the economy surged forward as new technological luxuries became available along with new disposable income. Meanwhile, however, the nation faced an increasingly difficult drought while food prices continued to drop during Europe's recovery. Farmers were stretched thinner and thinner, prompting calls for protective agricultural tariffs and cheaper manufactured goods. In his 1928 presidential campaign, Herbert Hoover promised just that, and as the legislature met in 1929, talks on a new tariff began. Led by Senator Reed Smoot (R-Utah) and Representative Willis C. Hawley (R-Oregon), the bill quickly became more than Hoover and the farmers had bargained for, as rates would increase to a level exceeding 1828 for industrial products as well as agricultural ones. The revenue would be a great boon, but it unnerved economists, who wondered if it could kill the economic growth already slowed by a dipping real estate market. The weakened nerves shifted from economists to investors, who took the heated debate in the Senate as a clue that times might become rough and decided to get out of the stock market while they could.
Prices had skyrocketed over the course of the '20s as the middle class blossomed and minor investors came into being. Another hallmark of the '20s, credit, enabled people to buy stock on margin, borrowing money they could invest at what they hoped would be a higher percentage. The idea of a "money-making machine" spread, and August of 1929 showed more than $8.5 billion in loans, more than all of the money in circulation in the United States. The market peaked on September 3 at 381.17 and then began a downward correction. At the rebound in late October, panicked selling began. On October 24, what became known as "Black Thursday", the market fell more than ten percent. On Friday, it did the same, and the initial outlook for the next week was dire. Amid the early selling in October, financiers noted that a crash was coming and met on October 24 while the market plummeted. The heads of firms and banks such as Chase, Morgan, and the National City Bank of New York collaborated and finally placed vice-president of the New York Stock Exchange Richard Whitney in charge of stopping the disaster. Forty-one-year-old Whitney was a successful financier with an American family dating back to 1630 and numerous connections in the banking world who had purchased a seat on the NYSE Board of Governors only two years after starting his own firm. Whitney's initial strategy was to replicate the cure for the Panic of 1907: purchasing large amounts of valuable stock above market price, starting with the "blue chip" favorite U.S. Steel, the world's first billion-dollar corporation. On his way to make the purchase, however, Whitney bumped into a junior who was analyzing the banking futures based on the increase of failing mortgages from failing farms and a weakening real estate market. He suggested that the problems of the new market were caused from the bottom-up, and a top-down solution would only put off the inevitable. 
Instead of his ostentatious show of purchasing to show the public money was still to be had, Whitney decided to use the massive banking resources behind him to support the falling. He made key purchases late on the 24th, and then his staff worked through the night determining what stocks were needlessly inflated, what were solid, and what could be salvaged (perhaps even at a profit). Stocks continued to tumble that Friday, but by Monday thanks to word-of-mouth and glowing press from newspapers and the new radio broadcasts, Tuesday ended with a slight upturn in the market of .02 percent. Numerically unimportant, the recovery of public support was the key success. With the initial battle won, Whitney spearheaded a plan to salvage the rest of the crisis as real estate continued to fall and banks (which were quickly running out of funds as they seized more and more of the market) would soon have piles of worthless mortgaged homes and farms. Banks organized themselves around the Federal Reserve, founded in 1913 after a series of smaller panics and determined rules that would keep banks afloat. Further money came from lucrative deals with the wealthiest men in the country such as John D. Rockefeller, Henry Ford, and the Mellons of Pittsburgh. Businesses managed to continue work despite down-turning sales through loans, though the unemployment rate did increase from 3 to 5 percent over the winter. The final matter was the question of international trade. As the Smoot-Hawley Tariff Act continued in the Senate, economists predicted retaliatory tariffs from other countries to kill American exports, but Washington turned a deaf ear. Whitney decided to protect his investments in propping up the economy by investing with campaign contributions. Democrats took the majority as the Republicans fell to Whitney's use of the press to blame the woes of the economy on Congressional "airheads". 
Representative Hawley himself lost his seat in the House, which he had held since 1907, to Democrat William Delzell. President Hoover, a millionaire businessman before entering politics, noted the shift, but remained quiet and dutifully vetoed the new tariff. By 1931, it became steadily obvious that America had shifted to an oligarchy. The banks propped up the market and were propped up themselves by a handful of millionaires. If Rockefeller wanted, he could single-handedly pull his money and collapse the whole of the American nation. Whitney took greater power as Chairman of the Federal Reserve, whose new role controlled indirectly everything of economic and political worth. As the Thirties dragged on, the havoc of the Dust Bowl made food prices increase while simultaneously weakening the farming class, and Whitney gained further power by ousting Secretary of Agriculture Arthur Hyde and installing his own man as a condition for Hoover's reelection in '32. Chairman Whitney would "rule" the United States, wielding public relations power and charisma to give Americans a strong sense of national emergency and patriotism during times like the Japanese War in '35 (which secured new markets in East Asia) and the European Expedition in '39. He employed the Red Scare to keep down ideas of insurrection and used the FBI as a secret police, but his ultimate power would be that, at any point, he could tamper with interest rates or stock and property value, and the country would spiral into rampant unemployment and depression, dragging the rest of the world with it.
BP still has to pay the government for that little slip-up that happened last year. The Clean Water Act imposes punitive damages for any act of pollution carried out in US waters, with fines proportional to the magnitude of the environmental impact. For oil spills, damages are calculated according to the amount of hydrocarbons leaked into the water column. The Macondo well hemorrhaged oil for 89 days. That's a whole lotta fine dollaz. Damages look set to total between $5.4 billion and $21 billion, depending on the government's final estimate of oil spilled. But Congress has to decide how that money gets spent – and thankfully, the House and Senate are trying to move quickly to pass the bill. The fine money is tentatively set to be divided among five areas:

- 35 percent would be divided equally among the five Gulf states for economic and ecological recovery activities along the coast.
- 30 percent would be dedicated to the development and implementation of a comprehensive restoration plan. A new Gulf Coast Restoration Council – made up of representatives from all five states – would dictate the scope of that plan.
- 30 percent would be disbursed by the council to Gulf Coast states, with the allocation dictated in part by spill impact.
- 5 percent would go to a new long-term science and fisheries endowment and to a Gulf Coast research, science, and technology program.

Last October, President Barack Obama established the Gulf Coast Ecosystem Restoration Task Force to come up with a plan not only to restore ecosystems damaged by the spill but also to repair decades of past damage done by efforts to reengineer the Mississippi River and expand oil and gas drilling in gulf marshes. The task force, made up of senior federal officials and representatives from the five Gulf Coast states of Alabama, Florida, Louisiana, Mississippi, and Texas, held a series of public meetings over the past year and sought input from a wide range of scientists.
The result is a set of sweeping but relatively general recommendations aimed at bolstering both the science and the political support needed to tackle some highly complex restoration challenges. It sets out four major goals—restoring and conserving habitat, restoring water quality, replenishing and protecting living coastal and marine resources, and enhancing community resilience—along with 19 “major actions” needed to accomplish the goals. The need for better science gets plenty of ink—including calls for more comprehensive gulf monitoring and data-collection systems. But the report also notes that “the dire state of many elements of the Gulf ecosystem cannot wait for scientific certainty and demand immediate action.” To avoid delays, the panel proposes an “adaptive management” process of “learning by doing, wherein flexibility is built into projects, and actions can be changed based on” new science and progress toward goals.
Paradise in Peril: Chilkoot's Brown Bears
by Lincoln Larson, March 10, 2004

The Chilkoot Brown Bear Project

I uttered the habitual salutation as I began the trek down from my isolated research cabin to the banks of Southeast Alaska's Chilkoot River. Although I saw no evidence of recent bear activity through the light of my headlamp, I had no intention of startling a 700-pound sow with cubs in the predawn of that cold, rainy September day. At 5:00 AM, I was the only human being walking along the river.

As I approached my observation post, I heard splashing nearby. Straining my eyes in the darkness, I struggled to find the source of the commotion. Through my binoculars, I could just make out the silhouette of a subadult grizzly devouring salmon carcasses on the river's far bank, 60 meters away. At that distance, I was in no danger.

Suddenly I froze. A large, dark form shot into my field of view, no more than 10 meters away. Even in the dark, I immediately recognized the shape of a large female brown bear. As I slowly and quietly backed away, two cubs ran up the bank and onto the road where I was standing. I managed to move behind the trunk of a spruce tree. My trembling fingers clutched the canister of bear spray fastened to my belt, a last resort in case of a grizzly charge. The bears were obviously aware of my presence, but upwind, and in the dark, they were unable to locate my exact position. After over a minute of intense sniffing and scanning, the large sow eventually decided to saunter off downriver. The cubs bounded off behind her, and I breathed a heavy sigh of relief. I had emerged unscathed and transformed after my closest grizzly encounter while working on the Chilkoot Brown Bear Project.

[Photo: Chilkoot Study Area] [Photo: Inquisitive Subadult Bear]

Studying the Chilkoot Bears

The Chilkoot River flows about one mile out of Chilkoot Lake and into Lutak Sound, 10 miles north of Haines, Alaska.
Every year, four different species of salmon (pink, sockeye, chum and coho) return to the river to spawn. The largest run occurs in September, when over 100,000 pinks and sockeyes (see note below) enter the icy waters to breed. The abundance of salmon attracts many predators to the Chilkoot each fall, including bald eagles and grizzly bears. Hundreds of fishermen and tourists also follow the salmon upstream to capitalize on the extraordinary angling and wildlife-viewing opportunities. This unique situation, a wild Alaskan river with easy road access, raises multiple conservation issues centered on the preservation of Chilkoot's grizzly bears.

Taking advantage of the ideal setting, the Brown Bear Project focuses on bear foraging ecology and habitat use patterns in relation to the river's human activity. For two months beginning in mid-August, the research team observed and recorded bear and human behavior along the river corridor. We were stationed at designated points along the river for three-hour shifts at sunrise and sunset (mixed with the occasional midday observation). Though spectators avoided the river on cold, rainy days, adverse weather conditions did not deter the bears or the researchers. We logged many hours huddled in freezing rain, inspired by the rapture of field biology and our majestic surroundings. The observation posts were concealed and did not obstruct bear access routes to the river. As a result, both tourists and bruins were rarely aware of our presence. Using video cameras, binoculars, and tape recorders, we documented human and bear activity throughout the three-hour periods.

From August to October, we viewed 16 different bears (not counting cubs) foraging along the Chilkoot River. Each of these bears was either an adult female (some with litters of up to four cubs) or a subadult (age 2-3 years). The large males presumably stayed deeper in the mountains, farther from the threats posed by hunters and civilization.
During the project, I became very familiar with all the individual bears and their idiosyncrasies. The grizzlies displayed a variety of different fishing techniques. While many of the subadults ran, jumped, and futilely flailed at fish in the swiftly moving water, the older bears had clearly refined their skills. One mother preferred snorkeling for salmon, another opted to wait patiently before plunging on unfortunate passersby, and another had mastered a herding technique, chasing fish into pools with no outlets. The cubs enthusiastically attempted to imitate their mothers' tactics, but experienced little success. They often left the water and frolicked on the riverbanks, anxiously awaiting the delivery of their next meal.

As the season progressed, the bears began to consume fish carcasses at a higher rate. Live fish offer a richer energy content but, with hibernation looming, bears desperate to pack on the pounds seemed to prefer quantity instead of quality. Individual bears developed certain routines that made their spatial and temporal habitat use patterns very predictable. One adult female emerged from the same spot in the forest at exactly the same time (virtually down to the minute) for five consecutive days, constantly fishing the same segment of the river. With many bears in a relatively small area (up to seven bears were sometimes visible along a 100-meter stretch of river), confrontations and chases between grizzlies unwilling to share fishing spots often occurred. Sows with cubs dominated the river's feeding hierarchy, and smaller subadults were frequently forced to retreat. All the bears faced one common obstacle, however: human disturbances.

NOTE: Fish counts are conducted at the Weir, a man-made structure composed of a series of closely spaced bars designed to block fish movement upstream while permitting the free flow of water. A few bars are lifted several times a day, creating a small opening through which fish can pass.
Fish & Game officials count the salmon as they swim by. After spawning, the salmon die and their bodies float downstream. Bears often congregate at the Weir early in the morning to pick the fish carcasses off the bars.

[Photo: Fishing Along the Chilkoot] [Photo: Subadult Feeding on Carcasses]

The Impacts of Human Activity

Evidence indicates that human activity greatly influences brown bear activity. Bears were most active in the extreme early morning and late evening, corresponding to the minimum in angler and vehicle densities. Bears in more remote locations, such as Alaska's Katmai National Park, prefer to fish during the daylight hours, when live fish capture rates are much higher. The Chilkoot grizzlies, however, choose to forage in the dark to eliminate human disturbances. With humans in the vicinity, the bears experienced decreased fishing success and often resorted to eating fish carcasses while maintaining constant vigilance. In some instances, vehicle traffic along the road was so heavy that bears were denied access to the river.

Sows with cubs generally showed less tolerance for humans, possibly due to the recent shootings of several young bears that some members of the local community perceived as threats. In most cases, these cubs had discovered garbage that had not been properly disposed of, and they began to associate humans with food. Food-conditioned bears are reluctant to exploit the valuable salmon resource of the river, electing to scavenge trash cans and fishermen's coolers in search of an easier meal.

As people from around the world flock to the Chilkoot River each fall to witness the amazing bear-feeding spectacle, the number and intensity of bear-human interactions will continue to grow. Men and bears are capable of coexistence, but the volatile situation along the Chilkoot demonstrates that proper management techniques are necessary to ensure a relationship beneficial to both species.
We must give the animals some space in order to encourage and appreciate their natural behavior. As my experience with the Chilkoot brown bears confirms, the common perception of grizzlies as menacing monsters and man-killers is completely unwarranted. While the bears certainly offer an imposing, commanding presence, they are generally benign, intelligent creatures that should be revered, not feared.

The Chilkoot River system provides an excellent case study for wildlife management techniques around Alaska's salmon streams. If we can understand the effects of human habitat use and recreation in this river ecosystem, we can begin to develop strategies for protecting bears in other areas.

Special thanks to Lori and Anthony Crupi, founder and director of the Chilkoot Brown Bear Project.

About the author: Lincoln Larson is a recent graduate of Duke University (Durham, North Carolina) and an aspiring field biologist.
I love the fall and how the leaves change from deep greens to reds and orange and gold. This natural riot of color takes place wherever there are trees with leaves, and there’s almost no place better to watch the leaves change than in the Northeast. This part of the four-seasoned ritual of life attracts tourists from far and wide and tugs at me to make a special trip to our home in the mountains there. And this reminds me every year about the natural changes that are a constant in our lives.

Ever wonder why and how the leaves change colors? As summer ends and autumn comes, the days get shorter and shorter. This is how the trees "know" to begin getting ready for winter. The trees will begin to rest and live off the food they stored during the summer. The green chlorophyll disappears from the leaves. As the bright green fades away, we begin to see yellow and orange colors. Small amounts of these colors have been in the leaves all along - we didn’t see them in the summer because they were covered up by the green chlorophyll. The bright reds and purples we see in leaves are made mostly in the fall. In some trees, like maples, glucose is trapped in the leaves after photosynthesis stops. Sunlight and the cool nights of autumn cause the leaves to turn this glucose into a red color. It’s the combination of all these things that makes the beautiful fall colors we enjoy each year.

Ever hear of Thomas Cole’s The Voyage of Life series? In 1840 he did this series of paintings that represent an allegory of the four stages, or seasons, of human life:
• In childhood, the infant glides from a dark cave into a rich, green landscape.
• As a youth, the boy takes control of the boat and aims for a shining castle in the sky.
• In manhood, the adult relies on prayer and religious faith to sustain him through rough waters and a threatening landscape.
• Finally, the man becomes old and the angel guides him to heaven across the waters of eternity.
In each painting, accompanied by a guardian angel, the voyager rides the boat on the River of Life. The landscape, corresponding to the seasons of the year, plays a major role in telling the story. And in those paintings you can clearly see the leaves changing colors in the season (manhood) that represents the fall of the voyager’s life.

So what’s this mean to you and me? Things change! Always! Life is full of changes and most of us are creatures of habit. And because we don’t know what’s next, we tend to cling to what we already have and know and are comfortable with. We reminisce about and cherish the past because it’s familiar, it’s already happened and we know how the movie ends. And while that’s generally true, it’s the half of the story that we tend to recognize. The other half is that the things we learn from the past should continually be updating our knowledge of life, and how to process the new things we see and experience, and how to better understand the meaning of who and what we are – that’s the harder part of the story to accept.

With each passing season, and the changes that occur, we need to grow and become wiser. And that wisdom should create the stuff we need to constantly be better, to do the things we’re called upon to do each day better, and to help those around us to become better. But you won’t learn anything or get better if you’re not open to the changes – natural or man-made – that occur every day.

I wish you could join me here at our camp to look across the lake at the beauty that is unfolding. The scene is constant; the colors let me know that time is marching on. On the one hand I could worry that the seasons of my life are marching on, or, on the other, I could be challenged by the things I’ve learned this year that will help me to be wiser and more thoughtful in the future. One stunts natural growth; the other invigorates a sense of wonder about the world around us and the endless possibilities that potentially exist.
The choice is ours. And while these leaves will begin to fade and fall soon, the inspiration that they trigger should last a lifetime. That’s the voyage of life, and I’m sure glad to be on it!

My message this week is about being inspired to dream about improving our lives: “You are never too old to set another goal or to dream a new dream.” -C.S. Lewis

Clive Staples Lewis (1898 – 1963), commonly referred to as C. S. Lewis and known to his friends and family as "Jack", was a British novelist, academic, medievalist, literary critic, essayist, lay theologian and Christian apologist from Ireland.

Got any new dreams today? Not the ones you try to remember and think about when you wake, but the kind that have you excited to try something really new. Everyone can dream, but not everyone has the curiosity, energy, courage and stamina to attempt and achieve their dreams. Most want things to be smooth and easy, with no surprises or challenges that can potentially make you look silly. Fact is, without those challenges or knowing how to recover from looking silly you’ll never get to experience what it is to learn from trying something new. You can tell the ones who are into this – the twinkle in their eye, the bounce in their step, the way they carry themselves. If that’s you, and you’ll know if it is, then set another goal today, dream another dream today and make a pledge to be creative and innovative today. Go ahead – you’re never too old!

Friday, September 30, 2011 at 5:24 AM

Friday, September 23, 2011

“Everyone wants to be true to something, and we’re true to you” - that’s the marketing tagline for Jet Blue’s travel rewards program. I know because it kept scrolling across the little screen on the back of the seat in front of me when I recently flew across country. It’s okay in the context of what they’re trying to promote, but it also might apply to more than just loyalty programs.
And it may be that because people naturally want to be ‘true blue’ to so many things, it becomes overused and almost trite. That’s too bad. Because being ‘true blue’ can be a good thing.

First: ever wonder where the term ‘true blue’ comes from?
• Loyal and unwavering in one's opinions or support for a cause or product.
• 'True blue' is supposed to derive from the blue cloth that was made at Coventry, England in the late middle-ages. The town's dyers had a reputation for producing material that didn't fade with washing, i.e. it remained 'fast' or 'true'. The phrase 'as true as Coventry blue' originated then and is still used (in Coventry at least).
• True Blue is an old naval/sailing term meaning honest and loyal to a unit or cause.
• And dictionaries say that true blue refers to “people of inflexible integrity or fidelity”.

And second: does ‘true blue’ really mean anything in this era of fast food and slick advertising? There are lots of loyalty programs – hotels, airlines, slot clubs, retail stores, pop food brands, credit cards, clothing, wine, restaurants, movie theaters, travel sites, theme parks, computer games and countless more – and they all try to get you to stick with them by rewarding you in all kinds of ways: points, miles, free gifts, shows, food and on and on. But it seems a bit contrived, as if there’s some Oz-like character behind a curtain trying to entice you with these awards (read: bribes). Imagine if this kind of thing were done with going to school or work, singing in a choir, participating in some community event, volunteering your time to some worthy cause, remaining friends or staying in a relationship… it doesn’t seem as appropriate in those cases, does it? Think of someone or something you really like: do you really and truly like them or it, or do you need to be bribed with rewards to feel that way? Of course you don’t. So why do the airlines and hotels and all those other things we purchase have to bribe us to like them?
But – there are companies out there that do understand what it takes to win your loyalty:
• Southwest Airlines was one of the first companies that made having fun and using common sense part of their strategy for success. Singing the safety jingle, devising a different boarding routine and setting the record for on-time departures set them apart and won over customers. They got it!
• Zappos doesn’t give you anything extra to make you want to come back – they believe that great service plus free shipping and returns will do that. Everyone said that nobody would buy shoes online – wrong. Zappos gets it!
• Apple wins and keeps their customers’ loyalty by incubating and introducing cool new ideas and products all the time. And they’re just about the biggest and most successful and most admired company on the planet. They get it!

But for every Southwest Airlines-type great experience there are hundreds of others that underperform and underwhelm. So they sign you up and hope that rewarding your loyalty overcomes the other things they do that destroy your loyalty. Seems to me they just don’t get it.

Jet Blue says they give you more leg room – that’s true if you pay extra for those few rows that have it. How come they just don’t make eye contact and smile more? How come they can’t get the bags to the conveyor in less than 30 minutes (which may not seem like much to them, but after a cross-country flight an extra 30 minutes is painful)? How come they don’t get it? I want to join their loyalty program about as much as I want to have my teeth drilled. And then they spend so much time and energy trying to give you that free round-trip ticket if you apply for their credit card – you know, the one that has annual fees and high interest rates. How come they don’t get it?
Why can’t they just treat me like a loyal and valued customer, like someone they genuinely like and appreciate, like they’d like to be treated if they had to fly on someone else’s airline? Seems to me they just don’t get it.

Most of the good things in life are rooted in quality, trust and respect. People you work with and for, family that you live with and love, things you do for fun and relaxation, games you gladly play with others, friendships you’re lucky enough to have, clubs you join and actively participate in, activities you sign up for – they’re all based on the simple premise that things that are good are that way because they are genuinely good and fun and worthwhile. And that’s why you stick with them loyally. But all these other kinds of loyalty programs are contrived. And yet we sign up for them like they’re free and worthwhile. They’re not free – we pay for the increased costs of these rewards. And they’re not worthwhile - we’re treated poorly by those who have the attitude that the cheap rewards they give are enough to overcome the thoughtless and robotic service they go through the motions of providing.

Next time someone asks if I’ve signed up for their loyalty program I’m going to give them a tip: treat me nicely, treat me fairly, treat me respectfully, act like you really do care, thank me like you really mean it and treat me like you really do want me as a customer – and I’ll come back as often as I can or need to, willingly and freely. When are all these marketing geniuses going to wake up? When are they going to be ‘true blue’ to the Golden Rule?

My message this week is about how excellence can lead to greatness: ”If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work.” -Thomas J. Watson

Thomas John Watson, Sr. (1874 – 1956) was president of International Business Machines (IBM) and oversaw that company's growth into a global force from 1914 to 1956.
Watson developed IBM's distinctive management style and corporate culture, and turned the company into a highly effective selling organization. He was called the world's greatest salesman.

Do you want to achieve excellence? Some people don’t – they’re content to work alongside others, doing just enough to get by and satisfy their basic needs, content to have a few toys, take life easy and not make waves. But is that what you want – would that be enough for you? If not, then you’ve got to decide right now to start going farther, looking to help others, caring more, trying harder, and being more of what you can be today. You’ve got to take it to the next level – in commitment, in energy, in enthusiasm, in being a role model, in paying closer attention to details, in always striving to do and be all that you’re capable of. As of this second, you’ve got to quit doing less-than-excellent work. That’s how YOU can achieve excellence - (note: the emphasis is on YOU)!

at 5:34 AM

Friday, September 16, 2011

Where were you on 9/11? For most of us the answers are permanently etched in our minds. Like the attack on Pearl Harbor and VE Day for our parents, or the moment John Kennedy was shot or Armstrong set foot on the moon for the baby boomers, 9/11 has become one of the iconic moments in time for all who were alive then. I remember exactly where I was, what I was doing, who told me and how I felt the day Kennedy was killed; and like most people I was watching on our little black-and-white TV when Ruby shot Oswald two days later. I remember my teacher bringing me into the assembly hall to watch when Armstrong took “one small step for man, one giant leap for mankind”. There have been literally trillions of moments in my life, but these iconic ones stand out, frozen in time and in my mind. And then there was 9/11.

In these weekly blogs I try to write about things that catch my attention.
These stories tend to take on meanings beyond the specific incidents I mention, meanings that relate to life’s larger issues and that can possibly teach us something. But this one goes way beyond any of the moments and incidents that caught my attention - 9/11 caught the attention of everyone on the planet. There aren’t many things that reach that level, things that stop time, that leave indelible memories about where we were and who we were with, that immediately bring back visceral feelings and emotions of a long ago but clearly remembered moment in time. 9/11 does all of those things and more.

My wife and I were in NYC: preparing to get on the George Washington Bridge to go into Manhattan when the first plane hit; coming to a complete stop on the road and in our lives; watching in fear and confusion as the second plane hit; staring in horror as first one and then the other building fell; hearing about the other plane crashes in Washington and Pennsylvania; staying glued to the radio and then the television while the world stood still.

We drove away from the City that day in fear and confusion – trying to get as far away as possible and to make sense of how and why this happened. As we drove we came upon a rise in the road where all the cars were stopped; people were standing beside their cars and looking back in the direction we came from, so we stopped too. In the distance there was smoke where the towers so recently stood; nobody was talking; everyone was crying. We eventually made it to our home in the Adirondack Mountains, safe but overwhelmed by the fear and confusion that enveloped the world as we knew it.

I can see and feel that day now as if it were yesterday. I guess that’s what an iconic moment is: something we remember – clearly and forever. And now, in what seems like no time at all, ten years have passed and the memorial to those killed has been unveiled. The reading of the names this past Sunday stopped and stunned us all over again.
The tolling of the bells in New York, Washington and Shanksville brought us back to that moment in time. The sight of the grieving families and friends as they touched and etched the names of their fathers, brothers, mothers, sisters, relatives and friends brought us together now as we were back then. The pettiness and partisanship that dominates the news was pushed aside for just a moment as we all stood in solemn and shared tribute to something that transcended all the comparatively meaningless stuff that normally seeks to grab our attention. As sad as the memories are, the togetherness helps us get through the memories now like it did when this terrible tragedy first happened. Why can’t we make that feeling last?

A man named Al DiLascia from Chicopee, Mass. wrote a letter to the editor of the New York Times this week that summed this up: For one brief moment on September 11, 2001, time seemed to stand still. People sought family members and recognized the importance of family. Acts of charity were plentiful. There was an assessment of life and what is really important. Places of worship were full. People unashamedly prayed. For one brief moment...

Let’s try to remember – not just the events that make up these iconic moments, but what they really mean, and what’s really important. Don’t let a day pass that you don’t tell those you love how much you care and show it in thoughtful and meaningful ways, touch the people and things that are most important to you, reach out and give to those in need, and quietly count and give thanks for all the blessings that are in your life. Do whatever you have to do to make the meaning of your iconic moments last!
My message this week is about being loyal to the people and things that are important in your life: “Loyalty is something you give regardless of what you get back, and in giving loyalty, you're getting more loyalty; and out of loyalty flow other great qualities.” -Charles Edward Jones

Colonel Charles Edward ("Chuck") Jones (1952 – 2001) was a United States Air Force officer, a computer programmer, and an astronaut in the USAF Manned Spaceflight Engineer Program. He was killed in the attacks of September 11, 2001, aboard American Airlines Flight 11, the first plane to hit the World Trade Center, at 8:46 a.m.

All of the great values we read and write about seem to be interconnected, and loyalty may be the one at the hub of them all. Think of the people and things you’re loyal to, and then note the other great qualities that come from that loyalty. Friendship, success, pride, humility, professionalism, integrity, team spirit and passion are a few that immediately come to mind. These are the qualities and values that you hope to find in others, and certainly they’re the ones to which you should always aspire. But to get loyalty you need to give it, and that means you must be true to your work and family and friends, forgiving in your nature, humble in your approach to others, sincere in your dealings with all, and understanding in the complex and competitive world that we live in. Look for ways to give loyalty today without attaching any strings for reciprocity. And don’t be surprised if you then start to get loyalty and all the other great qualities flowing back to you in return.

Stay well. And please say a prayer for these heroes and all the others in your life who’ve passed.

at 6:20 AM

Friday, September 9, 2011

Vacation homes in the Adirondacks are commonly referred to as camps – my family is fortunate to have one and, as you know from some of my previous blogs, we’ve spent a lot of time there this year.
These are not to be confused with the day and overnight camps that parents send their kids to. This is about the second kind of camp. I went to an overnight camp as a kid and loved it, but that’s a story for another time.

This tale begins at Camp Nazareth (that’s the name of the overnight camp at the end of our lake). It’s run by the local Catholic Diocese, which has had little success in recent years attracting enough kids. More often than not, this wonderful facility – it can hold up to 300 kids at any one time – is terribly underused. Fortunately, it seems that they’ve now discovered ways to attract alternate users like family reunions, corporate retreats and, just this past week, a high school crew team (Google “rowing (sport)” to learn more about this sport on Wikipedia). And that crew team caught our attention.

Our family’s camp (we call it “The Point”) is on the water and we can easily see when anyone is on the lake. While sitting on our dock one morning we were surprised to see this crew team go by. If you’ve never seen a crew team before, they operate in long narrow boats (like large kayaks) that are referred to as “sculls” – these are two- to eight-person boats that are rowed by that many team members, each of whom operates one oar. In this case, there were two eight-person sculls (one with all men and the other all women) that were practicing. Mind you, this is not an everyday sight – there are a few motorboats and a lot of canoes and kayaks on our lake, so the sight of these two sculls was a bit of a surprise. Alongside these two sculls was a small motorboat in which sat the coach, who had a megaphone and was giving instructions and commands. On the first day of what appeared to be one of their initial practice sessions, these two sculls were having what was obviously some beginner’s training.

And here’s another key bit of information: the team has to row in very close order for the boat to move along smoothly.
If any of the rowers is out of synch (even a little) the boat can very easily (and visibly) miss a beat. And if any of those misses are overly pronounced the boats can stop altogether or even capsize. So at the beginning of this training the coach definitely wanted to take it slow. As the week progressed, however, the boats began to move more smoothly, and over time they got smoother and faster. And since the object of crew is to beat the competition, smooth and fast is definitely better.

In order to get smoother and faster, the individual team members all have to practice at learning not only how to improve their own skills but also how to be in better synch with all the other members of their team. In crew, as in so many other aspects of life, both are critical (as in one without the other is not worth much).

As we watched this unfold before us, we started to reflect on how the basic lessons being learned out on the lake apply to just about everything we do in life (and here I need to confess that my wife realized this before I did). Being effective and functional at anything – playing with friends on the school yard, getting along as a family, working with colleagues, participating on a sports team, singing in a choir, building something with others, participating in community events – really is about learning how to improve your own skills while also performing in concert with others. Learning anything alone is one thing; learning it together and then interacting with others is a whole different thing. The key to life is learning both, because one without the other is really not worth much.

And here was a live metaphor for this right on the lake in front of us – and just like that my whole professional life flashed before me as I watched this training unfold.
Each of these young athletes was working hard to learn how to be the best they could be, they and their teammates were learning how to interact with each other more effectively, the coaches were seeing the results of their hard work and practice, and those of us on the sidelines were rewarded by seeing how things can and should work when effective instruction, practice and coaching all come together. We don’t often get to see things so clearly, or watch how the rituals of cause and effect play out so clearly. Simply put: this was a real lesson about life. And, in part because of where we were, and also because of what we saw and then realized, we were again moved to exclaim “that’s the Point!”

My message this week is about finding things you can be passionate about, because they define who and what you are. “I know that I have found fulfillment. I have an object in life, a task ... a passion.” Amantine Lucile Aurore Dupin, later Baroness Dudevant (1804 – 1876), best known by her pseudonym George Sand, was a French novelist and memoirist. Have you found fulfillment? Not just a momentary or fleeting sense of accomplishment, but a lasting and ongoing feeling that “this is it”. We all do lots of little and mostly disconnected things – chores, work, hobbies – and these achieve short-term goals or complete individual assignments. But every now and then one big thing comes along that is more about defining our style or purpose, and these make us who and what we are. Now it could be a car or a job – those certainly say a lot about you. But to find fulfillment – to know that something is really about the “you” that is truly you – that’s a real find. And that’s the kind of thing that passion is truly built upon. Something you love deeply, that you can’t stop thinking about, that you can’t wait to get up and do each day, and that you truly care more about than almost anything else. 
That’s the kind of passion that is truly a treasure – and that’s the kind of object in life that you want to be on the lookout for – today and every day. That’s the Point! at 5:14 AM

Friday, September 2, 2011

Last week was something else – an earthquake and a hurricane and tornadoes and sunshine and hot and cold… I'm having trouble remembering where I am. I grew up in upstate New York and experienced four distinct seasons each year – but there were no earthquakes or tornadoes. I later moved to Nevada for nearly a quarter century and experienced dry heat – but there were never any hurricanes or tornadoes. I then moved to the beaches of California where the sun shines 300+ days a year, the temperature rarely gets above 75, and earthquakes and wildfires are a nuisance – but there are no tornadoes or hurricanes. And now I’m back in New York (city and upstate) and just about everything but wildfires has hit here in the past 8 months. What’s going on? I didn’t own a winter coat – and the record snowfalls and cold last winter drove me to Lands’ End with a singleness of purpose. I didn’t own boots or an umbrella, and the wet snow and rains taught me a lot about what it means to stay dry. I’m used to driving wherever I want to go, and not having a car here to help navigate through the varying weather patterns has made me a fan of the Weather Channel. I never thought about the weather, never worried about what I’d wear or looked at the skies for clues to what’s coming, and now that the weather changes in the blink of an eye I am obsessed with meteorology. But last week, depending on where you were in the path of all this weather, meteorologists either got it right, mostly right, or wrong. Hey – they’re human, so maybe we shouldn’t hold them to such a high standard as always being right. I mean, is anybody always right? 
Maybe we should take what they say and apply some old-fashioned lore to this inexact science – such as:

Red sky at night, sailor's delight; red sky in the morning, sailors take warning.

When the wind is blowing in the North, no fisherman should set forth; when the wind is blowing in the East, 'tis not fit for man nor beast; when the wind is blowing in the South, it brings the food over the fish's mouth; when the wind is blowing in the West, that is when the fishing's best!

When halo rings the moon or sun, rain's approaching on the run.

When windows won't open, and the salt clogs the shaker, the weather will favor the umbrella maker!

No weather is ill, if the wind be still.

When sounds travel far and wide, a stormy day will betide.

If clouds move against the wind, rain will follow.

A coming storm your shooting corns presage, and aches will throb, your hollow tooth will rage.

I wouldn’t normally be thinking about these things, but all this crazy weather has me spooked. Is it global warming, or just the fact that weather seems unpredictable? Were the winters way more intense when we were kids, or did it just seem that way because we were kids? Can weather really be predicted correctly all the time by these meteorologists, or should we take what they say with a “grain of salt”? Or should we rely more on our own common sense as aided by some of these old-fashioned sayings? Here in New York last week the mayor and the meteorologists got it wrong – but not by much. The winds blew and the rains fell and, though there was less flooding and damage than predicted here, they made damn sure we were prepared by scaring the daylights out of us with their dire warnings. Now some people are complaining because they scared us; but those same people complained when they didn’t scare us before last winter’s massive snowstorm, or that they didn’t scare others enough before Katrina. Fact is, lots of people are never happy, especially if they’re inconvenienced. 
But potentially saving lives is better than trying to apologize for not saving lives: isn’t that what ‘better safe than sorry’ is all about? Maybe we expect too much from the elected officials who we don’t really like or trust anyway (especially when they are inconveniencing us). I guess they’re damned if they do and damned if they don’t. I’ve even read some editorials about how this should make us either for or against big government. Come on, it was just a storm. And even though lots of people got flooded out, and there was lots of damage to homes and fields and trees and power lines, and lots of high water and wind, I’m relieved because it was less than predicted here on my street. I’m really sad for those to whom it was as much or more than predicted. And even though I don’t blame anyone, I sure as hell would like to know what all this crazy weather means, and whether a red sky at night really does mean a sailor’s delight.

My message this week is about loyalty, and whether we need to think about how loyal we are to others and how loyal we need to be to ourselves: “Loyalty to petrified opinion never yet broke a chain or freed a human soul.” – Mark Twain. Mark Twain achieved great success as a writer and public speaker. His wit and satire earned praise from critics and peers, and he was a friend to presidents, artists, industrialists, and European royalty. Loyalty can be both good and bad. People often remain loyal long after the reason for doing so has ended. If the reason you became loyal has petrified, then you need to re-examine your motives and goals; you need to break free when the times demand it and it’s the right thing to do. Loyalty should be given to the best ideas, the highest principles, the most ethical leaders, the greatest challenges, and to the most extraordinary opportunities. But sometimes we remain loyal just because we are afraid to appear disloyal or we’re afraid to re-examine that loyalty. 
This conflict can be a Catch-22, or it can be a moment of re-commitment and rebirth. And just like a plant that’s been sitting for a long time, it’s a good idea to re-pot our beliefs to make sure that our roots continue to grow deeper and stronger. So look at your loyalties today and make sure they’re where they should be. Stay warm, dry and well! at 5:36 AM
Acidosis is a condition in which there is excessive acid in the body fluids. It is the opposite of alkalosis (a condition in which there is excessive base in the body fluids).

Causes, incidence, and risk factors: The kidneys and lungs maintain the balance (proper pH level) of chemicals called acids and bases in the body. Acidosis occurs when acid builds up or when bicarbonate (a base) is lost. Acidosis is classified as either respiratory acidosis or metabolic acidosis.

Respiratory acidosis develops when there is too much carbon dioxide (an acid) in the body. This type of acidosis is usually caused by a decreased ability to remove carbon dioxide from the body through effective breathing. Other names for respiratory acidosis are hypercapnic acidosis and carbon dioxide acidosis. Causes of respiratory acidosis include:
- Chest deformities, such as kyphosis
- Chest injuries
- Chest muscle weakness
- Chronic lung disease
- Overuse of sedative drugs

Metabolic acidosis develops when too much acid is produced or when the kidneys cannot remove enough acid from the body. There are several types of metabolic acidosis:
- Diabetic acidosis (also called diabetic ketoacidosis and DKA) develops when substances called ketone bodies (which are acidic) build up during uncontrolled diabetes.
- Hyperchloremic acidosis results from excessive loss of sodium bicarbonate from the body, as can happen with severe diarrhea.
- Lactic acidosis is a buildup of lactic acid. This can be caused by:
  - Exercising vigorously for a very long time
  - Liver failure
  - Low blood sugar (hypoglycemia)
  - Medications such as salicylates
  - Prolonged lack of oxygen from shock, heart failure, or severe anemia

Other causes of metabolic acidosis include:

Signs and tests:
- Arterial or venous blood gas analysis
- Serum electrolytes
- Urine pH

An arterial blood gas analysis or serum electrolytes test, such as a basic metabolic panel, will confirm that acidosis is present and indicate whether it is metabolic acidosis or respiratory acidosis. Other tests may be needed to determine the cause of the acidosis.

Treatment depends on the cause. See the specific types of acidosis. Acidosis can be dangerous if untreated. Many cases respond well to treatment. See the specific types of acidosis.

Calling your health care provider: Although there are several types of acidosis, all will cause symptoms that require treatment by your health care provider.

Prevention depends on the cause of the acidosis. Normally, people with healthy kidneys and lungs do not experience significant acidosis.

Seifter JL. Acid-base disorders. In: Goldman L, Ausiello D, eds. Cecil Medicine. 23rd ed. Philadelphia, Pa: Saunders Elsevier; 2007: chap 119.

Review Date: 11/15/2009. Reviewed By: David C. Dugdale, III, MD, Professor of Medicine, Division of General Medicine, Department of Medicine, University of Washington School of Medicine. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc.

The information provided herein should not be used during any medical emergency or for the diagnosis or treatment of any medical condition. A licensed medical professional should be consulted for diagnosis and treatment of any and all medical conditions. Call 911 for all medical emergencies. Links to other sites are provided for information only -- they do not constitute endorsements of those other sites. © 1997- A.D.A.M., Inc. 
Any duplication or distribution of the information contained herein is strictly prohibited.
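The classification described in the article (low pH with retained carbon dioxide suggests respiratory acidosis; low pH with depleted bicarbonate suggests metabolic acidosis) can be sketched as a small decision function. This is a minimal illustration and not clinical software: the function name and the textbook normal ranges used as thresholds are assumptions added here, and real blood-gas interpretation must also account for mixed and compensated disorders.

```python
def classify_acidosis(ph, paco2, hco3):
    """Rough primary-disturbance check from arterial blood gas values.

    ph    -- arterial pH (normal is roughly 7.35-7.45)
    paco2 -- partial pressure of CO2 in mmHg (normal is roughly 35-45)
    hco3  -- serum bicarbonate in mEq/L (normal is roughly 22-26)
    """
    if ph >= 7.35:
        return "no acidosis"
    # Low pH with retained CO2 fits the article's definition of
    # respiratory acidosis (too much carbon dioxide, an acid).
    if paco2 > 45:
        return "respiratory acidosis"
    # Low pH with depleted bicarbonate fits metabolic acidosis
    # (acid gained, or bicarbonate, a base, lost).
    if hco3 < 22:
        return "metabolic acidosis"
    return "indeterminate (further testing needed)"
```

For example, values of pH 7.25 with a PaCO2 of 60 fall on the respiratory branch, while pH 7.20 with a bicarbonate of 12 falls on the metabolic branch, mirroring the two categories the article defines.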
The National Museum of the American Indian on the National Mall opened in September 2004. Fifteen years in the making, it is the first national museum in the country dedicated exclusively to Native Americans. The five-story, 250,000-square-foot, curvilinear building is clad in a golden-colored Kasota limestone that is designed to evoke natural rock formations that have been shaped by wind and water over thousands of years. The museum is set in a 4.25-acre site and is surrounded by simulated wetlands. The museum's east-facing entrance, its prism window and its 120-foot-high space for contemporary Native performances are direct results of extensive consultations with Native peoples. The museum's architect and project designer is the Canadian Douglas Cardinal (Blackfoot); its design architects are GBQC Architects of Philadelphia and architect Johnpaul Jones (Cherokee/Choctaw). Disagreements during construction led to Cardinal being removed from the project, but the building retains his original design intent, and his continued input enabled its completion. The museum's project architects are Jones & Jones Architects and Landscape Architects Ltd. of Seattle and SmithGroup of Washington, D.C., in association with Lou Weller (Caddo), the Native American Design Collaborative, and Polshek Partnership Architects of New York City; Ramona Sakiestewa (Hopi) and Donna House (Navajo/Oneida) also served as design consultants. The landscape architects are Jones & Jones Architects and Landscape Architects Ltd. of Seattle and EDAW Inc., of Alexandria, Virginia.
Part 2 - Those Who Are Unable to See the Fact of Creation

The theory of evolution is a philosophy and a conception of the world that produces false hypotheses, assumptions and imaginary scenarios in order to explain the existence and origin of life in terms of mere coincidences. The roots of this philosophy go back as far as antiquity and ancient Greece. All atheist philosophies that deny creation directly or indirectly embrace and defend the idea of evolution. The same holds today for all the ideologies and systems that are antagonistic to religion. The evolutionary notion has been cloaked in a scientific disguise for the last century and a half in order to justify itself. Though put forward as a supposedly scientific theory during the mid-19th century, the theory, despite all the best efforts of its advocates, has not so far been verified by any scientific finding or experiment. Indeed, the "very science" on which the theory depends so greatly has demonstrated and continues to demonstrate repeatedly that the theory has no merit in reality. Laboratory experiments and probabilistic calculations have made it definitively clear that the amino acids from which life arises cannot have been formed by chance. The cell, which supposedly emerged by chance under primitive and uncontrolled terrestrial conditions according to evolutionists, still cannot be synthesised even in the most sophisticated, high-tech laboratories of the 20th century. Not a single "transitional form", a creature of the kind supposed to show the gradual evolution of advanced organisms from more primitive ones as neo-Darwinist theory claims, has ever been found anywhere in the world despite the most diligent and prolonged search in the fossil record. In their attempts to gather evidence for evolution, evolutionists have unwittingly proven by their own efforts that evolution cannot have happened at all! 
The person who originally put forward the theory of evolution, essentially in the form that it is defended today, was an amateur English biologist by the name of Charles Robert Darwin. Darwin first published his ideas in a book entitled On the Origin of Species by Means of Natural Selection in 1859. Darwin claimed in his book that all living beings had a common ancestor and that they evolved from one another by means of natural selection. Those that best adapted to the habitat transferred their traits to subsequent generations, and by accumulating over great epochs, these advantageous qualities transformed individuals into totally different species from their ancestors. The human being was thus the most developed product of the mechanism of natural selection. In short, the origin of one species was another species. Darwin's fanciful ideas were seized upon and promoted by certain ideological and political circles, and the theory became very popular. The main reason was that the level of knowledge of those days was not yet sufficient to reveal that Darwin's imaginary scenarios were false. When Darwin put forward his assumptions, the disciplines of genetics, microbiology, and biochemistry did not yet exist. If they had, Darwin might easily have recognised that his theory was totally unscientific and thus would not have attempted to advance such meaningless claims: the information determining species already exists in the genes, and it is impossible for natural selection to produce new species by altering genes. While the echoes of Darwin's book reverberated, an Austrian botanist by the name of Gregor Mendel discovered the laws of inheritance in 1865. Although little known before the end of the century, Mendel's discovery gained great importance in the early 1900s with the birth of the science of genetics. Some time later, the structures of genes and chromosomes were discovered. 
The discovery, in the 1950s, of the DNA molecule, which incorporates genetic information, threw the theory of evolution into a great crisis, because the origin of the immense amount of information in DNA could not possibly be explained by coincidental happenings. Besides all these scientific developments, no transitional forms, which were supposed to show the gradual evolution of living organisms from primitive to advanced species, have ever been found despite years of search. These developments ought to have resulted in Darwin's theory being banished to the dustbin of history. However, it was not, because certain circles insisted on revising, renewing, and elevating the theory to a scientific platform. These efforts gain meaning only if we realise that behind the theory lie ideological intentions rather than scientific concerns. Nevertheless, some circles that believed in the necessity of upholding a theory that had reached an impasse soon set up a new model. The name of this new model was neo-Darwinism. According to this theory, species evolved as a result of mutations, minor changes in their genes, and the fittest ones survived through the mechanism of natural selection. When, however, it was proved that the mechanisms proposed by neo-Darwinism were invalid and minor changes were not sufficient for the formation of living beings, evolutionists went on to look for new models. They came up with a new claim called "punctuated equilibrium" that rests on no rational or scientific grounds. This model held that living beings suddenly evolved into another species without any transitional forms. In other words, species with no evolutionary "ancestors" suddenly appeared. This was a way of describing creation, though evolutionists would be loath to admit this. They tried to cover it up with incomprehensible scenarios. For instance, they said that the first bird in history could all of a sudden inexplicably have popped out of a reptile egg. 
The same theory also held that carnivorous land-dwelling animals could have turned into giant whales, having undergone a sudden and comprehensive transformation. These claims, totally contradicting all the rules of genetics, biophysics, and biochemistry, are as scientific as fairy tales of frogs turning into princes! Nevertheless, being distressed by the crisis that the neo-Darwinist assertion was in, some evolutionist paleontologists embraced this theory, which has the distinction of being even more bizarre than neo-Darwinism itself. The sole purpose of this model was to provide an explanation for the gaps in the fossil record that the neo-Darwinist model could not explain. However, it is hardly rational to attempt to explain the gap in the fossil record of the evolution of birds with a claim that "a bird popped all of a sudden out of a reptile egg", because, by the evolutionists' own admission, the evolution of a species to another species requires a great and advantageous change in genetic information. However, no mutation whatsoever improves the genetic information or adds new information to it. Mutations only derange genetic information. Thus, the "gross mutations" imagined by the punctuated equilibrium model would only cause "gross", that is "great", reductions and impairments in the genetic information. The theory of punctuated equilibrium was obviously merely a product of the imagination. Despite this evident truth, the advocates of evolution did not hesitate to honour this theory. The fact that the model of evolution proposed by Darwin could not be proved by the fossil record forced them to do so. Darwin claimed that species underwent a gradual change, which necessitated the existence of half-bird/half-reptile or half-fish/half-reptile freaks. However, not even one of these "transitional forms" was found despite the extensive studies of evolutionists and the hundreds of thousands of fossils that were unearthed. 
Evolutionists seized upon the model of punctuated equilibrium with the hope of concealing this great fossil fiasco. As we have stated before, it was very evident that this theory is a fantasy, so it very soon consumed itself. The model of punctuated equilibrium was never put forward as a consistent model, but rather used as an escape in cases that plainly did not fit the model of gradual evolution. Since evolutionists today realise that complex organs such as eyes, wings, lungs, brain and others explicitly refute the model of gradual evolution, in these particular points they are compelled to take shelter in the fantastic interpretations of the model of punctuated equilibrium.

Is there any Fossil Record to Verify the Theory of Evolution?

The theory of evolution argues that the evolution of a species into another species takes place gradually, step-by-step over millions of years. The logical inference drawn from such a claim is that monstrous living organisms called "transitional forms" should have lived during these periods of transformation. Since evolutionists allege that all living things evolved from each other step-by-step, the number and variety of these transitional forms should have been in the millions. If such creatures had really lived, then we should see their remains everywhere. In fact, if this thesis is correct, the number of intermediate transitional forms should be even greater than the number of animal species alive today, and their fossilised remains should be abundant all over the world. Since Darwin, evolutionists have been searching for fossils, and the result has been for them a crushing disappointment. Nowhere in the world – neither on land nor in the depths of the sea – has any intermediate transitional form between any two species ever been uncovered. Darwin himself was quite aware of the absence of such transitional forms. It was his greatest hope that they would be found in the future. 
Despite his hopefulness, he saw that the biggest stumbling block to his theory was the missing transitional forms. This is why, in his book On the Origin of Species, he wrote: Darwin was right to be worried. The problem bothered other evolutionists as well. A famous British paleontologist, Derek V. Ager, admits this embarrassing fact: The gaps in the fossil record cannot be explained away by the wishful thinking that not enough fossils have yet been unearthed and that these missing fossils will one day be found. Another evolutionist paleontologist, T. Neville George, explains the reason:

Life Emerged on Earth Suddenly and in Complex Forms

When terrestrial strata and the fossil record are examined, it is seen that living organisms appeared simultaneously. The oldest stratum of the earth in which fossils of living creatures have been found is that of the "Cambrian", which has an estimated age of 530-520 million years. Living creatures that are found in the strata belonging to the Cambrian period emerged in the fossil record all of a sudden, without any pre-existing ancestors. The vast mosaic of living organisms, made up of such great numbers of complex creatures, emerged so suddenly that this miraculous event is referred to as the "Cambrian Explosion" in scientific literature. Most of the organisms found in this stratum have highly advanced organs like eyes, or systems seen in organisms with a highly advanced organisation, such as gills, circulatory systems, and so on. There is no sign in the fossil record to indicate that these organisms had any ancestors. Richard Monastersky, the editor of Earth Sciences magazine, states about the sudden emergence of living species: Not being able to find answers to the question of how earth came to overflow with thousands of different animal species, evolutionists posit an imaginary period of 20 million years before the Cambrian Period to explain how life originated and "the unknown happened". 
This period is called the "evolutionary gap". No evidence for it has ever been found, and the concept is still conveniently nebulous and undefined even today. In 1984, numerous complex invertebrates were unearthed in Chengjiang, set in the central Yunnan plateau in the high country of southwest China. Among them were trilobites, now extinct, but no less complex in structure than any modern invertebrate. The Swedish evolutionist paleontologist Stefan Bengtson explains the situation as follows: The sudden appearance of these complex living beings with no predecessors is no less baffling (and embarrassing) for evolutionists today than it was for Darwin 135 years ago. In nearly a century and a half, they have advanced not one step beyond the point that stymied Darwin. As may be seen, the fossil record indicates that living things did not evolve from primitive to advanced forms, but instead emerged all of a sudden and in a perfect state. The absence of the transitional forms is not peculiar to the Cambrian period. Not a single transitional form verifying the alleged evolutionary "progression" of vertebrates – from fish to amphibians, reptiles, birds, and mammals – has ever been found. Every living species appears instantaneously and in its current form, perfect and complete, in the fossil record. In other words, living beings did not come into existence through evolution. They were created.

Deceptions in Drawings

The fossil record is the principal source for those who seek evidence for the theory of evolution. When inspected carefully and without prejudice, the fossil record refutes the theory of evolution rather than supporting it. Nevertheless, misleading interpretations of fossils by evolutionists and their prejudiced representation to the public have given many people the impression that the fossil record indeed supports the theory of evolution. 
The susceptibility of some findings in the fossil record to all kinds of interpretations is what best serves the evolutionists' purposes. The fossils unearthed are most of the time unsatisfactory for reliable identification. They usually consist of scattered, incomplete bone fragments. For this reason, it is very easy to distort the available data and to use it as desired. Not surprisingly, the reconstructions (drawings and models) made by evolutionists based on such fossil remains are prepared entirely speculatively in order to confirm evolutionary theses. Since people are readily affected by visual information, these imaginary reconstructed models are employed to convince them that the reconstructed creatures really existed in the past. Evolutionist researchers draw human-like imaginary creatures, usually setting out from a single tooth, or a mandible fragment or a humerus, and present them to the public in a sensational manner as if they were links in human evolution. These drawings have played a great role in the establishment of the image of "primitive men" in the minds of many people. These studies based on bone remains can only reveal very general characteristics of the creature concerned. The distinctive details are present in the soft tissues that quickly vanish with time. With the soft tissues speculatively interpreted, everything becomes possible within the boundaries of the imagination of the reconstruction's producer. Earnest A. Hooton from Harvard University explains the situation like this:

Studies Made to Fabricate False Fossils

Unable to find valid evidence in the fossil record for the theory of evolution, some evolutionists have ventured to manufacture their own. These efforts, which have even been included in encyclopaedias under the heading "evolution forgeries", are the most telling indication that the theory of evolution is an ideology and a philosophy that evolutionists are hard put to defend. 
Two of the most egregious and notorious of these forgeries are described below. Charles Dawson, a well-known doctor and amateur paleoanthropologist, came forth with a claim that he had found a jawbone and a cranial fragment in a pit in the area of Piltdown, England, in 1912. Although the skull was human-like, the jawbone was distinctly simian. These specimens were christened the "Piltdown Man". Alleged to be 500,000 years old, they were displayed as absolute proof of human evolution. For more than 40 years, many scientific articles were written on the "Piltdown Man", many interpretations and drawings were made, and the fossil was presented as crucial evidence of human evolution. In 1949, scientists examined the fossil once more and concluded that the "fossil" was a deliberate forgery consisting of a human skull and the jawbone of an orang-utan. Using the fluorine dating method, investigators discovered that the skull was only a few thousand years old. The teeth in the jawbone, which belonged to an orang-utan, had been artificially worn down, and the "primitive" tools that had conveniently accompanied the fossils were crude forgeries that had been sharpened with steel implements. In the detailed analysis completed by Oakley, Weiner and Clark, this forgery was revealed to the public in 1953. The skull belonged to a 500-year-old man, and the mandibular bone belonged to a recently deceased ape! The teeth had been specially arranged in an array and added to the jaw, and the joints had been filed in order to make them resemble those of a man. Then all these pieces were stained with potassium dichromate to give them a dated appearance. (These stains disappeared when dipped in acid.) 
Le Gros Clark, who was a member of the team that disclosed the forgery, could not hide his astonishment: In 1922, Henry Fairfield Osborn, the director of the American Museum of Natural History, declared that he had found a molar tooth fossil in western Nebraska, near Snake Creek, belonging to the Pliocene period. This tooth allegedly bore the common characteristics of both man and ape. Deep scientific arguments began, in which some interpreted this tooth to be that of Pithecanthropus erectus, while others claimed it was closer to that of modern human beings. This fossil, which aroused extensive debate, was popularly named "Nebraska Man". It was also immediately given a "scientific name": Hesperopithecus haroldcookii. Many authorities gave Osborn their support. Based on this single tooth, reconstructions of Nebraska Man's head and body were drawn. Moreover, Nebraska Man was even pictured with a whole family. In 1927, other parts of the skeleton were also found. According to these newly discovered pieces, the tooth belonged neither to a man nor to an ape. It was realised that it belonged to an extinct species of wild American pig called Prosthennops.

Did Men and Apes Come from a Common Ancestor?

According to the claims of the theory of evolution, men and modern apes have common ancestors. These creatures evolved in time, and some of them became the apes of today, while another group that followed another branch of evolution became the men of today. Evolutionists call the so-called first common ancestors of men and apes "Australopithecus", which means "southern ape". Australopithecus, nothing but an old ape species that has become extinct, has various types. Some of them are robust, while others are small and slight. Evolutionists classify the next stage of human evolution as "Homo", that is, "man". According to the evolutionist claim, the living beings in the Homo series are more developed than Australopithecus, and not very much different from modern man. 
The modern man of our day, Homo sapiens, is said to have formed at the latest stage of the evolution of this species. The fact of the matter is that the beings called Australopithecus in this imaginary scenario fabricated by evolutionists really are apes that became extinct, and the beings in the Homo series are members of various human races that lived in the past and then disappeared. Evolutionists arranged various ape and human fossils in an order from the smallest to the biggest in order to form a "human evolution" scheme. Research, however, has demonstrated that these fossils by no means imply an evolutionary process: some of these alleged ancestors of man were real apes whereas some of them were real humans. Now, let us have a look at Australopithecus, which represents to evolutionists the first stage of the scheme of human evolution. Australopithecus: Extinct Apes Evolutionists claim that Australopithecus are the most primitive ancestors of modern men. These are an old species with a head and skull structure similar to that of modern apes, yet with a smaller cranial capacity. According to the claims of evolutionists, these creatures have a very important feature that authenticates them as the ancestors of men: bipedalism. The movements of apes and men are completely different. Human beings are the only living creatures that move freely about on two feet. Some other animals do have a limited ability to move in this way, but those that do have bent skeletons. According to evolutionists, these living beings called Australopithecus had the ability to walk in a bent rather than an upright posture like human beings. Even this limited bipedal stride was sufficient to encourage evolutionists to project onto these creatures that they were the ancestors of man. However, the first evidence refuting the allegations of evolutionists that Australopithecus were bipedal came from evolutionists themselves. 
Detailed studies made on Australopithecus fossils forced even evolutionists to admit that these looked "too" ape-like. Having conducted detailed anatomical research on Australopithecus fossils in the mid-1970s, Charles E. Oxnard likened the skeletal structure of Australopithecus to that of modern orang-utans: What really embarrassed evolutionists was the discovery that Australopithecus could not have walked on two feet and with a bent posture. It would have been physically very ineffective for Australopithecus, allegedly bipedal but with a bent stride, to move about in such a way because of the enormous energy demands it would have entailed. By means of computer simulations conducted in 1996, the English paleoanthropologist Robin Crompton also demonstrated that such a "compound" stride was impossible. Crompton reached the following conclusion: a living being can walk either upright or on all fours. A type of in-between stride cannot be sustained for long periods because of the extreme energy consumption. This means that Australopithecus could not have been both bipedal and have a bent walking posture. Probably the most important study demonstrating that Australopithecus could not have been bipedal came in 1994 from the research anatomist Fred Spoor and his team in the Department of Human Anatomy and Cellular Biology at the University of Liverpool, England. This group conducted studies on the bipedalism of fossilised living beings. Their research investigated the involuntary balance mechanism found in the cochlea of the ear, and the findings showed conclusively that Australopithecus could not have been bipedal. This precluded any claims that Australopithecus was human-like. The Homo Series: Real Human Beings The next step in the imaginary human evolution is "Homo", that is, the human series. These living beings are humans who are no different from modern men, yet who have some racial differences. 
Seeking to exaggerate these differences, evolutionists represent these people not as a "race" of modern man but as a different "species". However, as we will soon see, the people in the Homo series are nothing but ordinary human racial types. According to the fanciful scheme of evolutionists, the internal imaginary evolution of the Homo species is as follows: first Homo erectus, then archaic Homo sapiens and Neanderthal Man, later Cro-Magnon Man and finally modern man. Despite the claims of evolutionists to the contrary, all the "species" we have enumerated above are nothing but genuine human beings. Let us first examine Homo erectus, whom evolutionists refer to as the most primitive human species. The most striking evidence showing that Homo erectus is not a "primitive" species is the fossil of "Turkana Boy", one of the oldest Homo erectus remains. It is estimated that the fossil was of a 12-year-old boy, who would have been 1.83 meters tall in his adolescence. The upright skeletal structure of the fossil is no different from that of modern man. Its tall and slender skeletal structure totally complies with that of the people living in tropical regions in our day. This fossil is one of the most important pieces of evidence that Homo erectus is simply another specimen of the modern human race. Evolutionist paleontologist Richard Leakey compares Homo erectus and modern man as follows: Leakey means to say that the difference between Homo erectus and us is no more than the difference between Negroes and Eskimos. The cranial features of Homo erectus resulted from their manner of feeding, from genetic migration, and from their not assimilating with other human races for a lengthy period. Another strong piece of evidence that Homo erectus is not a "primitive" species is that fossils of this species have been unearthed aged twenty-seven thousand years and even thirteen thousand years. 
According to an article published in Time – which is not a scientific periodical, but nevertheless had a sweeping effect on the world of science – Homo erectus fossils aged twenty-seven thousand years were found on the island of Java. In the Kow Swamp in Australia, some thirteen-thousand-year-old fossils were found that bore Homo sapiens-Homo erectus characteristics. All these fossils demonstrate that Homo erectus continued living up to times very close to our day and were nothing but a human race that has since been buried in history. Archaic Homo Sapiens and Neanderthal Man Archaic Homo sapiens is the immediate forerunner of contemporary man in the imaginary evolutionary scheme. In fact, evolutionists do not have much to say about these men, as there are only minor differences between them and modern men. Some researchers even state that representatives of this race are still living today, and point to the Aborigines in Australia as an example. Like archaic Homo sapiens, the Aborigines also have thick protruding eyebrows, an inward-inclined mandibular structure, and a slightly smaller cranial volume. Moreover, significant discoveries have been made hinting that such people lived in Hungary and in some villages in Italy until not very long ago. Evolutionists point to human fossils unearthed in the Neander Valley of Germany which have been named Neanderthal Man. Many contemporary researchers define Neanderthal Man as a sub-species of modern man and call it "Homo sapiens neanderthalensis". It is definite that this race lived together with modern humans, at the same time and in the same areas. The findings testify that Neanderthals buried their dead, fashioned musical instruments, and had cultural affinities with the Homo sapiens sapiens living during the same period. Entirely modern skulls and skeletal structures of Neanderthal fossils are not open to any speculation. 
A prominent authority on the subject, Erik Trinkaus of the University of New Mexico, writes: In fact, Neanderthals even had some "evolutionary" advantages over modern men. The cranial capacity of Neanderthals was larger than that of modern man, and they were more robust and muscular than we are. Trinkaus adds: "One of the most characteristic features of the Neanderthals is the exaggerated massiveness of their trunk and limb bones. All of the preserved bones suggest a strength seldom attained by modern humans. Furthermore, not only is this robustness present among the adult males, as one might expect, but it is also evident in the adult females, adolescents, and even children." To put it precisely, Neanderthals are a particular human race that assimilated with other races in time. All of these factors show that the scenario of "human evolution" fabricated by evolutionists is a figment of their imaginations, and that men have always been men and apes always apes. Can Life Result from Coincidences as Evolution Argues? The theory of evolution holds that life started with a cell that formed by chance under primitive earth conditions. Let us therefore examine the composition of the cell with simple comparisons in order to show how irrational it is to ascribe the existence of the cell – a structure which still maintains its mystery in many respects, even at a time when we are about to set foot in the 21st century – to natural phenomena and coincidences. With all its operational systems, systems of communication, transportation and management, a cell is no less complex than any city. 
It contains power stations producing the energy consumed by the cell, factories manufacturing the enzymes and hormones essential for life, a databank where all necessary information about all products to be produced is recorded, complex transportation systems and pipelines for carrying raw materials and products from one place to another, advanced laboratories and refineries for breaking down imported raw materials into their usable parts, and specialised cell membrane proteins for the control of incoming and outgoing materials. These constitute only a small part of this incredibly complex system. Far from being formed under primitive earth conditions, the cell, which in its composition and mechanisms is so complex, cannot be synthesised in even the most sophisticated laboratories of our day. Even with the use of amino acids, the building blocks of the cell, it is not possible to produce so much as a single organelle of the cell, such as a mitochondrion or a ribosome, much less a whole cell. The first cell claimed to have been produced by evolutionary coincidence is as much a figment of the imagination and a product of fantasy as the unicorn. Proteins Challenge Coincidence And it is not just the cell that cannot be produced: the formation, under natural conditions, of even a single protein of the thousands of complex protein molecules making up a cell is impossible. Proteins are giant molecules consisting of amino acids arranged in a particular sequence in certain quantities and structures. These molecules constitute the building blocks of a living cell. The simplest is composed of 50 amino acids, but there are some proteins that are composed of thousands of amino acids. The absence, addition, or replacement of a single amino acid in the structure of a protein in living cells, each of which has a particular function, causes the protein to become a useless molecular heap. 
Incapable of demonstrating the "accidental formation" of amino acids, the theory of evolution founders on the point of the formation of proteins. We can easily demonstrate, with simple probability calculations anybody can understand, that the functional structure of proteins can by no means come about by chance. There are twenty different amino acids. If we consider that an average-sized protein molecule is composed of 288 amino acids, there are 10^300 different possible combinations of amino acids. Of all of these possible sequences, only "one" forms the desired protein molecule. The other amino-acid chains are either completely useless or else potentially harmful to living things. In other words, the probability of the coincidental formation of only one protein molecule cited above is "1 in 10^300". The probability of this "1" occurring out of an "astronomical" number consisting of 1 followed by 300 zeros is for all practical purposes zero; it is impossible. Furthermore, a protein molecule of 288 amino acids is rather a modest one compared with some giant protein molecules consisting of thousands of amino acids. When we apply similar probability calculations to these giant protein molecules, we see that even the word "impossible" becomes inadequate. If the coincidental formation of even one of these proteins is impossible, it is billions of times more impossible for approximately one million of those proteins to come together by chance in an organised fashion and make up a complete human cell. Moreover, a cell is not merely a collection of proteins. In addition to proteins, cells also include nucleic acids, carbohydrates, lipids, vitamins, and many other chemicals such as electrolytes, all of which are arranged harmoniously and with design in specific proportions, both in terms of structure and function. Each functions as a building block or component in various organelles. 
As we have seen, evolution is unable to explain the formation of even a single protein out of the millions in the cell, let alone explain the cell. Prof. Dr. Ali Demirsoy, one of the foremost authorities of evolutionist thought in Turkey, in his book Kalitim ve Evrim (Inheritance and Evolution), discusses the probability of the accidental formation of Cytochrome-C, one of the essential enzymes for life: After these lines, Demirsoy admits that this probability, which he accepted just because it was "more appropriate to the goals of science", is unrealistic: The correct sequence of proper amino acids is simply not enough for the formation of one of the protein molecules present in living things. Besides this, each of the twenty different types of amino acid present in the composition of proteins must be left-handed. Chemically, there are two different types of amino acids, called "left-handed" and "right-handed". The difference between them is the mirror symmetry of their three-dimensional structures, which is similar to that of a person's right and left hands. Amino acids of either of these two types are found in equal numbers in nature and they can bond perfectly well with one another. Yet, research uncovers an astonishing fact: all proteins present in the structure of living things are made up of left-handed amino acids. Even a single right-handed amino acid attached to the structure of a protein renders it useless. Let us for an instant suppose that life came into existence by chance as evolutionists claim. In this case, the right- and left-handed amino acids that were generated by chance should be present in nature in roughly equal amounts. The question of how proteins can pick out only left-handed amino acids, and how not even a single right-handed amino acid becomes involved in the life process, is something that still confounds evolutionists. 
In the Britannica Science Encyclopaedia, an ardent defender of evolution, the authors indicate that the amino acids of all living organisms on earth and the building blocks of complex polymers such as proteins have the same left-handed asymmetry. They add that this is tantamount to tossing a coin a million times and always getting heads. In the same encyclopaedia, they state that it is not possible to understand why molecules become left-handed or right-handed and that this choice is fascinatingly related to the source of life on earth.13 It is not enough for amino acids to be arranged in the correct numbers, sequences, and in the required three-dimensional structures. The formation of a protein also requires that amino acid molecules with more than one arm be linked to each other only through certain arms. Such a bond is called a "peptide bond". Amino acids can make different bonds with each other; but proteins comprise those and only those amino acids that join together by "peptide" bonds. Research has shown that only 50% of amino acids combining at random combine with a peptide bond, and that the rest combine with different bonds that are not present in proteins. To function properly, each amino acid making up a protein must join with other amino acids through a peptide bond, just as it has to be chosen from among the left-handed ones. Unquestionably, there is no control mechanism to select and leave out the right-handed amino acids and personally make sure that each amino acid makes a peptide bond with the other. 
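The coin-toss analogy above can be checked with a few lines of arithmetic. The sketch below (plain Python, an illustration added here rather than anything from the encyclopaedia itself) works with base-10 logarithms, since 0.5 raised to the millionth power underflows an ordinary floating-point number:

```python
import math

# Probability of a fair coin landing heads a million times in a row,
# expressed as a base-10 exponent to avoid floating-point underflow.
tosses = 1_000_000
log10_prob = tosses * math.log10(0.5)  # log10(0.5 ** 1_000_000)

# About -301030, i.e. the odds are roughly 1 in 10^301030.
print(round(log10_prob))
```

Working in log space like this is the standard trick for probabilities far too small for direct floating-point representation.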
Under these circumstances, the probabilities of an average protein molecule comprising five hundred amino acids arranging itself in the correct quantities and in sequence, in addition to the probabilities of all of the amino acids it contains being only left-handed and combining using only peptide bonds, are as follows: As you can see above, the probability of the formation of a protein molecule comprising five hundred amino acids is "1" divided by a number formed by placing 950 zeros after a 1, a number incomprehensible to the human mind. This is only a probability on paper. Practically, such a possibility has "0" chance of realisation. In mathematics, a probability smaller than 1 in 10^50 is statistically considered to have a "0" probability of realisation. While the improbability of the formation of a protein molecule made up of five hundred amino acids reaches such an extent, we can further proceed to push the limits of the mind to higher levels of improbability. In the "haemoglobin" molecule, a vital protein, there are five hundred and seventy-four amino acids, which is a much larger number than that of the amino acids making up the protein mentioned above. Now consider this: in only one out of the billions of red blood cells in your body, there are "280,000,000" (280 million) haemoglobin molecules. The supposed age of the earth is not sufficient to afford the formation of even a single protein, let alone a red blood cell, by the method of "trial and error". The conclusion from all this is that evolution falls into a terrible abyss of improbability right at the stage of the formation of a single protein. Looking for Answers to the Generation of Life Well aware of the terrible odds against the possibility of life forming by chance, evolutionists were unable to provide a rational explanation for their beliefs, so they set about looking for ways to demonstrate that the odds were not so unfavourable. 
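The figure of 950 zeros quoted above can be reproduced from the text's own three factors for a 500-amino-acid protein: one correct sequence out of 20^500, all 500 residues left-handed (1 in 2^500), and all 499 links being peptide bonds (1 in 2^499). The snippet below is an illustrative check added here, again using base-10 logarithms because the raw numbers are far beyond floating-point range:

```python
import math

# The three improbability factors cited in the text, as base-10 exponents.
n = 500
sequence  = n * math.log10(20)        # correct sequence: 1 in 20^500
chirality = n * math.log10(2)         # all left-handed:  1 in 2^500
peptide   = (n - 1) * math.log10(2)   # all peptide bonds: 1 in 2^499

total = sequence + chirality + peptide  # exponent of the combined odds
print(round(total))  # ~951, the same order as the "1 in 10^950" cited above
```

The exact sum comes out at just over 951, so the text's "1 followed by 950 zeros" is the same order of magnitude.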
They designed a number of laboratory experiments to address the question of how life could generate itself from non-living matter. The best known and most respected of these experiments is the one known as the "Miller Experiment" or "Urey-Miller Experiment", which was conducted by the American researcher Stanley Miller in 1953. With the purpose of proving that amino acids could have come into existence by accident, Miller created an atmosphere in his laboratory that he assumed would have existed on primordial earth (but which later proved to be unrealistic) and he set to work. The mixture he used for this primordial atmosphere was composed of ammonia, methane, hydrogen, and water vapour. Miller knew that methane, ammonia, water vapour and hydrogen would not react with each other under natural conditions. He was aware that he had to inject energy into the mixture to start a reaction. He suggested that this energy could have come from lightning flashes in the primordial atmosphere and, relying on this supposition, he used an artificial electricity discharge in his experiments. Miller boiled this gas mixture at 100°C for a week and, in addition, he introduced an electric current into the chamber. At the end of the week, Miller analysed the chemicals that had been formed in the chamber and observed that three of the twenty amino acids, which constitute the basic elements of proteins, had been synthesised. This experiment aroused great excitement among evolutionists and they promoted it as an outstanding success. Encouraged by the thought that this experiment definitely verified their theory, evolutionists immediately produced new scenarios. Miller had supposedly proved that amino acids could form by themselves. Relying on this, they hurriedly hypothesised the following stages. According to their scenario, amino acids had later by accident united in the proper sequences to form proteins. 
Some of these accidentally formed proteins placed themselves in cell membrane-like structures, which "somehow" came into existence and formed a primitive cell. The cells united in time and formed living organisms. The greatest mainstay of the scenario was Miller's experiment. However, Miller's experiment was nothing but make-believe, and has since been proven invalid in many respects. The Invalidity of Miller's Experiment Nearly half a century has passed since Miller conducted his experiment. Although it has been shown to be invalid in many respects, evolutionists still advance Miller and his results as absolute proof that life could have formed spontaneously from non-living matter. When we assess Miller's experiment critically, without the bias and subjectivity of evolutionist thinking, however, it is evident that the situation is not as rosy as evolutionists would have us think. Miller set for himself the goal of proving that amino acids could form by themselves in earth's primitive conditions. Some amino acids were produced, but the conduct of the experiment conflicts with his goal in many ways, as we shall now see. Miller isolated the amino acids from the environment as soon as they were formed, by using a mechanism called a "cold trap". Had he not done so, the conditions of the environment in which the amino acids formed would immediately have destroyed the molecules. It is quite meaningless to suppose that some conscious mechanism of this sort was integral to earth's primordial conditions, which involved ultraviolet radiation, thunderbolts, various chemicals, and a high percentage of free oxygen. Without such a mechanism, any amino acid that did manage to form would immediately have been destroyed. The primordial atmospheric environment that Miller attempted to simulate in his experiment was not realistic. 
Nitrogen and carbon dioxide would have been constituents of the primordial atmosphere, but Miller disregarded this and used methane and ammonia instead. Why? Why were evolutionists insistent on the point that the primitive atmosphere contained high amounts of methane (CH4), ammonia (NH3), and water vapour (H2O)? The answer is simple: without ammonia, it is impossible to synthesise an amino acid. Kevin McKean talks about this in an article published in Discover magazine: After a long period of silence, Miller himself also confessed that the atmospheric environment he used in his experiment was not realistic. Another important point invalidating Miller's experiment is that there was enough oxygen to destroy all the amino acids in the atmosphere at the time when evolutionists thought that amino acids formed. This oxygen concentration would definitely have hindered the formation of amino acids. This situation completely negates Miller's experiment, in which he totally neglected oxygen. If he had used oxygen in the experiment, methane would have decomposed into carbon dioxide and water, and ammonia would have decomposed into nitrogen and water. On the other hand, since no ozone layer yet existed, no organic molecule could possibly have lived on earth because it was entirely unprotected against intense ultraviolet rays. In addition to a few amino acids essential for life, Miller's experiment also produced many organic acids with characteristics that are quite detrimental to the structures and functions of living things. If he had not isolated the amino acids and had left them in the same environment with these chemicals, their destruction or transformation into different compounds through chemical reactions would have been unavoidable. Moreover, a large number of right-handed amino acids also formed. 
The existence of these amino acids alone refuted the theory, even within its own reasoning, because right-handed amino acids are unable to function in the composition of living organisms and render proteins useless when they are involved in their composition. To conclude, the circumstances in which amino acids formed in Miller's experiment were not suitable for life forms to come into being. The medium in which they formed was an acidic mixture that destroyed and oxidised any useful molecules that might have been obtained. Evolutionists themselves actually refute the theory of evolution, as they are often wont to do, by advancing this experiment as "proof". If the experiment proves anything, it is that amino acids can only be produced in a controlled laboratory environment where all the necessary conditions have been specifically and consciously designed. That is, the experiment shows that what brings life (even the "near-life" of amino acids) into being cannot be unconscious chance, but rather conscious will – in a word, Creation. This is why every stage of Creation is a sign proving to us the existence and might of Allah. The Miraculous Molecule: DNA The theory of evolution has been unable to provide a coherent explanation for the existence of the molecules that are the basis of the cell. Furthermore, developments in the science of genetics and the discovery of the nucleic acids (DNA and RNA) have produced brand-new problems for the theory of evolution. In 1953, the work of two scientists on DNA, James Watson and Francis Crick, launched a new era in biology. Many scientists directed their attention to the science of genetics. Today, after years of research, scientists have largely mapped the structure of DNA. Here, we need to give some very basic information on the structure and function of DNA: The molecule called DNA, which exists in the nucleus of each of the 100 trillion cells in our body, contains the complete construction plan of the human body. 
Information regarding all the characteristics of a person, from the physical appearance to the structure of the inner organs, is recorded in DNA by means of a special coding system. The information in DNA is coded within the sequence of four special bases that make up this molecule. These bases are specified as A, T, G, and C according to the initial letters of their names. All the structural differences among people depend on the variations in the sequence of these bases. There are approximately 3.5 billion nucleotides, that is, 3.5 billion letters in a DNA molecule. The DNA data pertaining to a particular organ or protein is included in special components called "genes". For instance, information about the eye exists in a series of special genes, whereas information about the heart exists in quite another series of genes. The cell produces proteins by using the information in all of these genes. Amino acids that constitute the structure of the protein are defined by the sequential arrangement of three nucleotides in the DNA. At this point, an important detail deserves attention. An error in the sequence of nucleotides making up a gene renders the gene completely useless. When we consider that there are 200 thousand genes in the human body, it becomes more evident how impossible it is for the millions of nucleotides making up these genes to form by accident in the right sequence. An evolutionist biologist, Frank Salisbury, comments on this impossibility by saying: The number 4^1000 is equivalent to 10^600. We obtain this number by placing 600 zeros after a 1. As a 1 followed by 12 zeros indicates a trillion, a figure with 600 zeros is indeed a number that is difficult to grasp. Evolutionist Prof. Ali Demirsoy was forced to make the following admission on this issue: In addition to all these improbabilities, DNA can barely be involved in a reaction because of its double-chained spiral shape. This also makes it impossible to think that it can be the basis of life. 
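Salisbury's conversion of 4^1000 into a power of ten can also be verified in one line. The snippet below is an illustrative check added here (a gene of 1,000 nucleotides, each one of 4 bases, has 4^1000 possible variants):

```python
import math

# Base-10 exponent of 4^1000, the number of variants of a
# 1,000-nucleotide gene with 4 possible bases per position.
exponent = 1000 * math.log10(4)
print(round(exponent))  # ~602, so 4^1000 is on the order of 10^600
```

The exact value is just over 602, consistent with the rounded 10^600 figure quoted in the text.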
Moreover, while DNA can replicate only with the help of some enzymes that are actually proteins, the synthesis of these enzymes can be realised only by the information coded in DNA. As they both depend on each other, either they have to exist at the same time for replication, or one of them has had to be "created" before the other. American microbiologist Jacobson comments on the subject: The quotation above was written two years after the disclosure of the structure of DNA by James Watson and Francis Crick. Despite all the developments in science, this problem remains unsolved for evolutionists. To sum up, the need for DNA in reproduction, the necessity of the presence of some proteins for reproduction, and the requirement to produce these proteins according to the information in the DNA entirely demolish evolutionist theses. Two German scientists, Junker and Scherer, explained that the synthesis of each of the molecules required for chemical evolution necessitates distinct conditions, and that the probability of these materials, which have theoretically very different methods of formation, ever coming together is zero: In short, the theory of evolution is unable to prove any of the evolutionary stages that allegedly occur at the molecular level. To summarise what we have said so far, neither amino acids nor their products, the proteins making up the cells of living beings, could ever be produced in any so-called "primitive atmosphere" environment. Moreover, factors such as the incredibly complex structure of proteins, their right-hand, left-hand features, and the difficulties in the formation of peptide bonds are just parts of the reason why they will never be produced in any future experiment either. Even if we suppose for a moment that proteins somehow did form accidentally, that would still have no meaning, for proteins are nothing at all on their own: they cannot themselves reproduce. 
Protein synthesis is only possible with the information coded in DNA and RNA molecules. Without DNA and RNA, it is impossible for a protein to reproduce. The specific sequence of the twenty different amino acids encoded in DNA determines the structure of each protein in the body. However, as has been made abundantly clear by all those who have studied these molecules, it is impossible for DNA and RNA to form by chance. The Fact of Creation With the collapse of the theory of evolution in every field, prominent names in the discipline of microbiology today admit the fact of creation and have begun to defend the view that everything is created by a conscious Creator as part of an exalted creation. This is already a fact that people cannot disregard. Scientists who can approach their work with an open mind have developed a view called "intelligent design". Michael J. Behe, one of the foremost of these scientists, states that he accepts the absolute being of the Creator and describes the impasse of those who deny this fact: The result of these cumulative efforts to investigate the cell – to investigate life at the molecular level – is a loud, clear, piercing cry of "design!" The result is so unambiguous and so significant that it must be ranked as one of the greatest achievements in the history of science. This triumph of science should evoke cries of "Eureka" from ten thousand throats. But, no bottles have been uncorked, no hands clapped. Instead, a curious, embarrassed silence surrounds the stark complexity of the cell. When the subject comes up in public, feet start to shuffle, and breathing gets a bit laboured. In private, people are a bit more relaxed; many explicitly admit the obvious but then stare at the ground, shake their heads, and let it go like that. Why does the scientific community not greedily embrace its startling discovery? Why is the observation of design handled with intellectual gloves? 
the dilemma is that while one side of the elephant is labelled intelligent design, the other side must be labelled God.19 Today, many people are not even aware that they are in a position of accepting a body of fallacy as truth in the name of science, instead of believing in Allah. Those who do not find the sentence "Allah created you from nothing" scientific enough can believe that the first living being came into being by thunderbolts striking a "primordial soup" billions of years ago. As we have described elsewhere in this book, the balances in nature are so delicate and so numerous that it is entirely irrational to claim that they developed "by chance". No matter how much those who cannot set themselves free from this irrationality may strive, the signs of Allah in the heavens and the earth are completely obvious and they are undeniable. Allah is the Creator of the heavens, the earth and all that is in between. The signs of His being have encompassed the entire universe. 1. Charles Darwin, the Origin of Species: By Means of Natural Selection or the Preservation of Favoured Races in the Struggle for Life, London: Senate Press, 1995, p. 134. 2. Derek A. Ager. "The Nature of the Fossil Record." Proceedings of the British Geological Association, vol. 87, no. 2, (1976), p. 133. 3. T.N. George, "Fossils in Evolutionary Perspective", Science Progress, vol.48, (January 1960), p.1-3 4. Richard Monestarsky, Mysteries of the Orient, Discover, April 1993, p.40. 5. Stefan Bengston, Nature 345:765 (1990). 6. Earnest A. Hooton, Up From the Ape, New York: McMillan, 1931, p.332. 7. Stephen Jay Gould, Smith Woodward's Folly, New Scientist, 5 April, 1979, p. 44. 8. Charles E. Oxnard, the Place of Australopithecines in Human Evolution: Grounds for Doubt, Nature, No. 258, p. 389. 9. Richard Leakey, the Making of Mankind, London: Sphere Books, 1981, p. 116 10. Eric Trinkaus, Hard Times Among the Neanderthals, Natural History, No. 87, December 1978, p. 10, R.L. 
Holloway, "The Neanderthal Brain: What Was Primitive?", American Journal of Physical Anthropology Supplement, no. 12, 1991, p. 94. 11. Ali Demirsoy, Kalitim ve Evrim (Inheritance and Evolution), Ankara: Meteksan Yayinlari, 1984, p. 61. 12. Ali Demirsoy, Kalitim ve Evrim (Inheritance and Evolution), Ankara: Meteksan Yayinlari, 1984, p. 61. 13. Fabbri Britannica Science Encyclopaedia, vol. 2, no. 22, p. 519. 14. Kevin McKean, Bilim ve Teknik, no. 189, p. 7. 15. Frank B. Salisbury, "Doubts about the Modern Synthetic Theory of Evolution", American Biology Teacher, September 1971, p. 336. 16. Ali Demirsoy, Kalitim ve Evrim (Inheritance and Evolution), Ankara: Meteksan Publishing Co., 1984, p. 39. 17. Homer Jacobson, "Information, Reproduction and the Origin of Life", American Scientist, January 1955, p. 121. 18. Reinhard Junker & Siegfried Scherer, Entstehung und Geschichte der Lebewesen, Weyel, 1986, p. 89. 19. Michael J. Behe, Darwin's Black Box, New York: Free Press, 1996, pp. 232-233.
<urn:uuid:fe9bd018-5373-4204-9f99-710c805250b0>
CC-MAIN-2013-20
http://harunyahya.com/en/books/531/Allah_Is_Known_Through_Reason/chapter/100/Evolution_Deceit
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.960514
10,884
2.859375
3
[ "climate", "nature" ]
{ "climate": [ "carbon dioxide", "methane" ], "nature": [ "habitat" ] }
{ "strong": 3, "weak": 0, "total": 3, "decision": "accepted_strong" }
Gasoline prices have increased rapidly during the past several years, pushed up mainly by the sharply rising price of oil. A gallon of gasoline in the US rose from $1.50 in 2002 to $2 in 2004 to $2.50 in 2006 to over $4 at present. Gasoline prices almost trebled during these 6 years, compared to very little change in nominal gas prices during the prior fifteen years. The US federal tax on gasoline has remained at 18.4 cents per gallon during this period of rapid growth in gasoline prices, while state excise taxes add another 21.5 cents per gallon. In addition, many local governments levy additional sales and other taxes on gasoline. Gasoline taxes have not risen much even as the price of gasoline exploded upward. The price of gasoline is much lower than in other rich countries mainly because American taxes are far smaller. For example, gasoline taxes in Germany and the United Kingdom amount to about $3 per gallon. Some economists and environmentalists have called for large increases in federal, state, and local taxes to make them more comparable to gasoline taxes in other countries. Others want these taxes to rise by enough so that they would at least have kept pace with the sharply rising pre-tax fuel prices. At the same time, two presidential candidates, Hillary Clinton and John McCain, proposed a temporary repeal of the federal tax during this summer in order to give consumers a little relief from the higher gas prices. We discuss the optimal tax on gasoline, and how the sharp increase in gas prices has affected its magnitude. Taxes on gasoline are a way to induce consumers to incorporate the "external" damages to others into their decisions about how much and where to drive. These externalities include the effects of driving on local and global pollution, such as the contribution to global warming from the carbon emitted into the atmosphere by burnt gasoline. 
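The quoted price points imply a steep compound growth rate. As a quick illustration (a sketch using the figures above; the calculation is ours, not the blog's), the implied average annual increase works out to roughly 18 percent:

```python
# Illustrative sketch: compound annual growth rate (CAGR) implied by
# the gasoline prices quoted above. The price figures come from the
# text; the arithmetic is ours.

price_2002 = 1.50   # dollars per gallon in 2002
price_2008 = 4.00   # "over $4 at present" (2008)
years = 6

growth_factor = price_2008 / price_2002           # about 2.67x, i.e. "almost trebled"
annual_rate = growth_factor ** (1 / years) - 1    # CAGR over the six years

print(f"growth factor: {growth_factor:.2f}x")
print(f"implied annual growth rate: {annual_rate:.1%}")
```

By contrast, the 18.4-cent federal tax stayed flat over the same period, which is why taxes shrank as a share of the pump price.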
One other important externality is the contribution of additional driving to road congestion, which slows the driving speeds of everyone and increases the time it takes to go a given distance. Others include automobile accidents that injure drivers and pedestrians, and the effect of using additional gasoline on the degree of dependence on imported oil from the Middle East and other not very stable parts of the world. A careful 2007 study by authors from Resources for the Future evaluates the magnitudes of all these externalities from driving in the US (see Harrington, Parry, and Walls, "Automobile Externalities and Policies", Journal of Economic Literature, 2007, pp. 374-400). They estimate the total external costs of driving at 228 cents per gallon of gas used, or at 10.9 cents per mile driven, for the typical car owned by American drivers. Their breakdown of this total among different sources is interesting and a little surprising. They attribute only 6 cents of the total external cost to the effects of gasoline consumption on global warming through the emission of carbon into the atmosphere from the burning of gasoline, and 12 cents to the increased dependency on imported oil. Perhaps their estimate of only 6 cents per gallon is a large underestimate of the harmful effects of gasoline use on global warming. Yet even if we treble their estimate, that only raises the total costs of gasoline use due to the effects on global warming by 12 cents per gallon. That still leaves the vast majority of the external costs of driving to other factors. They figure that local pollution effects amount to 42 cents per gallon, which makes these costs much more important than even the trebled cost of global warming. According to their estimates, still more important are the costs due to congestion and accidents, at 105 cents and 63 cents per gallon, respectively. 
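The component estimates quoted above can be tallied to check that they reproduce the study's 228-cent total, and to see how the "trebled" global-warming scenario changes it (a sketch; the cent figures are from the text, the tally itself is illustrative):

```python
# Sketch: summing the Harrington-Parry-Walls external-cost estimates
# quoted above (cents per gallon of gasoline). Figures are from the
# text; the code is illustrative, not part of the original study.

external_costs = {
    "global warming":   6,
    "oil dependency":  12,
    "local pollution": 42,
    "congestion":     105,
    "accidents":       63,
}

total = sum(external_costs.values())
print(f"total external cost: {total} cents/gallon")  # 228, matching the study

# The text's scenario: treble the global-warming component (6 -> 18).
adjusted = total + 2 * external_costs["global warming"]
print(f"with trebled warming estimate: {adjusted} cents/gallon")  # 240
```

Even under the trebled scenario, warming and oil dependency together account for only 30 of the 240 cents; congestion and accidents dominate the total.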
Their figure for the cost of traffic accidents is likely too high (as the authors recognize) because it includes the cost in damages to property and person of single-vehicle accidents, as when a car hits a tree. Presumably, single-vehicle accidents are not true externalities because drivers and their passengers would consider their possibility and internalize them into their driving decisions. Moreover, the large effect of drunk driving on the likelihood of accidents should be addressed separately from a gasoline tax by directly punishing drunk drivers, rather than also punishing sober drivers who are far less likely to get into accidents. On the surface, these calculations suggest that American taxes on gasoline, totaling about 45 cents per gallon across all levels of government, are much too low. However, the federal tax of 18.4 cents per gallon is almost exactly equal to their figure of 18 cents per gallon for the external costs of global warming and oil dependency. To be sure, a trebled estimate for global warming would bring their figure up to 30 cents per gallon. However, the federal government also taxes driving through its mandated fuel efficiency standards for cars, although this is an inefficient way to tax driving since it taxes the type of car rather than driving itself. Still, the overall level of federal taxes does not fall much short, if at all, of the adjusted estimate of 30 cents per gallon of damages due to the effects of gasoline use on global warming and oil dependency. Any shortfall in taxes would be at the state and local levels in combating externalities due to local pollution effects, and to auto accidents and congestion on mainly local roads. Here too, however, the discrepancy between actual and optimal gasoline taxes is far smaller than it may seem, and not only because single-vehicle accidents are included in their estimate of the cost of car accidents, and accidents due to drunk driving should be discouraged through punishments to drunk drivers. 
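The paragraph's comparison can be made explicit. In the sketch below, the tax figures and externality components come from the text; assigning the warming and oil-dependency components to the federal level is the paragraph's own framing, and the code is only an illustration of the arithmetic:

```python
# Sketch of the comparison above (all values in cents per gallon,
# taken from the text; the arithmetic is illustrative).

federal_tax = 18.4
total_tax = 45.0                        # roughly, across all levels of government
state_local_tax = total_tax - federal_tax

warming_and_oil = 6 + 12                # externalities the federal tax might offset
warming_trebled_and_oil = 3 * 6 + 12    # the text's adjusted scenario: 30

print(f"federal tax {federal_tax} vs. warming+oil externality "
      f"{warming_and_oil} (or {warming_trebled_and_oil} if warming is trebled)")
print(f"remaining state/local taxes: about {state_local_tax:.1f}")
```

The federal tax of 18.4 cents nearly matches the unadjusted 18-cent figure, so any meaningful shortfall sits at the state and local levels.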
One important reason is that congestion should be reduced not by general gasoline taxes, but by special congestion taxes (as used in London and a few other cities) that vary in amount with the degree of congestion (see our discussion of congestion taxes on February 12, 2006). Congestion taxes are a far more efficient way to reduce congestion than are general taxes on gasoline, which apply even when congestion is slight. In addition, and often overlooked, the sharp rise in pre-tax gasoline prices has partly accomplished the local pollution and auto accident goals that would be achieved by higher gas taxes. For higher prices have cut driving, just as taxes would, and will cut driving further in the future as consumers continue to adjust the amount and timing of their driving to gasoline that costs more than $4 a gallon. Reduced driving will lower pollution and auto accidents by reducing the number of cars on the road during any time period, especially during heavily traveled times when pollution and accidents are more common. The effects of high gas prices in reducing congestion, local pollution, and accident externalities could be substantial. These authors estimate the size of local driving externalities, aside from congestion costs, at 105 cents per gallon. Even after the sharp run-up in gas prices, this may still exceed the 28 cents per gallon of actual state and local taxes, but the gap probably is small. It surely is a lot smaller than it was before gas prices exploded on the back of the climb in the cost of oil. In effect, by reducing driving, higher gasoline prices have already done much of the work in reducing externalities that bigger gas taxes would have done when prices were lower.
<urn:uuid:f233a3f5-7030-4ac7-86a7-3c91855cbc39>
CC-MAIN-2013-20
http://www.becker-posner-blog.com/2008/07/should-us-taxes-on-gasoline-be-higher-becker.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.963967
1,427
2.953125
3
[ "climate" ]
{ "climate": [ "global warming" ], "nature": [] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
Like butterflies? Enjoy birds? Garden with Vermont plants. Native plants can help turn your yard into a wonderful wildlife preserve. They provide butterflies, birds and bees the habitat and food they need to survive and reproduce, in turn supporting other forms of wildlife that get energy directly from plants or by eating something that has already eaten a plant. When land is converted to lawn or planted with non-native plants imported from Europe or Asia, native insects have a hard time finding something to eat because they are adapted to particular plant species for feeding, laying eggs and hiding from predators. Insect diversity and populations drop significantly when native plant species disappear - gardens with native plants have three times as many insect species and 35 times as much insect biomass as those with non-natives. Smaller populations mean insects are not fulfilling the critical role they play in an ecosystem's food web. Here are a few things you can do to create a wildlife and insect haven in your yard: Plant native perennials, shrubs and trees. They offer birds the best variety and most abundant source of seasonal seeds, fruits and insects. Click here for a list of native Vermont plants and the insects they support. Replace lawns with woodlands, meadows, native gardens and shrubs. Reduce the need for labor-intensive, gas-consuming lawn care by planting low-maintenance, native vegetation. Provide pollinators with a range of species that bloom throughout the season. Perennial beds do not include the seasonal variety that bees and other pollinators need. Include late-blooming native asters, goldenrods and sunflowers in your borders and meadows. Whenever possible, leave leaf litter alone. Many insects overwinter in leaf litter, so leaving it on the ground allows many species of moths and butterflies to make it through Vermont’s long winters. 
For example, leaf litter frequently contains moth eggs, so those moths cannot survive if leaf litter is bagged and sent to the landfill. Maintain naturally "messy" areas - leave woody debris alone. Many bees live alone or in colonies in twigs, stumps, dead branches or wood, and some birds nest on the ground and forage in leaf litter for insects.
<urn:uuid:146c8a80-7758-4fa5-a06a-13aa0490050f>
CC-MAIN-2013-20
http://www.vtinvasives.org/plants/invasive-free-gardening/go-native
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.940799
459
3.578125
4
[ "nature" ]
{ "climate": [], "nature": [ "ecosystem", "habitat" ] }
{ "strong": 2, "weak": 0, "total": 2, "decision": "accepted_strong" }
Maui’s death in set net takes species one step closer to extinction WWF-New Zealand’s Executive Director Chris Howe says: “This death of a Maui’s dolphin is a tragedy for a species that is down to only about 100 individuals. Set nets in Maui’s habitat continue to pose an unacceptable risk to these dolphins. Until we get set nets out of the shallow coastal waters where they live, more Maui’s will needlessly get entangled and drown. The species could be extinct within our generation without urgent action.” Maui’s dolphins, a subspecies of the South Island’s Hector’s dolphins, are found only off the west coast of the North Island. They are the world’s rarest marine dolphin, classified internationally as critically endangered. The Ministry of Agriculture and Forestry (MAF) yesterday released a statement saying they believe that the dead animal was a Maui’s, not a Hector’s dolphin as originally reported, because of the location of its death. The dead dolphin was returned to the sea by the fisher. MAF claimed the death “occurred outside of the current known range of Maui’s dolphins, as well as outside the current restrictions.” However, there have been independent verified sightings of Maui’s dolphins in the coastal waters off Taranaki in recent years, and WWF-New Zealand is urging MAF and the government to extend protection measures throughout the Maui’s historical range to give the species the best chance of survival and recovery. Despite fishing restrictions announced in 2008, Maui’s are not currently protected throughout their entire range. WWF is calling on the government to extend protection measures into harbours and the southern extent of their current range, along with better monitoring and policing of regulations. WWF-New Zealand is urging all members of the public who see a Maui’s dolphin – noted for their rounded dorsal fin – to report it to a special sightings hotline, 0800 4 MAUIS. 
Mr Howe says: “Every sighting of one of these rare and precious dolphins matters. The more we know about where Maui’s range and their movements, the better we can protect them. “WWF will continue to speak out on behalf of all those New Zealanders who want to stop the extinction of Maui’s dolphins, and urge the government to extend the current protection measures before it is too late.”
<urn:uuid:3ee25cce-6153-43bc-b473-20713cecef3e>
CC-MAIN-2013-20
http://wwf.panda.org/who_we_are/wwf_offices/new_zealand/?203366/Mauis-death-in-set-net-takes-species-one-step-closer-to-extinction
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.947137
524
2.65625
3
[ "nature" ]
{ "climate": [], "nature": [ "habitat" ] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
There is an overwhelming amount of news coverage related to the state of the economy, international politics, and various domestic programs. The environment and environmental policy, on the other hand, are being ignored. When environmental policy is discussed, many citizens and representatives champion the need for substantial policy reform. However, when actual policies are introduced, they are typically ignored or delayed. Due to the current state of the environment, politicians need to place a higher priority on environmental policy. First, the public needs to understand exactly what environmental policy is and how it affects them. Environmental policy is defined as “any action deliberately taken to manage human activities with a view to prevent, reduce, or mitigate harmful effects on nature and natural resources, and ensuring that man-made changes to the environment do not have harmful effects on humans or the environment” (McCormick 2001). It generally covers air and water pollution, waste management, ecosystem management, biodiversity protection, and the protection of natural resources, wildlife, and endangered species. Issues like these affect everyone across the globe and cannot be ignored. Environmental policy became a national issue under Theodore Roosevelt, when National Parks were established in hopes of preserving wildlife for future generations. The modern environmental movement began in the 1970s during the Nixon administration, when a large amount of environmental legislation started rolling out. Nixon signed the National Environmental Policy Act (NEPA), which established a national policy promoting the enhancement of the environment and set requirements for all government agencies to prepare Environmental Assessments and Environmental Impact Statements. Nixon also established the President’s Council on Environmental Quality. 
Legislation of the time established the Environmental Protection Agency (EPA), the Clean Air Act, and the Federal Water Pollution Control Act. The EPA has received a lot of notoriety recently, mostly for Republicans’ desire to get rid of it, though it is still an agency vital to protecting the environment. Rising gas prices in the 1970s inspired a wave of greener vehicles, a phenomenon witnessed again in 2008. High energy costs motivated Jimmy Carter to install solar panels on the White House roof, a clear message that helping the environment was everyone’s responsibility. Focus on environmental policy began dwindling in the 1980s, though, under the Reagan administration. As the Soviet Union began to weaken and fall, the restructuring of Europe became a priority and the environment quickly slipped to the backburner. Many of the environmental issues that the public faced in the late twentieth century are still issues today, including climate change, lack of fossil fuels, sustainable energy solutions, ozone depletion, and resource depletion. Today, with the plethora of issues currently affecting the environment, it needs to become a priority again. As gas prices reached record highs in 2007 and 2008, there was a surge in green startups to help companies struggling with high fuel costs. As fuel costs decreased, the focus on these green startups decreased as well. However, now that gas prices are again on the rise, there will likely be a green resurgence in the market. These green initiatives should not rise and fall with the cost of gas. Environmental issues have impacts and implications far greater than the bottom dollar. Rising sea levels, droughts, and other extreme weather events have enormous human impacts, killing or displacing scores of people each year. According to an Oxfam International study drawing on data from a university in Belgium, the earth is currently experiencing approximately 500 natural disasters a year, affecting over 250 million people (Gutierrez 2008). 
It is paramount that our government focus significant attention and funding on environmental policy. If we continue to disregard the environment, the planet might be degraded to the point where it is no longer habitable. We only have one Earth; we need to do our best to preserve it. Gutierrez, David. “Natural Disasters Up More Than 400 Percent in Two Decades.” NaturalNews.com, June 5, 2008. Accessed March 10, 2012. http://www.naturalnews.com/023362.html.
<urn:uuid:601a56d2-b3ec-413e-80c1-296b8bc9080e>
CC-MAIN-2013-20
http://www.nupoliticalreview.com/?p=1545
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.939738
812
3.4375
3
[ "climate", "nature" ]
{ "climate": [ "climate change", "extreme weather" ], "nature": [ "biodiversity", "ecosystem", "endangered species" ] }
{ "strong": 5, "weak": 0, "total": 5, "decision": "accepted_strong" }
Antoine Pierre Joseph Marie Barnave Barnave, Antoine Pierre Joseph Marie (äNtwänˈ pyĕr zhōzĕfˈ märēˈ bärnävˈ), 1761–93, French revolutionary. A member of the States-General of 1789 from Grenoble, he was a brilliant speaker and leader of the Jacobins. After Louis XVI and Marie Antoinette fled to Varennes in 1791, Barnave believed that the king might finally be persuaded to accept a constitutional government, thereby avoiding the impending political anarchy. He began a correspondence with Marie Antoinette, encouraging her to convert the monarchy to the Revolution; this correspondence was later used as evidence of Barnave's treasonous activities. In July, 1791, he spoke in the assembly in favor of the restoration of the king as a constitutional monarch and appealed for an end to the Revolution. He retired to Grenoble, and was tried for treason and guillotined (1793). His history of the French Revolution, written during his imprisonment, is considered a major work that tried to put the Revolution into a broader political and social framework. See E. Chill, Power, Property, and History: Barnave's Introduction to the French Revolution and Other Writings (1971). The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
<urn:uuid:c91386dc-6bf4-4c36-85b2-28ef3b7a2664>
CC-MAIN-2013-20
http://www.infoplease.com/encyclopedia/people/barnave-antoine-pierre-joseph-marie.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701459211/warc/CC-MAIN-20130516105059-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.915327
306
3.421875
3
[ "nature" ]
{ "climate": [], "nature": [ "restoration" ] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
Fragmented across Central Asia, China, and Tibet, snow leopards (Uncia uncia) roam through mountain corridors and montane habitats. Today, scientists think there are between 4,000 and 7,500 snow leopards in the wild, with most of them living within China. But accurate estimates are difficult to make when dealing with this elusive master of camouflage in its unwelcoming home. Snow leopards have developed many interesting characteristics that help them survive in the cold, mountainous terrain they call home. Their long tails help them balance on steep, rocky terrain, and their large paws and furry feet act as snow shoes in deep mountain snows. Living at heights of 2,000 to 6,000 meters, they also have special nasal cavities that help them breathe low-oxygen mountain air. Did you know? Snow leopards can leap vertically 6-10 meters (20-30 feet). This helps them hunt prey on steep, rocky mountainsides. Because food is scarce at such high altitudes, snow leopards are solitary animals, hunting and eating whatever meat they are able to find – including deer, marmots, boars and farm livestock. This also means, however, that they are not as possessive of their home ranges. In areas where food is more plentiful, a snow leopard might have a small range of 12-15 km2, whereas if food is not available, a snow leopard may roam up to 40 km2 to find sufficient food. Threats to the Snow Leopard Snow leopards are hunted for their bones, which are in demand to replace tiger bones in Chinese medicinal traditions. Additionally, in eastern Asia, fur coats and similar items made from snow leopard skins have become increasingly popular. Habitat loss and changing patterns of land use have also threatened snow leopard populations. In the Central Asian mountains, the animals that snow leopards prey on have been hunted to dangerously low numbers, threatening the food supply of these sleek animals. 
Decreasing wild food supplies have often driven snow leopards to small farms, resulting in unfortunate clashes between these threatened animals and local farmers. Although some conservation projects have been put in place to conserve the habitats and environments for the snow leopard populations, additional projects focusing on local education and corridor-building would make an important contribution to snow leopard conservation.
<urn:uuid:ae4cebe0-ba30-4a86-9dec-1562bcd008ae>
CC-MAIN-2013-20
http://www.conservation.org/learn/biodiversity/species/profiles/leopards/Pages/snow_leopards.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.95107
485
3.765625
4
[ "nature" ]
{ "climate": [], "nature": [ "conservation", "habitat" ] }
{ "strong": 2, "weak": 0, "total": 2, "decision": "accepted_strong" }
PROVIDENCE, RI -- Shifting glaciers and exploding volcanoes aren't confined to Mars' distant past, according to two new reports in the journal Nature. Glaciers moved from the poles to the tropics 350,000 to 4 million years ago, depositing massive amounts of ice at the base of mountains and volcanoes in the eastern Hellas region near the planet's equator, based on a report by a team of scientists analyzing images from the Mars Express mission. Scientists also studied images of glacial remnants on the western side of Olympus Mons, the largest volcano in the solar system. They found additional evidence of recent ice formation and movement on these tropical mountain glaciers, similar to ones on Mount Kilimanjaro in Africa. In a second report, the international team reveals previously unknown traces of a major eruption of Hecates Tholus less than 350 million years ago. In a depression on the volcano, researchers found glacial deposits estimated to be 5 to 24 million years old. James Head, professor of geological sciences at Brown University and an author on the Nature papers, said the glacial data suggests recent climate change in Mars' 4.6-billion-year history. The team also concludes that Mars is in an "interglacial" period. As the planet tilts closer to the sun, ice deposited in lower latitudes will vaporize, changing the face of the Red Planet yet again. Discovery of the explosive eruption of Hecates Tholus provides more evidence of recent Mars rumblings. In December, members of the same research team revealed that calderas on five major Mars volcanoes were repeatedly active as little as 2 million years ago. The volcanoes, scientists speculated, may even be active today. "Mars is very dynamic," said Head, lead author of one of the Nature reports. "We see that the climate change and geological forces that drive evolution on Earth are happening there." 
Head is part of a 33-institution team analyzing images from Mars Express, launched in June 2003 by the European Space Agency. The High Resolution Stereo Camera, or HRSC, on board the orbiter is producing 3-D images of the planet's surface. These sharp, panoramic, full-color pictures provided fodder for a third Nature report. In it, the team offers evidence of a frozen body of water, about the size and depth of the North Sea, in southern Elysium. A plethora of ice and active volcanoes could provide the water and heat needed to sustain basic life forms on Mars. Fresh data from Mars Express – and the announcement that live bacteria were found in a 30,000-year-old chunk of Alaskan ice – is fueling discussion about the possibility of past, even present, life on Mars. In a poll taken at a European Space Agency conference last month, 75 percent of scientists believe bacteria once existed on Mars and 25 percent believe it might still survive there. Head recently traveled to Antarctica to study glaciers, including bacteria that can withstand the continent's dry, cold conditions. The average temperature on Mars is estimated to be 67 degrees below freezing. Similar temperatures are clocked in Antarctica's frigid interior. "We're now seeing geological characteristics on Mars that could be related to life," Head said. "But we're a long way from knowing that life does indeed exist. The glacial deposits we studied would be accessible for sampling in future space missions. If we had ice to study, we would know a lot more about climate change on Mars and whether life is a possibility there." The European Space Agency, the German Aerospace Center and the Freie Universitaet in Berlin built and flew the HRSC and processed data from the camera. The National Aeronautics and Space Administration (NASA) supported Head's work.
<urn:uuid:3b3957fd-03ff-4083-b0d3-e93cff20fba9>
CC-MAIN-2013-20
http://www.eurekalert.org/pub_releases/2005-03/bu-fai031405.php
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.933385
821
3.703125
4
[ "climate" ]
{ "climate": [ "climate change" ], "nature": [] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
Teachers, register your class or environmental club in our annual Solar Oven Challenge! Registration begins in the fall, and projects must be completed in the spring to be eligible for prizes and certificates. Who can participate? GreenLearning's Solar Oven Challenge is open to all Canadian classes. Past challenges have included participants from grade 3 through to grade 12. Older students often build solar ovens as part of the heat unit in their Science courses. Other students learn about solar energy as a project in an eco-class or recycling club. How do you register? 1. Registration is now open to Canadian teachers. To register, send an email to Gordon Harrison at GreenLearning. Include your name, school, school address and phone number, and the grade level of the students who will be participating. 2. After you register, you will receive the Solar Oven Challenge Teacher's Guide with solar oven construction plans. Also see re-energy.ca for construction plans, student backgrounders, and related links on solar cooking and other forms of renewable energy. At re-energy.ca, you can also see submissions, photos and recipes from participants in past Solar Oven Challenges. 3. Build, test and bake with solar ovens! 4. Email us photos and descriptions of your creations by the deadline (usually the first week of June). 5. See your recipes and photos showcased at re-energy.ca. Winners will be listed there and in GreenLearning News.
<urn:uuid:c9af9ded-f40a-4b50-aa4b-6454d2943f75>
CC-MAIN-2013-20
http://www.greenlearning.ca/re-energy/solar-oven-challenge
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.92865
301
2.71875
3
[ "climate" ]
{ "climate": [ "renewable energy" ], "nature": [] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
Entomological Survey of Rio Bravo Conservation and Management Area, Belize Peter Kovarik, John Shuey, and Chris Carlton During the mid-to-late 1990s, lepidopterist John Shuey became interested in testing the widely touted concept that insect communities are useful in evaluating impacts to ecological integrity in tropical forest communities. At that time the literature had been advocating insects, especially butterflies, as appropriate indicators for assessing the impacts of specific management activities on tropical forests. People were already using insects as indicators, despite the fact that this simple premise had yet to be tested. John enlisted coleopterist Peter Kovarik as a partner in this study. The two decided that in addition to butterflies, scarabaeine scarabs and hister beetles would become part of the study. These taxa were chosen in part because of their susceptibility to bait and/or passive trapping techniques. In fact, the beauty of this study was that we envisioned relatively little active collecting. This way we could quasi-enjoy ourselves while our traps were filling with insects! The site chosen for our study was Rio Bravo Conservation Area, located in Orange Walk District, Belize. Rio Bravo is a 230,000-acre nature preserve in the northwestern corner of the country, near the corner where Belize, Guatemala, and Mexico meet. Rio Bravo is a beautiful mosaic of semitropical moist forest, savanna, and wetland habitats with over 230 species of trees, 70 species of mammals, and approximately 400 species of birds. Among large animals, the area has healthy populations of jaguar, puma, Baird's tapir, and two species of monkeys. There are also significant Mayan archeological sites, and the area has a colorful recent history of mahogany logging, chicle extraction, and marijuana farming. This preserve is administered by Programme for Belize (PfB), a private conservation organization that holds Rio Bravo in trust for the people of Belize. 
PfB integrates elements of sustainable forestry and natural product harvesting, ecotourism, and education into a single comprehensive long-term management plan. Rio Bravo turned out to be an ideal setting for our study. In addition to extensive areas of oak-pine savanna, there were huge expanses of contiguous limestone rainforest without excessive topographic variability. Within this expanse of fairly homogeneous habitat, there were areas along trails or roads through the forest that had been fairly recently logged. We sought to embed our sample sites both in areas that had been recently disturbed by man and in areas that had been relatively untouched for quite some time. Because Rio Bravo is a preserve and PfB encourages research activities, we felt confident that human impact on both the ecosystem and our sampling devices would be minimal. Thus, it seemed probable that any differences we observed in insect communities between our sample sites would be due primarily to forest integrity rather than extraneous factors. Our core studies were conducted once during the dry season and twice during the rainy season, and were completed in 1996. Since neither John nor Pete knew much about scarabaeine scarabs, expert Bill Warner was coaxed into participating by promising him many scarabs with no strings attached; Bill eventually joined us in Rio Bravo in July 1996. We are still awaiting some of our quantitative insect data, but we have botanical assessments of each forest tract, and data from the sample events for butterflies and hister beetles, in hand. As soon as the last of our insect data is cleaned up, we will finish off the ecological end of this work. Like many studies, this one began small and focused and eventually expanded into a full-blown entomological survey.
Coleopterist Chris Carlton became the third major player in our survey. Chris has made several trips to Rio Bravo, done some intensive leaf litter work, and set up many a flight intercept trap in search of tiny pselaphid beetles. We collected massive numbers of insects and have spent the last several years processing our samples and farming specimens out to various specialists, who have generously provided us with identifications. We have learned, with little surprise, that many of the insects that inhabit the forests and savannas of Rio Bravo are new to science. Many others are new country records. This is not surprising either, since the entomofauna of Belize is poorly known. What is perhaps a bit of a surprise are the many unusual range extensions for some of the described insect species. This indicates that knowing the insect fauna of Belize will improve our understanding of the zoogeography of Mexico and Central America. The following checklists are now available, and more will be added as they mature. Available images may be accessed through links within the checklists.
- Anthribidae, by Charles O'Brien
- Cerambycidae, by Robert Turnbow
- Chrysomelidae, by Shawn Clark
- Curculionidae, by Charles O'Brien
- Elateroidea, by Paul Johnson
- Erotylidae, by Paul Skelley
- Mordellidae, by John Jackman
- Staphylinidae: subfamily Pselaphinae, by Chris Carlton
- Tenebrionidae, by Charles Triplehorn and Otto Merkl
For questions and comments contact Peter Kovarik.
<urn:uuid:3906d3d8-a657-4701-9dfb-88a24cc19cd2>
CC-MAIN-2013-20
http://www.lsuinsects.org/expeditions/riobravo/index.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.966503
1,110
3.375
3
[ "nature" ]
{ "climate": [], "nature": [ "conservation", "ecological", "ecosystem", "habitat", "wetland" ] }
{ "strong": 4, "weak": 1, "total": 5, "decision": "accepted_strong" }
October E-Cat Test Validates Cold Fusion Despite Challenges The test of the E-Cat (Energy Catalyzer) that took place on October 6, 2011 in Italy has validated Andrea Rossi's claim that the device produces excess energy via a novel cold fusion nuclear reaction. Despite its success, the test was flawed, and could have been done in a way that produced more spectacular results -- as if confirmation of cold fusion is not already stunning enough. Andrea Rossi stands in front of his E-Cat apparatus, October 6, 2011 Photo by Maurizio Melis of Radio24 by Hank Mills Pure Energy Systems News Andrea Rossi has made big claims for the past year about his cold fusion "E-Cat" (Energy Catalyzer) technology. He has claimed that it produces vast amounts of energy via a safe and clean low energy nuclear reaction that consumes only tiny amounts of nickel and hydrogen. A series of tests had been performed earlier this year that seemed to confirm that excess energy is produced by the systems tested. Some of the tests were particularly impressive, such as one that lasted eighteen hours and was performed by Dr. Levi of the University of Bologna. Unfortunately, the tests were not planned out as well as they could have been and had flaws. The most recent test, which took place on October 6, 2011 in Bologna, Italy, was supposed to address many of the concerns about the previous tests, and be performed in a way that would put to rest many issues that had been discussed continually on the internet. Despite showing clear evidence of excess energy -- which is absolutely fantastic -- this most recent test failed to live up to its full potential. It was a big success in that it validated the claim that the E-Cat produces excess energy via cold fusion, but it was not nearly as successful as it could have been. Or as successful as we, the outsiders looking in, would have liked it to be.
The Inventor's Mindset One thing that should be stated is that inventors do not always think like the people who follow their inventions. They have their own mindset and way of looking at things. This should be obvious, because they are seeing *everything* from a different perspective. For example, when we think seeing the inside of an important component would be exciting and informative, they consider it a threat to their intellectual property. Or, for example, when we would like to see a test run for days, they are thinking that a few hours is long enough. In their mind, they know their technology works, and running it for hours, days, or weeks would be more of a chore to them than an exciting event. In Rossi's case, he has worked with these reactors for many years. He has tested them time and time again. In fact, he has built hundreds of units (of different models), and has tested every one of them. He is aware of how the units operate and how they perform. Actually, for a period of many months to a year or more, he had an early model of the E-Cat heating one of his offices in Italy. Satisfying the curiosity of internet "chatters" by operating a unit for an extended period of time -- beyond what he thinks is needed to prove the effect -- is just a waste of his time, according to his thinking. He could spend the time getting the one megawatt plant ready to launch. Don't forget, Rossi is a busy person. In addition to finishing the one megawatt plant, he has a new partner company to find, a wife at home, and a life to live! We need to consider that he works sixteen to eighteen hours a day building units, testing them, and addressing other issues about the E-Cat. Although he is a very helpful person in many ways (willing to communicate with people and answer questions), he simply does not have the time to grant all of the many requests made of him. If he did, he could not get any work done at all, and the E-Cat would never be launched or ever make it to the marketplace!
The Outsider's Mindset I consider myself an outsider. I have never built a cold fusion device, have never spent years working to develop a technology, and have never gone through the grueling process of trying to bring a product into the marketplace. Although I spend a lot of time researching various technologies on the internet, I don't work sixteen to eighteen hours a day. In addition, I have no vested interest in the success of any technology, other than simply wanting at least one to hit the marketplace, ASAP. As an outsider, I do not think like Rossi thinks. I don't think the majority of people think like Rossi thinks, because they are not in his shoes. They are not working to the point of exhaustion, and do not have years of their life invested in an exotic technology. Because we do not think like Rossi, his actions, or sometimes lack thereof, can seem strange, bizarre, or odd. Sometimes, they can make us want to smack ourselves, to make sure we are not in some sort of strange dream. The recent test on October 6, 2011 is an example of a situation in which outsiders would have liked to have seen a very different test. Here are examples of how an outsider would have liked to see the test performed, compared to Rossi's possible mindset. (Please note that I am making speculations about what Rossi is thinking, and his mindset. I do not know for sure if my guesses are accurate. If they are not, then I would like to apologize to Rossi, and give him the chance to respond in any way he sees fit.) In the recent test, the output-producing capability of the reactors was throttled down for safety reasons. This may have been done by keeping the hydrogen pressure low, or by adding less of the catalyst to the nickel powder. Also, only one of the three reactors inside the module was used in the test.
For an experimental test to prove the effect beyond a shadow of a doubt, I, as an outsider, would have loved to have seen the device fully throttled up, despite the safety risks. Even if it meant everyone who attended would have had to sign long legal disclaimers, it would have been worth it. I think it would have been great if all three reactors were utilized, and they all were adjusted to produce their maximum level of output. This would have increased the amount of output produced dramatically, and would have reduced the amount of input needed. The more heat produced by the system, the less heat would have needed to be input via the electric resistors. Rossi, on the other hand, probably thought throttling up the device to a high level was not worth the risk, and was not needed to prove that excess energy was being produced. It is true that an explosion causing injuries -- while probably VERY unlikely -- could result in a setback of his project, and possible legal ramifications. Also, in reality, the test proved excess energy was being produced even with only one, throttled-down reactor being used. So even though a test of the device adjusted to operate at full power would have been useful and exciting, it was not absolutely needed for what Rossi wanted to accomplish. I would like to ask Rossi to consider performing a demonstration with a module adjusted to operate at full power and utilizing all three reactor cores. This could be done even if he had to limit the number of people involved, perform the test remotely with cameras monitoring the module, utilize a blast shield, or only allow certain individuals (who have signed disclaimers) into the room in which the module is running. A Longer Self-Sustain As an outsider, I have not had the chance to look at test data from these devices self-sustaining for long periods of time -- 12 hours, 24 hours, days, weeks, etc. I would really like to see one of these units self-sustain for a *very* long period of time.
This is not because I think the output of the E-Cat during the recent test was due to stored energy being released (the 'thermal inertia' theory being floated around the internet). In fact, I think that the flat line in NyTeknik's graph -- showing self-sustain mode for three and a half hours without any drop in output temperature -- provides clear evidence against the thermal inertia theory. The reason I would like to see a longer period of self-sustain is that it would document not only a huge gain of energy, but one that no individual could rationally deny! Rossi has claimed that these devices represent an alternative energy solution that could change the world. I think this is true. However, to show just how much potential this technology has, an even more extended test of the E-Cat in self-sustain mode (at full power, or at least with all three reactors inside the module being used) would have been much more impressive. I am not saying the Oct. 6 test was not impressive -- it was very significant because it demonstrated excess energy and proof of cold fusion -- but that a longer test would have been better. It would have done more to shut up the cynics (a few of whom will never change their minds), and help the technology get into the mainstream (dumbstream) media. I really don't think Rossi cares too much about showing off the technology's full potential, at this point. He also does not appear to want the attention of the mainstream media, or at least any more of it than he thinks he needs. If he did, the test would have been far different, and would have produced such a gigantic amount of excess energy that everyone's jaws would have dropped. My jaw dropped when I saw the flat line during self-sustain mode (because it proved beyond a doubt the system was producing excess energy), but my jaw did not drop as far as it would have if the period of operation had lasted longer.
Interestingly, I have known inventors of unrelated energy technologies who purposely held back from showing the *best* version of their technology. They did not want to show off too much, because they did not want to deal with the fallout of attracting too much attention. Instead of performing an amazing demonstration, they performed one that proved the point -- at least to their satisfaction -- but would not attract too much attention. I think Rossi may feel the same way. If he had his way, he would have never done a single test before the launch of the one megawatt plant. It was Focardi who convinced him to do a public test, because Focardi feared that (due to health problems) he might not live long enough to see the technology revealed to the world. A longer test (at least 12 hours) in self-sustain mode would have been great, exciting, and would have produced even more excess energy. However, in Rossi's mind, it was not needed, for potentially valid reasons (at least from the perspective of someone on the inside). I would simply like to humbly plead with Rossi to try to step into the shoes of the outsider and, at the next test, allow the module to run for a longer period in self-sustain mode. Modern Testing Methods and Tools I have looked at the data acquired during the test, but have not had a chance to study it in as much depth as I would like. The data shows a clear gain of energy in my opinion, and confirms that the E-Cat is producing excess energy. As I said before, the test was a success. However, it could have been performed in a more modern way. For example, all of the temperature measurements, power input measurements, and water flow measurements should have been fed into the same computer, to be recorded in real time. This way all the data would have been automatically recorded into one data set, including the exact time of every measurement.
It seems data collection was not done this way at the test, and some of the data was actually taken by hand! Because the data was not all automatically recorded into one computer during the test, NyTeknik (which had the exclusive right to be the first to post a report on the test) has not yet posted a graph that charts all the measurements of all the factors of the test. What I would like to see is a single high-resolution graph that shows all of the measurements that were taken of every parameter of the test. If one graph showing everything would be too complex for a non-expert to easily interpret, then a series of graphs would be ideal. This would allow everyone to determine more simply the total energy in and the total energy out. The data collected, and the manner in which it was collected, is good enough to show there was a significant amount of excess energy produced, especially during self-sustain mode. It may also be good enough to show even more details about the excess energy produced. Sadly, I'm not an expert in scientific data interpretation, so it takes me more time to interpret data than an expert who does so full-time (like Rossi). I hope that when I have had the time to examine the data in more depth, I will see that Rossi's claims about the results of the test (not just excess energy but a six-fold gain of energy, in a worst-case scenario) are accurate. At this point, I am not going to doubt him. He is the expert, there are many people going over the data, and hopefully more data from the test will be coming in the near future. What I would like to do is request that he upgrade his data acquisition methods for any upcoming public tests. However, from Rossi's perspective, the way the data was acquired was good enough, and proved the point he wanted to make. I respect his view, but I do hope that he will change his mind in the future.
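As an illustration of the unified, timestamped data acquisition described above, here is a minimal sketch. The sensor-reading function, its channel names, and the file name are hypothetical placeholders (nothing here comes from the actual test rig); a real setup would poll DAQ hardware instead of returning fixed values.

```python
import csv
import time
from datetime import datetime, timezone

def read_sensors():
    """Poll every instrument in one place. These readings are hypothetical
    stand-ins; a real rig would query actual DAQ channels here."""
    return {
        "inlet_temp_c": 20.1,      # placeholder reading
        "outlet_temp_c": 98.7,     # placeholder reading
        "input_power_w": 1300.0,   # placeholder reading
        "water_flow_kg_s": 0.011,  # placeholder reading
    }

def log_run(path, samples, interval_s=1.0):
    """Record every parameter into one CSV: one timestamped row per sample,
    so energy in and energy out can be integrated from a single data set."""
    fields = ["timestamp_utc"] + list(read_sensors().keys())
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for _ in range(samples):
            row = {"timestamp_utc": datetime.now(timezone.utc).isoformat()}
            row.update(read_sensors())
            writer.writerow(row)
            time.sleep(interval_s)

log_run("ecat_run.csv", samples=3, interval_s=0.0)
```

With everything in one file, producing the single all-parameters graph asked for above becomes a plotting exercise rather than a reconciliation of hand-recorded notes.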
For the record, I am not stating that I think better data acquisition techniques are needed to verify that his technology produces excess energy, and even significant amounts of it. I simply think it would make analysis of test data much simpler, quicker, and more precise. One of the most useful tools in the scientific method is a control. A control is an object or thing that you do not try to change during the experiment. For example, if you were giving an experimental drug to a hundred people, you might want to have a number of additional people who do not receive the drug. You would compare how the drug affects the people who consumed it to those who did not receive the drug at all. By comparing the two sets of people, those who consumed the drug and those who did not, you could more easily see the effectiveness of the drug -- or whether it was doing harm. In Rossi's test, a control system would have been an E-Cat module that was set up in the exact same way, except it would not have been filled with hydrogen gas. It would have had the same flow of water going through it, the same electrical input, and it would have operated for the same length of time as the E-Cat unit with hydrogen. By comparing the two, you could easily see the difference between the "control" E-Cat (in which no nuclear reactions were taking place) and the "real" E-Cat (which was producing excess heat). If a control had been used in the experiment, the excess heat would have been even more obvious. It would have been so obvious that it could have taken the test from a major success (with some flaws) to the most spectacular scientific test in the last hundred years. Yes, a control would have made that much of a difference! I understand that Rossi may not see the need for a control, when the test that was performed clearly showed excess energy without it.
A control might have made the experiment so mind-blowingly amazing that it could have attracted too much media attention, too many scientists wanting to get involved, and too many individuals wanting additional information. The result could have been that Rossi would not even have the time to finish his one megawatt plant. However, from the viewpoint of an outsider, I think a control would have greatly benefited the experiment. If it created too much media attention, perhaps someone could volunteer to work for a month as an unpaid intern, filtering through all of the requests from media representatives and taking care of many non-technical tasks, so Rossi could focus on getting the one megawatt plant ready! I sincerely hope that during the test of the one megawatt plant, and any tests before then, a control run will be performed in which no hydrogen is placed in the reactors. Rossi's Statement about the Test Results Andrea Rossi responded to an email we sent him that had questions about the test. Here is the email, and his responses. THANK YOU FOR YOUR CONTINUOUS ATTENTION. PLEASE FIND THE ANSWERS IN BLOCK LETTERS ALONG YOUR TEXT: Dear Andrea Rossi, In regards to the latest test of the Energy Catalyzer, I have a number of questions I hope you can answer. 1) My understanding is that if a reactor core is not adjusted to be under-powered (below its maximum potential) in self-sustain mode, it can have a tendency to become unstable and climb in output. If the reactor is left in an unstable self-sustaining mode for too long, the output can climb to potentially dangerous levels. Can you provide some information about how the reactor core in the test was adjusted to self-sustain in a safe manner? NO, VERY SORRY a) For example, there was only one active reactor core in the module tested. How was the single reactor core adjusted to be under-powered? b) Is adjusting the reactor core as simple as lowering the hydrogen pressure?
2) What is the power consumption of the device that "produces frequencies" that was mentioned in the NyTeknik article? Although the power consumption of this device is probably insignificant, providing a figure could help put to rest the idea (that some are suggesting) that a large amount of power was being consumed by the frequency-generating device, and transmitted into the reactor. THE ENERGY CONSUMED FROM THE FREQUENCY GENERATOR IS 50 WH/H AND IT HAS BEEN CALCULATED, BECAUSE THIS APPARATUS WAS PLUGGED IN THE SAME LINE WHERE THE ENERGY-CONSUME MEASUREMENT HAS BEEN DONE a) Can you tell us anything more about this frequency generating device and its function? NO, SORRY, THIS IS A CONFIDENTIAL ISSUE b) Is the frequency-generating device turned on at all times when a module is in operation, or only when a module is in self-sustain mode? c) Some are suggesting that this device is "the" catalyst that drives the reactions in the reactor core. However, you have stated in the past that the catalyst is actually one or more physical elements (in addition to nickel and hydrogen) that are placed in the reactor core. Can you confirm that physical catalysts are used in the reactor? YES, I CONFIRM THIS 3) Does the reaction have to be quenched with additional water flow though the reactor, or is reducing the hydrogen pressure enough to end the reactions on its own? NEEDS ADDITIONAL QUENCHING a) If reducing the hydrogen pressure (or venting it completely) is not enough to turn off the module, could it be due to the fact some hydrogen atoms are still bonded to nickel atoms, and undergoing nuclear reactions? b) If there is some other reason why reducing hydrogen pressure is not enough to quickly turn off the module, could you please specify? Thank you for taking the time to answer these questions, and for allowing a test to be performed that clearly shows anomalous and excess energy being produced. Hopefully, the world will notice the significance of this test. 
THANK YOU VERY MUCH, AND, SINCE I HAVE ABSOLUTELY NOT TIME TO ANSWER (I MADE AN EXCEPTION FOR YOU) PLEASE EXPLAIN THAT BEFORE THE SELF SUSTAINING MODE THE REACTOR WAS ALREADY PRODUCING ENERGY MORE THAN IT CONSUMED, SO THAT THE ENERGY CONSUMED IS NOT LOST, BUT TURNED INTO ENERGY ITSELF, THEREFORE IS NOT PASSIVE. ANOTHER IMPORTANT INFORMATION: IF YOU LOOK CAREFULLY AT THE REPORT, YOU WILL SEE THAT THE SPOTS OF DRIVE WITH THE RESISTANCE HAVE A DURATION OF ABOUT 10 MINUTES, WHILE THE DURATION OF THE SELF SUSTAINING MODES IS PROGRESSIVELY LONGER, UNTIL IT ARRIVES TO BE UP TO HOURS. BESIDES, WE PRODUCED AT LEAST 4.3 kWh/h FOR ABOUT 6 HOURS AND CONSUMED AN AVERAGE OF 1.3 kWh/h FOR ABOUT 3 HOURS, SO THAT WE MADE IN TOTAL DURING THE TEST 25.8 kWh AND CONSUMED IN TOTAL DURING THE TEST 3.9 kWh. IN THE WORST POSSIBLE SCENARIO, WHICH MEANS NOT CONSIDERING THAT THE CONSUME IS MAINLY MADE DURING THE HEATING OF THE REACTOR DURING THE FIRST 2 HOURS, WE CAN CONSIDER THAT THE WORST POSSIBLE RATIO IS 25.8 : 3.9 AND THIS IS THE COP 6 WHICH WE ALWAYS SAID. OF COURSE, THE COP IS BETTER, BECAUSE, OBVIOUSLY, THE REACTOR, ONCE IN TEMPERATURE, NEEDS NOT TO BE HEATED AGAIN FROM ROOM TEMPERATURE TO OPERATIONAL TEMPERATURE. WARMEST REGARDS TO ALL, ANDREA ROSSI He claims that the test produced 25.8 kilowatt-hours of energy and consumed only 3.9 kilowatt-hours, not considering the losses from using two circuits of water and a heat exchanger. This would be very impressive for a system that is using only one reactor core (out of three), adjusted to produce only a fraction of its maximum potential power. However, from my analysis of the data so far (still trying to wrap my head around it), I have not been able to confirm his claim of a COP of 6. I am not saying it is not the case, or not in the data. I simply have yet to fully examine the data, and I am waiting for more data to be released.
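The bookkeeping behind Rossi's claimed COP can be reproduced directly from the totals he quotes in the email (4.3 kW average output for about 6 hours, 1.3 kW average input for about 3 hours of resistive drive). This is only a check of his stated arithmetic, not an independent verification of the measurements:

```python
# Totals as quoted by Rossi: output power * hours and input power * hours.
energy_out_kwh = 4.3 * 6     # 25.8 kWh claimed produced
energy_in_kwh = 1.3 * 3      # 3.9 kWh claimed consumed

cop = energy_out_kwh / energy_in_kwh          # his "worst possible ratio"
net_gain_kwh = energy_out_kwh - energy_in_kwh # claimed excess over the run

print(f"COP = {cop:.1f}, net gain = {net_gain_kwh:.1f} kWh")
# → COP = 6.6, net gain = 21.9 kWh
```

So 25.8 : 3.9 works out to roughly 6.6, slightly above the COP of 6 he cites as the worst case; confirming the figure against the raw test data is a separate matter.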
Actually, I hope that someone will release all the data in one file and/or graph that will be easier to interpret. Perhaps NyTeknik, if they have not done so already, could contact Rossi or someone else who attended and recorded the data, and ask for any test data they are missing. Bottom Line - Cold Fusion Is Here The fact of the matter is that the October 6th test was a success in many ways.
- It documented a gain of energy.
- It documented a gain of energy in self-sustain mode.
- It documented massive "heat after death."
Most importantly, it proved beyond a doubt that cold fusion is a reality. Italian scientific journalist Maurizio Melis of Il Sole 24 Ore, who witnessed the test in Bologna, said: "In the coming weeks Rossi aims to activate a 1MW plant, which is now almost ready, and we had the opportunity to inspect it during the demonstration yesterday. If the plant starts up, then it will be very difficult to affirm that it is a hoax. Instead, we will be projected suddenly into a new energetic era." The test could have been made better in many ways. It had flaws. However, it was the most significant test of the E-Cat so far, for one reason in particular: the graph showing that the E-Cat is a device producing excess energy, because the red line does not go down until after the hydrogen is vented.
- Some may legitimately argue about how much energy was produced, because we don't yet have all the test data in one easy-to-interpret graph or file.
- Some may point out the flaws in the test, such as the lack of a control and the lack of another several hours of operation in self-sustain mode.
- Some may point out ways the test could be improved.
However, that graph by NyTeknik makes it clear the test was a success -- not a failure. Mainstream media, your alarm clock is buzzing; it's time to wake up! # # # This story is also published at BeforeItsNews. What You Can Do - Pass this on to your friends and favorite news sources.
<urn:uuid:e2d43690-786e-4e65-ab06-86238be64fae>
CC-MAIN-2013-20
http://pesn.com/2011/10/08/9501929_E-Cat_Test_Validates_Cold_Fusion_Despite_Challenges/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.971945
5,799
2.71875
3
[ "climate" ]
{ "climate": [ "renewable energy" ], "nature": [] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
This species is classified as Vulnerable because remote-sensing data indicate that there has been a dramatic loss of lowland forest across its range and that it is therefore likely to be undergoing a rapid population decline. Distribution and population Ninox odiosa is endemic to the island of New Britain, Papua New Guinea, where, although it is rather poorly known, it appears to be not uncommon in suitable habitat. It is suspected to have declined rapidly in recent years owing to ongoing clearance of lowland forest (Buchanan et al. 2008). The population is estimated to be in the band 10,000-19,999 mature individuals, equating to 15,000-29,999 individuals in total, rounded here to 15,000-30,000 individuals. Buchanan et al. (2008) calculated the rate of forest loss within the species's range on New Britain as 33.8% over three generations; hence, this decline is expected to continue. It inhabits lowland rainforest up to 1,200 m and is thought to tolerate some degree of habitat degradation. Lowland forest clearance on New Britain for conversion to oil palm plantations has been intense in recent decades, and the island accounts for approximately half of Papua New Guinea's timber exports (Buchanan et al. 2008). Over 30% of suitable habitat has been cleared in the last 10 years and this trend is ongoing (Buchanan et al. 2008). Conservation actions underway None is known. Conservation actions proposed Identify and effectively protect a network of reserves, including some containing large areas of unlogged lowland forest, on New Britain. Continue to monitor trends in forest loss. Research its tolerance of degraded forest. Monitor populations in a number of primary forest and degraded forest sites across the island.
<urn:uuid:f863564d-3c61-471c-9e8f-32c4f6e557ed>
CC-MAIN-2013-20
http://www.thepetitionsite.com/636/519/346/save-the-russet-hawk-owl-ninox-odiosa/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.917147
422
3.03125
3
[ "nature" ]
{ "climate": [], "nature": [ "conservation", "habitat" ] }
{ "strong": 2, "weak": 0, "total": 2, "decision": "accepted_strong" }
Photo: James Duncan Davidson Drawn into controversy Wearing his wide-brimmed hat, climate scientist James Hansen starts his TEDTalk by asking, "What do I know that would cause me, a reticent midwestern scientist, to get arrested in front of the White House, protesting?" Hansen studied under professor James Van Allen, who told him about observations of Venus: it emitted intense microwave radiation because it is hot, and it was kept that way by a thick CO2 atmosphere. He was fortunate enough to join NASA and send an instrument to Venus. But while it was in transit, he became involved in calculating what the effect of the greenhouse effect would be here on Earth. It turns out the atmosphere was changing before our eyes, and "A planet changing before our eyes is more important; it affects and changes our lives." The greenhouse effect has been understood for a century. Infrared radiation is absorbed by a layer of gas, working like a blanket to keep heat in. He worked with other scientists and eventually published an article in Science in 1981. They made several predictions in that paper: there would be shifting climate zones, rising sea levels, an opening of the Northwest Passage, and other effects. All of these have happened or are underway. That paper was reported on the front page of the NY Times and led to him testifying to Congress. He told them it would produce varied effects: heat waves and droughts, but also (because a warmer atmosphere holds more water vapor) more extreme rainfall, stronger storms, and greater flooding. All the global warming 'hoopla' became too much, and was distracting him from doing science. In addition, he was upset that the White House had altered his testimony, so he decided to leave communication to others. The future draws him back in The problem with not speaking was that he had two grandchildren.
He realized he did not want them to say, “Opah understood what was happening, but he didn’t make it clear.” So he was drawn more and more into the urgency. Adding carbon to the air is like throwing a blanket on the bed. “More energy is coming in than is going out, until Earth is warm enough to radiate to space as much energy as it receives from the Sun.” The key quantity is the imbalance, so they did the measurements. It turns out that continents to depths of tens of meters were getting warmer, and the Earth is gaining energy as heat. That amount of energy is equivalent to dropping 400,000 Hiroshima bombs every day, over a year, and there is as much in the pipeline as has already occurred. If we want to restore energy balance and prevent further warming, we need to reduce carbon levels from 391 parts per million to 350. The arguments against Deniers contend that it’s the sun driving this change. But Hansen notes the biggest change occurred during the low point of the solar cycle — meaning that the effect from the sun is dwarfed by the warming effect. There are remarkable records in the Earth of what has come before, and we’ve studied them extensively. There is a high correlation between the overall temperature, carbon levels, and sea level. Temperature slightly leads carbon changes by a couple of centuries. Deniers like to use that to trick the public. But these are amplifying feedbacks: even though the cycle is instigated by a small effect, it feeds on itself. More sun in the summer means that ice sheets melt, which means a darker planet, which means more warming. These amplifying feedbacks account for almost the entire range of paleoclimate changes. The same amplifying feedbacks must occur today. Ice sheets will melt, and carbon and methane will be released. “We can’t say exactly how fast these effects will happen, but it is certain they will occur. 
Unless we stop the warming.” The view of the future Hansen presents data showing that Greenland and Antarctica are both losing mass, and that methane is bubbling from the permafrost. That does not bode well. Historically, even at today’s level of carbon, the sea level was 15 meters higher than it is now. We will get at least one meter of that this century. We will have started a process that is out of humanity’s control. There will be no stable shoreline, and the economic implications of that are devastating — not to mention the spectacular loss of species. It’s possible that 20-50% of all species could be extinct by the end of the century if we stay on fossil fuels. Changes have already started. The Texas, Moscow, Oklahoma and other heat waves in recent memory were all exceptional events. There is clear evidence that these were caused by global warming. Hansen’s grandson Jake is super-enthusiastic: “He thinks he can protect his two-and-a-half-day-old little sister. It would be immoral to leave these people with a climate system spiraling out of control.” The tragedy is that we can solve this. It could be addressed by collecting a fee for carbon emissions, distributed to all residents. That would stimulate the economy and innovation, and would not enlarge the government. Instead of doing this, we are subsidizing fossil fuels by $400-500 billion per year worldwide. This, says Hansen, is a planetary emergency, just as important as an asteroid on its way. “But we dither, taking no action to divert the asteroid, even though the longer we wait, the more difficult and expensive it becomes.” “Now you know some of what I know that is moving me to sound this alarm. Clearly I haven’t gotten this message across. I need your help. We owe it to our children and grandchildren.”
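Hansen's "400,000 Hiroshima bombs every day" comparison above can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, assuming a planetary energy imbalance of about 0.6 W/m^2 and a bomb yield of about 15 kilotons of TNT (both values are my assumptions, not figures from the talk summary):

```python
# Back-of-the-envelope check of the "400,000 Hiroshima bombs per day" figure.
# The imbalance (~0.6 W/m^2) and bomb yield (~15 kt TNT) are assumed values.

EARTH_SURFACE_M2 = 5.1e14        # total surface area of Earth, m^2
IMBALANCE_W_M2 = 0.6             # assumed planetary energy imbalance, W/m^2
HIROSHIMA_J = 15e3 * 4.184e9     # ~15 kt TNT at 4.184e9 J per ton of TNT

excess_power_w = IMBALANCE_W_M2 * EARTH_SURFACE_M2  # net energy gain rate, W
excess_per_day_j = excess_power_w * 86_400          # joules gained per day

bombs_per_day = excess_per_day_j / HIROSHIMA_J
print(f"{bombs_per_day:,.0f} Hiroshima-bomb equivalents per day")
```

With those assumed inputs the estimate lands around 4 x 10^5 bomb-equivalents per day, the same order of magnitude as the quoted figure.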
<urn:uuid:d44ce976-2ef0-4a6c-aa1f-7b3db6a74681>
CC-MAIN-2013-20
http://blog.ted.com/2012/02/29/why-i-must-speak-out-on-climate-change-james-hansen-at-ted2012/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.976114
1,222
3.265625
3
[ "climate" ]
{ "climate": [ "climate system", "energy balance", "global warming", "methane", "paleoclimate", "permafrost" ], "nature": [] }
{ "strong": 3, "weak": 3, "total": 6, "decision": "accepted_strong" }
Tata teams up with Aussies for India’s first floating solar plant The pilot project, which is due to start operations by the end of the year, is based on a Sunengy patented Liquid Solar Array (LSA) technology which uses traditional concentrated photovoltaic technology - a lens and a small area of solar cells that tracks the sun throughout the day, like a sunflower. LSA inventor and Sunengy executive director and chief technology officer, Phil Connor, said that when located on and combined with hydroelectric dams, LSA provides the breakthroughs of reduced cost and "on demand" 24/7 availability that are necessary for solar power to become widely used. Floating the LSA on water reduces the need for expensive supporting structures to protect it from high winds. The lenses submerge in bad weather and the water also cools the cells which increases their efficiency and life-span. According to Connor, hydro power supplies 87 percent of the world's renewable energy and 16 percent of the world's power but is limited by its water resource. He said an LSA installation could match the power output of a typical hydro dam using less than 10 percent of its surface area and supply an additional six to eight hours of power per day. Modeling by Sunengy shows that a 240 MW LSA system could increase annual energy generation at the Portuguese hydro plant, Alqueva, by 230 percent. "LSA effectively turns a dam into a very large battery, offering free solar storage and opportunity for improved water resource management," said Connor. "If India uses just one percent of its 30,000 square kilometers of captured water with our system, we can generate power equivalent to 15 large coal-fired power stations." Construction of the pilot plant in India will commence in August 2011. Sunengy also plans to establish a larger LSA system in Australia's Hunter Valley by mid-2012 before going into full production.
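Connor's closing claim (one percent of 30,000 square kilometers matching roughly 15 large coal-fired stations) can be checked with a rough sketch. The irradiance, system efficiency, and capacity-factor figures below are illustrative assumptions, not Sunengy's numbers:

```python
# Rough check of the "1% of captured water ~ 15 large coal plants" claim.
# Irradiance, efficiency, and capacity factor are assumed values.

AREA_M2 = 30_000 * 0.01 * 1e6   # 1% of 30,000 km^2, converted to m^2
PEAK_IRRADIANCE = 1000          # W/m^2, standard test-condition sunlight
EFFICIENCY = 0.20               # assumed CPV system efficiency
CAPACITY_FACTOR = 0.25          # assumed ratio of average to peak output

peak_w = AREA_M2 * PEAK_IRRADIANCE * EFFICIENCY
avg_w = peak_w * CAPACITY_FACTOR
coal_plants = avg_w / 1e9       # treating a "large" coal station as ~1 GW

print(f"~{coal_plants:.0f} GW-scale coal plants' worth of average output")
```

Under these assumptions the arithmetic reproduces the claim almost exactly, which suggests the quoted figure comes from a similar order-of-magnitude estimate.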
<urn:uuid:a9635421-6f3c-47fb-b1b6-0bd49aad4d63>
CC-MAIN-2013-20
http://www.cleanbiz.asia/story/tata-teams-aussies-india%E2%80%99s-first-floating-solar-plant
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.962086
395
2.90625
3
[ "climate" ]
{ "climate": [ "renewable energy" ], "nature": [] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
Fuel cells and the electric motor are examples of highly efficient, electric drive trains. Electric vehicles are expected to one day outstrip sales of combustion-engine vehicles. Innovative technologies such as fuel cells, electric motors and electric vehicles will influence our future mobility. The market for electric vehicles boasts the most potential. Fuel cells, electric motors and electric vehicles are currently experiencing a breakthrough. Fuel cells are being used in new applications such as automobiles or laptop computers. Like electric vehicles, however, fuel cells are still in the development phase. The potential is far from being exploited. Because a genuine fuel cell boom is anticipated, mass production is already underway. Like fuel cells, the application potential for electric motors and electric vehicles is still in its infancy. The discovery of the relationship between magnetic fields and electricity laid the foundation for the electric motor, and thus the electric vehicle. The electric motor that eventually resulted from this discovery is driven by the Lorentz force, which is the force on an electric charge as it moves through a magnetic field. The development of traditional technologies such as fuel cells and the electric motor has led to a rise in environmentally friendly electric vehicles. Hybrid vehicles, however, still dominate the market segment for environmentally friendly automobiles. Utilizing a combination of combustion and electric motors, hybrid vehicles are slimmed-down versions of the electric vehicle. Fuel cells are based on the principle of a galvanic process. The composition of a fuel cell is defined by its two electrodes. The fuel cell's energy stems from the electrode potential, which is created by the charging of the anode and cathode. The charging results in a potential difference in the fuel cell, which is eventually transformed into electric energy. 
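The Lorentz force mentioned above can be made concrete with a minimal numerical sketch; the charge, velocity, and field values are arbitrary illustrations:

```python
# Minimal illustration of the Lorentz force F = q(E + v x B): the force
# on a moving charge in a magnetic field. All values are illustrative.
import numpy as np

q = 1.602e-19                    # one elementary charge, coulombs
E = np.array([0.0, 0.0, 0.0])    # no electric field in this example, V/m
v = np.array([1.0e5, 0.0, 0.0])  # charge moving along x, m/s
B = np.array([0.0, 0.0, 1.0])    # uniform magnetic field along z, tesla

F = q * (E + np.cross(v, B))     # force is perpendicular to both v and B
print(F)                         # sideways push: the torque source in a motor
```

The resulting force points along the negative y-axis, perpendicular to both the velocity and the field, which is exactly the sideways push that turns a current-carrying rotor.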
From its discovery to today's high-technology status, the fuel cell has experienced an astounding development. Fuel cells are already being used in a variety of applications today. But its impressive career is far from over. Because of their simple operation, the use of fuel cells in electric vehicles represents the market of the future. The electric motor began as an electromechanical transformer. As the description implies, the electric motor is capable of transforming electricity into mechanical energy. The electric motor produces motion by converting this mechanical force into rotation. Like fuel cell technology, the electric motor is a popular drive train alternative in electric vehicles. The development of the electric motor as a drive train for electric vehicles is still a work in progress, however. The first genuine electric motor was produced as early as 1834. Today, state-of-the-art, innovative technologies are still based on discoveries made by researchers nearly 200 years ago, as illustrated by the examples of the fuel cell, electric motor and electric vehicle. While electric motors and fuel cells were originally used in industrial machine applications, electric vehicles are the technology of the future. At the beginning of their development, electric motors were initially used in locomotives. At this point, the focus is on the development of roadworthy electric vehicles. The key drivers of modern research into the electric vehicle are the electric motor's high degree of efficiency and low CO2 output, two factors that are behind current efforts to combat energy resource and climate change issues. The major issue is energy storage, which is why researchers are focused primarily on this aspect. For this reason, hybrid-model electric vehicles - the combination of electric and combustion motors - are still in their infancy. 
Automotive Engineering highlights issues related to automobile manufacturing - including vehicle parts and accessories - and the environmental impact and safety of automotive products, production facilities and manufacturing processes. innovations-report offers stimulating reports and articles on a variety of topics ranging from automobile fuel cells, hybrid technologies, energy saving vehicles and carbon particle filters to engine and brake technologies, driving safety and assistance systems. A fried breakfast food popular in Spain provided the inspiration for the development of doughnut-shaped droplets that may provide scientists with a new approach for studying fundamental issues in physics, mathematics and materials. The doughnut-shaped droplets, a shape known as toroidal, are formed from two dissimilar liquids using a simple rotating stage and an injection needle. 
About a millimeter in overall size, the droplets are produced individually, their shapes maintained by a surrounding springy material made of polymers. Droplets in this toroidal shape made ... Fraunhofer FEP will present a novel roll-to-roll manufacturing process for high-barrier and functional films for flexible displays at the SID Display Week 2013 in Vancouver, the international showcase for the display industry. Displays that are flexible and paper-thin at the same time?! What might still seem like science fiction will be a major topic at the SID Display Week 2013 that currently takes place in Vancouver in Canada. High manufacturing cost and a short lifetime are still a major obstacle on ... University of Würzburg physicists have succeeded in creating a new type of laser. Its operation principle is completely different from that of conventional devices, which opens up the possibility of a significantly reduced energy input requirement. The researchers report their work in the current issue of Nature. It also emits light the waves of which are in phase with one another: the polariton laser, developed ... Innsbruck physicists led by Rainer Blatt and Peter Zoller experimentally gained a deep insight into the nature of quantum mechanical phase transitions. They are the first scientists to simulate the competition between two rival dynamical processes at a novel type of transition between two quantum mechanical orders. They have published the results of their work in the journal Nature Physics. “When water boils, its molecules are released as vapor. We call this ... Researchers have shown that, by using global positioning systems (GPS) to measure ground deformation caused by a large underwater earthquake, they can provide accurate warning of the resulting tsunami within just a few minutes after the earthquake onset. 
For the devastating Japan 2011 event, the team reveals that the analysis of the GPS data and issue of a detailed tsunami alert would have taken no more than three minutes. The results are published on 17 May in Natural Hazards and Earth System Sciences, an open access journal of ...
<urn:uuid:b0d7c4e8-3cb3-4f9b-bfa3-37fbf01e78f3>
CC-MAIN-2013-20
http://www.innovations-report.com/reports/reports_list.php?show=19&page=5
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.921333
1,622
3.25
3
[ "climate", "nature" ]
{ "climate": [ "climate change", "co2" ], "nature": [ "conservation" ] }
{ "strong": 3, "weak": 0, "total": 3, "decision": "accepted_strong" }
Office of Environment and Energy The Energy web site describes HUD energy initiatives, policies and how federal government-wide energy policies affect HUD programs and assistance. HUD faces many challenges when it comes to energy policy. For an overview, see Implementing HUD's Energy Strategy, a report to Congress dated December 2008, that includes a summary of progress toward implementing planned actions. First, utility bills burden the poor and can cause homelessness. There is a Home Energy Affordability Gap Index based on energy bills for persons below 185 percent of the Federal Poverty Level. The gap was $34.1 billion at 2007/2008 winter heating fuel prices. The burden on the poor is more than four times the average 4 percent others pay. Twenty-six percent of evictions were due to utility cut-offs in St. Paul, MN. Second, HUD programs are affected by energy costs. HUD's own "energy bill" - the amount that HUD spends annually on heating, lighting, and cooling its portfolio of public and assisted housing and section 8 vouchers - reached the $5 billion mark in 2007. Public Housing utilities cost more than $1 billion per year. Third, energy costs affect economic development. Importing fuel drains millions of dollars from local economies. Database of State Incentives for Renewables & Efficiency (DSIRE) Information on state, local, utility and federal incentives and policies that promote renewable energy and energy efficiency from a database funded by the U.S. Department of Energy. Edison Electric Institute’s Electric Company Programs Information on energy efficiency and low-income assistance programs offered by various utilities across the nation. HUD’s Public and Indian Housing Environmental and Conservation Clearinghouse Sources of funding for energy conservation and utility cost reduction activities from HUD’s Public and Indian Housing Environmental Clearinghouse. 
Promoting Energy Star through HUD’s HOME Investment Partnerships Program Resources for promoting Energy Star through the HOME program. Additional HUD Resources Useful documents, publications, and information related to resource conservation in public housing from HUD’s Public and Indian Housing Environmental Clearinghouse. Energy Star For New Construction Assisted By The Home Program HUD has worked with EPA to promote the use of ENERGY STAR standards in construction of houses. Here are the results of that production by the HOME Program for Fiscal Year 2009. Energy Star for Grantees Energy Efficiency with CDBG HOME Energy Star Awards for Affordable Housing Regional Energy Coordinators reviewed applications for Energy Star Awards for Affordable Housing in 2008, 2009, and 2010. HUD CHP Screening Tools HUD's 2002 Energy Action Plan committed HUD to promote the use of combined heat and power (cogeneration, or CHP) in housing and community development. HUD developed a Q Guide explaining CHP to building owners and managers. HUD and DOE Oak Ridge National Laboratory then developed a Level 1 feasibility screening software tool to enable them to quickly get a rough estimate of the cost, savings and payback for installing CHP. The Level 1 screening tool requires only monthly utility bills and a little information about the building and its occupants. HUD and ORNL have now produced a Level 2 Combined Heat and Power (CHP) analysis tool for more detailed analysis of the potential for installing combined heat and power (cogeneration) in multifamily buildings. Level 2 works from hourly utility consumption and detailed information about the building and its equipment. ORNL Level 2 Tool "HUD CHP Guide #3 Introduction to the Level 2 Analysis for Combined Heat and Power in Multifamily Housing" explains how it was developed and provides links to ORNL for downloading the tool, its Users' Manual and training material. 
It also provides an exercise to demonstrate how it works. The tool is complex and calls for analysis by those with advanced ability to understand building energy use and simulation. Green Homes and Communities This website has very good energy information, including references to Sustainable Communities, DOE EECBG funding etc. Energy Efficiency in CPD Programs - See Table on page 13 for planned actions developed by the Energy Task Force. - See Fisher, Sheehan and Colton, On the Brink 2008; The Home Energy Affordability Gap - See Table B-1, in "Implementing HUD's Energy Strategy" - See Energy and Economic Development Phase I | Phase II.
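A Level 1 feasibility screen of the kind described above boils down to simple-payback arithmetic from utility bills. The sketch below only suggests the shape of such a calculation; the function name and every figure in it are hypothetical assumptions, not HUD's or ORNL's actual model:

```python
# Hedged sketch of simple-payback screening for CHP in a multifamily
# building. All names and figures are illustrative, not the HUD/ORNL tool.

def simple_chp_payback(annual_utility_cost, savings_fraction, installed_cost):
    """Years to recoup an installed CHP system from utility-bill savings."""
    annual_savings = annual_utility_cost * savings_fraction
    return installed_cost / annual_savings

# Hypothetical building: $120,000/yr in utilities, an assumed 20% savings
# from CHP, and an assumed $250,000 installed cost.
years = simple_chp_payback(120_000, 0.20, 250_000)
print(f"Simple payback: {years:.1f} years")
```

A real screen would also account for maintenance costs, spark spread (the gap between electricity and gas prices), and equipment lifetime, which is why the actual tools ask for building details beyond the bills.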
<urn:uuid:a2370072-c634-4186-b06e-2fd1ba0e0385>
CC-MAIN-2013-20
http://portal.hud.gov/hudportal/HUD?src=/program_offices/comm_planning/library/energy
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.915712
901
2.65625
3
[ "climate", "nature" ]
{ "climate": [ "renewable energy" ], "nature": [ "conservation" ] }
{ "strong": 2, "weak": 0, "total": 2, "decision": "accepted_strong" }
How will the technology and policy changes now sweeping through the industry affect the architecture of the utility grid? Will America build an increasingly robust transmission infrastructure, or... Unforeseen consequences of dedicated renewable energy transmission. Growth in renewable electricity (RE) generation will require major expansion of electricity transmission grids, and in the U.S. this could require building an additional 20,000 miles of transmission over the next decade—double what’s currently planned. To facilitate this, government policymakers are planning to build what are sometimes called “green” transmission lines that are restricted to carrying electricity generated by renewable sources, primarily wind and solar. However, state and local jurisdictions are resisting siting of transmission unless it serves local constituents and existing power plants. If such transmission is built and local access is allowed, then the major beneficiaries of the added transmission might be existing power generation facilities, especially coal plants. Many of these facilities have very low electricity generating costs and their capacity factors are transmission-constrained. Their access to added transmission lines could enable them to sell electric power at rates against which RE can’t compete. 20,000 Miles of Wire JP Morgan studied a possible federal renewable energy standard (RES) and its impact on the growth rate of RE.[1] We used JP Morgan data to estimate the potential impact of an RES and the transmission required to facilitate it on the existing fleet of power plants. The analysis focused primarily on coal plants because they can increase their capacity factors, whereas U.S. nuclear plants already have capacity factors above 90 percent. Given the location of the coal plants throughout the U.S. 
and their current capacity factors, we estimated the impact of expanded electricity transmission lines on RE generation and costs and on conventional electricity generation and costs. The locations of the RE central station technologies and their distances from major load centers largely determine the new transmission that will be required. Geothermal will be installed in a small number of Western states,[2] while biomass will be installed primarily in the northern Great Plains, the Pacific Northwest, and perhaps parts of the South. Solar thermal (ST) and photovoltaics (PV) will be installed in some Western and Southwestern states, and wind will be installed primarily in the northern Great Plains. The major load centers are primarily metropolitan areas in the coastal states, the Boston-Washington corridor, the West Coast corridor, and major Midwestern cities. In general, increased transmission capability is desirable, because a robust interstate electric transmission system is in everyone’s interest—consumers, power producers, and governments. An expanded transmission network will allow for power system growth, provide greater flexibility in expanding generation at existing plant sites, and facilitate construction of new generating plants at optimal locations. However, there’s a mismatch between RE resources and load centers: Most of the best RE sites are west of the Mississippi river, but most of the load centers are east of the river or on the West Coast. Even West Coast load centers are far from the best RE sites. We estimated how much new transmission needs to be built to
<urn:uuid:be40a391-ba3c-4564-a4eb-3d42885d4685>
CC-MAIN-2013-20
http://www.fortnightly.com/fortnightly/2012/02/not-so-green-superhighway
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.928116
625
2.953125
3
[ "climate" ]
{ "climate": [ "renewable energy" ], "nature": [] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
A new carbon cycle model developed by researchers in Europe indicates that global carbon emissions must start dropping by no later than 2015 to prevent the planet from tipping into dangerous climate instability. The finding is likely to put new pressure on the world’s top two carbon emitters — China and the US — both of which were widely blamed for failure to reach a binding global accord on carbon reductions in Copenhagen last December. Furthermore, the non-binding outcome of Copenhagen has global carbon emissions peaking in 2020 — five years too late, according to the latest model. The model, developed by researchers at Germany’s Max Planck Institute for Meteorology, suggests the world’s annual carbon emissions can reach no more than 10 billion tonnes in five years’ time before they must be put on a steady downward path. After that, the researchers say, emissions must drop by 56 per cent by mid-century and need to approach zero by 2100. Those targets are necessary to prevent average global temperatures from rising by more than 2 degrees C by 2100. Under that scenario, though, further warming can still be expected for years to come afterward. “It will take centuries for the global climate system to stabilise,” says Erich Roeckner, a researcher at the Max Planck Institute. The new model is the first to pinpoint the extent to which global carbon emissions must be cut to prevent dangerous climate change. Since the beginning of the Industrial Revolution, atmospheric concentrations of carbon dioxide have risen by 35 per cent, to around 390 parts per million today. Stabilising the climate will require concentrations to climb to no higher than 450 parts per million. “What’s new about this research is that we have integrated the carbon cycle into our model to obtain the emissions data,” Roeckner says.
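The emissions pathway the article describes (a peak of 10 billion tonnes per year in 2015, a 56 percent cut by mid-century, near zero by 2100) can be sketched as a piecewise-linear curve. The straight-line interpolation between those anchor points is my assumption; the Max Planck carbon cycle model itself is far more detailed:

```python
# Sketch of the emissions targets described above as a piecewise-linear
# pathway. The linear interpolation between anchors is an assumption,
# not the Max Planck Institute's actual model output.
import numpy as np

anchors_yr = [2015, 2050, 2100]
anchors_gt = [10.0, 10.0 * (1 - 0.56), 0.0]  # Gt of emissions per year

years = np.arange(2015, 2101)
emissions = np.interp(years, anchors_yr, anchors_gt)

print(f"2050 target: {emissions[years == 2050][0]:.1f} Gt/yr")
print(f"2100 target: {emissions[-1]:.1f} Gt/yr")
```

Under this reading, the 2050 target works out to 4.4 Gt per year, 56 percent below the 10 Gt peak, with the curve reaching zero at 2100.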
<urn:uuid:3c48599d-00d0-4bef-8ac0-94d34a0f8472>
CC-MAIN-2013-20
http://www.greenbang.com/clocks-ticking-carbon-emissions-must-peak-by-2015_14888.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.934462
371
3.9375
4
[ "climate" ]
{ "climate": [ "carbon dioxide", "climate change", "climate system" ], "nature": [] }
{ "strong": 2, "weak": 1, "total": 3, "decision": "accepted_strong" }
Conservation work completed on one of Glasgow’s most historic buildings 27 February 2013 Glasgow Cathedral has recently received £30,000 worth of conservation work by Historic Scotland to repair the damage inflicted by last year’s gales. Despite being the only medieval cathedral on the Scottish mainland to have survived the 1560 Reformation virtually complete, the weathercock and spike on top of the landmark did not escape last year’s merciless strong winds unscathed. The spike of the church was no longer vertical due to corrosion and a closer examination of the weathercock showed damage to some of the rivets with the tail being completely blown off. A further structural inspection also found an area of the steeple was in need of re-pointing. Overseeing the conservation work was Historic Scotland's District Architect, Ian Lambie. While the conservation itself was a fairly simple task, it was made more complex by the height of the building and the narrow conditions. Ian Lambie said: “The only way of accessing the spire head was by climbing up inside the steeple using steep narrow ladders and squeezing and crawling through small windows at the apex. Making repairs at this sort of height has its challenges but it's always worth it for the view!” Now that the necessary repairs are complete, the weathercock should be able to withstand the worst of the wind and the re-pointing should hold for at least another ten years. Notes for editors: Historic Scotland around the web: - Historic Scotland is an executive agency of the Scottish Government charged with safeguarding the nation’s historic environment. The agency is fully accountable to Scottish Ministers and through them to the Scottish Parliament.
<urn:uuid:28ea8990-1e5d-41bc-a0c2-6f3d60fc4a2f>
CC-MAIN-2013-20
http://www.historic-scotland.gov.uk/largetext/news_article?articleid=38690
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702448584/warc/CC-MAIN-20130516110728-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.954779
351
2.625
3
[ "nature" ]
{ "climate": [], "nature": [ "conservation" ] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
Pacific ecosystem: An ochre sea star makes a kelp forest home in Monterey Bay, California. Image courtesy of Kip F. Evans Ocean Literacy - Essential Principle 5 The ocean supports a great diversity of life and ecosystems. Fundamental Concept 5a. Ocean life ranges in size from the smallest virus to the largest animal that has lived on Earth, the blue whale. Fundamental Concept 5b. Most life in the ocean exists as microbes. Microbes are the most important primary producers in the ocean. Not only are they the most abundant life form in the ocean, they have extremely fast growth rates and life cycles. Fundamental Concept 5c. Some major groups are found exclusively in the ocean. The diversity of major groups of organisms is much greater in the ocean than on land. Fundamental Concept 5d. Ocean biology provides many unique examples of life cycles, adaptations and important relationships among organisms (symbiosis, predator-prey dynamics and energy transfer) that do not occur on land. Fundamental Concept 5e. The ocean is three-dimensional, offering vast living space and diverse habitats from the surface through the water column to the seafloor. Most of the living space on Earth is in the ocean. Fundamental Concept 5f. Ocean habitats are defined by environmental factors. Due to interactions of abiotic factors such as salinity, temperature, oxygen, pH, light, nutrients, pressure, substrate and circulation, ocean life is not evenly distributed temporally or spatially, i.e., it is “patchy”. Some regions of the ocean support more diverse and abundant life than anywhere on Earth, while much of the ocean is considered a desert. Fundamental Concept 5g. There are deep ocean ecosystems that are independent of energy from sunlight and photosynthetic organisms. Hydrothermal vents, submarine hot springs, methane cold seeps, and whale falls rely only on chemical energy and chemosynthetic organisms to support life. Fundamental Concept 5h. 
Tides, waves and predation cause vertical zonation patterns along the shore, influencing the distribution and diversity of organisms. Fundamental Concept 5i. Estuaries provide important and productive nursery areas for many marine and aquatic species.
<urn:uuid:449397e8-9b26-4af0-be64-67eab4f80091>
CC-MAIN-2013-20
http://www.windows2universe.org/teacher_resources/main/frameworks/ol_ep5.html&lang=sp
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702810651/warc/CC-MAIN-20130516111330-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.89608
855
3.859375
4
[ "climate", "nature" ]
{ "climate": [ "carbon dioxide", "greenhouse gas", "methane" ], "nature": [ "ecosystem", "ecosystems" ] }
{ "strong": 5, "weak": 0, "total": 5, "decision": "accepted_strong" }
The World Heritage Committee, 1. Having examined Documents WHC-08/32.COM/8B and WHC-08/32.COM/INF.8B1, 2. Inscribes the Al-Hijr Archaeological Site (Madâin Sâlih), Saudi Arabia, on the World Heritage List on the basis of criteria (ii) and (iii); 3. Adopts the following Statement of Outstanding Universal Value: The archaeological site of Al-Hijr is a major site of the Nabataean civilisation, in the south of its zone of influence. Its integrity is remarkable and it is well conserved. It includes a major ensemble of tombs and monuments, whose architecture and decorations are directly cut into the sandstone. It bears witness to the encounter between a variety of decorative and architectural influences (Assyrian, Egyptian, Phoenician, Hellenistic), and the epigraphic presence of several ancient languages (Lihyanite, Thamudic, Nabataean, Greek, Latin). It bears witness to the development of Nabataean agricultural techniques using a large number of artificial wells in rocky ground. The wells are still in use. The ancient city of Hegra/Al-Hijr bears witness to the international caravan trade during late Antiquity. Criterion (ii): The site of Al-Hijr is located at a meeting point between various civilisations of late Antiquity, on a trade route between the Arabian Peninsula, the Mediterranean world and Asia. It bears outstanding witness to important cultural exchanges in architecture, decoration, language use and the caravan trade. Although the Nabataean city was abandoned during the pre-Islamic period, the route continued to play its international role for caravans and then for the pilgrimage to Mecca, up to its modernisation by the construction of the railway at the start of the 20th century. Criterion (iii): The site of Al-Hijr bears unique testimony to the Nabataean civilisation, between the 2nd and 3rd centuries BC and the pre-Islamic period, and particularly in the 1st century AD.
It is an outstanding illustration of the architectural style specific to the Nabataeans, consisting of monuments directly cut into the rock, and with facades bearing a large number of decorative motifs. The site includes a set of wells, most of which were sunk into the rock, demonstrating the Nabataeans' mastery of hydraulic techniques for agricultural purposes. The testimony borne by Al-Hijr to the Nabataean civilisation is of outstanding integrity and authenticity, because of its early abandonment and the benefit over a very long period of highly favourable climatic conditions. The State Party has begun to set up an extremely comprehensive Local Management Unit, and this process is now under way. The announced management plan should enable satisfactory protection of the property. With this in mind, the plan should organise systematic monitoring of the conservation of the site, and prepare a project for the presentation of the Outstanding Universal Value of the property for the benefit both of visitors and of the population of the region. 4. Requests the State Party to: - a) implement the established management plan; - b) in the framework of the management plan and the Local Management Unit, conduct regular monitoring; 5. Recommends that: - a) the new framework law on the Kingdom's Antiquities and Museums be promulgated; - b) care be taken to ensure that the development of reception facilities at the property and future developments in the wider surroundings of the property do not impact on the expression of the property's Outstanding Universal Value.
<urn:uuid:807378d0-43c0-45bd-9668-e889e9d9556f>
CC-MAIN-2013-20
http://whc.unesco.org/en/decisions/1480/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.914231
754
2.515625
3
[ "nature" ]
{ "climate": [], "nature": [ "conservation" ] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
From Sioux Falls to Rapid City, fire weather meteorologists are watching conditions closely. And until we receive widespread, heavy moisture, they'll be monitoring what is known as the Keetch-Byram Drought Index, or KBDI. It measures the amount of precipitation needed to return the soil to full saturation, on a scale of zero to 800, where the rating corresponds to a soil moisture deficit of zero to eight inches of water; the index value is the amount of precipitation needed to reduce the index to zero, which represents saturation. Much of KELOLAND is at 500 or above. A KBDI of 400 to 600 is typical of late summer and early fall. When it gets to 600 and above, intense, deep-burning fires can be expected, with an emphasis on new fires occurring downwind. The highest spots are in south central, north central and northeast South Dakota. Just this week, the area near Lake Andes is also considered at 800. It's an important number to know this time of year, whether you're harvesting or off road for hunting. © 2012 KELOLAND TV. All Rights Reserved.
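The index-to-moisture mapping described in the story is linear: each 100 points of KBDI corresponds to one inch of soil moisture deficit. A minimal sketch in Python, using the thresholds the article cites; the function names and category labels are illustrative, not an official fire-danger classification:

```python
def moisture_deficit_inches(kbdi: float) -> float:
    """Convert a KBDI value (0-800) to the implied soil moisture deficit in inches."""
    if not 0 <= kbdi <= 800:
        raise ValueError("KBDI is defined on a 0-800 scale")
    return kbdi / 100.0

def kbdi_category(kbdi: float) -> str:
    """Rough reading of a KBDI value, following the thresholds in the article:
    400-600 is typical of late summer and early fall; 600 and above is when
    intense, deep-burning fires can be expected."""
    if not 0 <= kbdi <= 800:
        raise ValueError("KBDI is defined on a 0-800 scale")
    if kbdi >= 600:
        return "high: intense, deep-burning fires expected"
    if kbdi >= 400:
        return "elevated: typical of late summer and early fall"
    return "low to moderate"
```

For example, a station reading of 500 implies a five-inch moisture deficit and falls in the elevated band, while the 800 reported near Lake Andes is fully saturated dryness at the top of the scale.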
<urn:uuid:18ab9e3d-e911-44bd-a8d5-0c3c3e67c967>
CC-MAIN-2013-20
http://www.keloland.com/newsdetail.cfm/fire-weather-index/?id=137711
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00003-ip-10-60-113-184.ec2.internal.warc.gz
en
0.950935
229
2.546875
3
[ "climate" ]
{ "climate": [ "drought" ], "nature": [] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
Recently there has been a substantial increase in the attention being paid to Chronic Wasting Disease (CWD) by the media, state and federal natural resource agencies, and hunters and outdoor enthusiasts. The Florida Fish and Wildlife Conservation Commission (FWC) developed this page to provide background information on CWD and explain what is being done to determine if the disease is in Florida, and if it is not, what we are doing to make sure it never gets here. What is Chronic Wasting Disease? Chronic Wasting Disease is a progressive, neurological, debilitating disease that belongs to a family of diseases known as Transmissible Spongiform Encephalopathies (TSEs). It is believed to be caused by an abnormal protein called a prion. CWD has been diagnosed in mule deer, white-tailed deer, and Rocky Mountain elk in captive herds and in the wild. Other cervids (antlered animals) may also be susceptible. CWD attacks the brains of infected animals, causing them to become emaciated, display abnormal behavior, and lose bodily functions. CWD is a fatal disease. Clinical signs include excessive salivation and grinding of teeth, increased drinking and urination, dramatic loss of weight and body condition, poor hair coat, staggering, and finally death. Behavioral changes, including decreased interaction with other animals, listlessness, lowering of the head, blank facial expression, and repetitive walking in set patterns also may occur. How is CWD transmitted? Transmission of CWD occurs by direct contact with body fluids (feces, urine, saliva) or by indirect contact (contaminated environment). The prion is persistent in the environment and premises may remain infective for years. Crowding, such as in deer farms or by artificial feeding, facilitates transmission. There is no evidence that CWD can be transmitted to livestock or humans. Where is CWD found? 
CWD has been found in captive and/or free-ranging cervids in Colorado, Illinois, Iowa, Kansas, Maryland, Michigan, Minnesota, Missouri, Montana, Nebraska, New Mexico, New York, North Dakota, Oklahoma, Pennsylvania, South Dakota, Texas, Utah, Virginia, West Virginia, Wisconsin, Wyoming, the Canadian provinces of Saskatchewan and Alberta, and South Korea. In the US, the core endemic area is contiguous portions of Wyoming, Colorado, and Nebraska. The prevalence of CWD in this area is approximately <1% - 15% in mule deer and <1% in elk, although this varies greatly by location. Virginia and West Virginia are the only Southeastern states where CWD has been detected. CWD has not been found in Florida. How is CWD diagnosed? Currently the only practical method for diagnosing CWD is through analysis of brain stem tissue or lymph nodes from dead animals. There is no practical live-animal test. A tonsillar biopsy may be done on live animals; however, this is difficult and deer have to be held until diagnosis. How is CWD controlled in a population? Control is extremely difficult once CWD becomes established in a natural population. This is because of the lack of a practical live-animal test, long incubation periods, and the persistence of the prion in the environment. Also, there is no vaccine or treatment once an animal gets the disease. If detected early in free-ranging populations, i.e., when prevalence is low, then eradication may be an achievable goal. This is not currently considered possible in the core endemic area; Wisconsin, however, has initiated an aggressive eradication program in the portion of the state where CWD has been found. What steps is FWC taking to determine if CWD is in Florida, and if it is not, what is being done to keep it from getting here? The FWC has initiated a comprehensive monitoring program to make sure CWD is not already in Florida. We are asking the general public to keep an eye out for deer showing symptoms indicative of CWD.
If you see a sickly, extremely skinny deer (see photo) report its location to the CWD hotline, toll free (866) 293-9282. If you harvest such a deer, do not handle it but call the CWD hotline. A biologist will collect the deer and take it to a lab for a necropsy. In addition, we will be collecting and testing tissue samples from hunter-killed deer during the hunting season. All CWD test results will be posted on this site as they are received. Click here for Florida CWD test results. The number one objective in CWD management is to prevent it from spreading into new areas. One theoretical mode of disease transmission is through infected deer, elk or moose carcasses. Therefore, in an effort to minimize the risk of the disease spreading, Florida has adopted regulations affecting the transportation of hunter-harvested deer, elk and moose from CWD-infected areas. It is illegal to bring into Florida carcasses of any species of the family Cervidae (e.g. deer, elk and moose) from 21 states and two Canadian provinces where CWD has been detected. At this time, CWD has been detected in Colorado, Illinois, Iowa, Kansas, Maryland, Michigan, Minnesota, Missouri, Montana, Nebraska, New Mexico, New York, North Dakota, Oklahoma, South Dakota, Texas, Utah, Virginia, West Virginia, Wisconsin, Wyoming, and the Canadian provinces of Saskatchewan and Alberta. Visit the United States Department of Agriculture's Web site for state-to-state CWD reports. Hunters still can bring back de-boned meat from any CWD-affected region, as well as finished taxidermy mounts, hides, skulls, antlers and teeth as long as all soft tissue has been removed. Whole, bone-in carcasses and parts are permitted to be brought back to Florida if they were harvested from non-affected CWD states. The most likely way CWD will get to Florida is through importation of live infected animals. 
To prevent this, live cervids (mule deer, white-tailed deer and elk) cannot be imported into Florida unless they come from a herd certified CWD-free by the Florida Department of Agriculture and Consumer Services. Any illegal importations of cervids should be reported to 1-888-404-FWCC. Public health and wildlife officials advise hunters to take the following precautions when pursuing or handling deer that may have been exposed to CWD: - Do not shoot, handle or consume any animal that is acting abnormally or appears to be sick. Contact the Florida Fish and Wildlife Conservation Commission (FWC) toll free at (866) 293-9282, if you see or harvest an animal that appears sick. - Wear latex or rubber gloves when field dressing your deer. - Bone out the meat from your animal. Don't saw through bone, and avoid cutting through the brain or spinal cord (backbone). - Minimize the handling of brain and spinal tissues. - Wash hands and instruments thoroughly after field dressing is completed. - Avoid consuming brain, spinal cord, eyes, spleen, tonsils and lymph nodes of harvested animals. (Normal field dressing coupled with boning out a carcass will remove most, if not all, of these body parts. Cutting away all fatty tissue will remove remaining lymph nodes.) - Avoid consuming the meat from any animal that tests positive for the disease. - If you have your deer commercially processed, request that your animal is processed individually, without meat from other animals being added to meat from your animal. For additional information on Chronic Wasting Disease check out these sites: USDA - Animal and Plant Health Inspection Services USGS - National Wildlife Health Center Southeastern Cooperative Wildlife Disease Study Florida Department of Health - Disease Control Chronic Wasting Disease Alliance Other FWC Resources: Florida Monitoring Program 2002-2009
<urn:uuid:3a747b1c-54e8-4a4d-8700-da93ee569a81>
CC-MAIN-2013-20
http://www.myfwc.com/wildlifehabitats/health-disease/cwd/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00004-ip-10-60-113-184.ec2.internal.warc.gz
en
0.935035
1,663
3.515625
4
[ "nature" ]
{ "climate": [], "nature": [ "conservation" ] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
U.S. Power Plant Carbon Emissions Zoom in 2007 WASHINGTON, DC, March 18, 2008 (ENS) – The biggest single-year increase in greenhouse gas emissions from U.S. power plants in nine years occurred in 2007, finds a new analysis by the nonprofit, nonpartisan Environmental Integrity Project. The finding of a 2.9 percent rise in carbon dioxide emissions over 2006 is based on an analysis of data from the U.S. Environmental Protection Agency. Now the largest factor in the U.S. contribution to climate change, the electric power industry's emissions of carbon dioxide, CO2, have risen 5.9 percent since 2002 and 11.7 percent since 1997, the analysis shows. Texas tops the list of the 10 states with the biggest one-year increases in CO2 emissions, with Georgia, Arizona, California, Pennsylvania, Michigan, Iowa, Illinois, Virginia and North Carolina close behind. The top three states – Texas, Georgia and Arizona – had the greatest increases in CO2 emissions on a one-, five- and 10-year basis. TXU's coal-fired Martin Lake power plant in east Texas. Director of the Environmental Integrity Project Eric Schaeffer said, "The current debate over global warming policy tends to focus on long-term goals, like how to reduce greenhouse gas emissions by 80 percent over the next 50 years. But while we debate, CO2 emissions from power plants keep rising, making an already dire situation worse." "Because CO2 has an atmospheric lifetime of between 50 and 200 years, today's emissions could cause global warming for up to two centuries to come," he warned. Data from 2006 show that the 10 states with the least efficient power production relative to resulting greenhouse gas emissions were North Dakota, Wyoming, Kentucky, Indiana, Utah, West Virginia, New Mexico, Colorado, Missouri, and Iowa. The report explains why national environmental groups are fighting to stop the construction of new conventional coal-fired power plants, which they say would make a bad situation worse.
“For example,” the report points out, “the eight planned coal-fired plants that TXU withdrew in the face of determined opposition in Texas would have added an estimated 64 million tons of CO2 to the atmosphere, increasing emissions from power plants in that state by 24 percent.” Some of the rise in CO2 emissions comes from existing coal-fired power plants, the analysis found, either because these plants are operating at increasingly higher capacities, or because these aging plants require more heat to generate electricity. “For example, all of the top 10 highest emitting plants in the nation either held steady or increased CO2 output from 2006 to 2007.” Robert W. Scherer Power Plant is a coal-fired plant just north of Macon, Georgia. It emits more carbon dioxide than any other point in the United States. Georgia Power's Scherer power plant near Macon, Georgia is the highest emitting plant in the nation. It pumped out 27.2 million tons of CO2 in 2007, up roughly two million tons from the year before. In view of these facts, the Environmental Integrity Project recommends that the nation's oldest and dirtiest power plants should be retired and replaced with cleaner sources of energy. That will require accelerating the development of wind power and other renewable sources of energy. Another good solution is cutting greenhouse gases quickly by reducing the demand for electricity, the authors advise. Smarter building codes and funding low-cost conservation efforts, such as weatherization of low-income homes and purchase and installation of more efficient home and business appliances, will reduce demand and yield greenhouse gas benefits. Texas tops every state measurement in the report, from the most carbon dioxide measured in total tons to the largest increases in CO2 emissions over the last five years between 2002 and 2007.
Ken Kramer, director of the Lone Star chapter of the Sierra Club based in Austin, Texas, says his state not only has more emissions than any other state – it has solutions to offer, such as a recent boom in wind power installations. “The bad news is that Texas is #1 in carbon emissions among the 50 states, and our emissions have grown in recent years,” Kramer said. “The good news is that Texas has the potential to play a major role in addressing global warming if we embrace smart energy solutions such as energy efficiency and renewable energy, solutions which pose tremendous economic as well as environmental benefits.” In Des Moines, Mark Kresowik, Iowa organizer of the Sierra Club’s National Coal Campaign, said, “Energy efficiency and renewable energy are powering a renaissance in rural Iowa and creating thousands of new manufacturing jobs for our state. By rejecting coal plants and reducing pollution through energy efficiency and renewable energy our states will prosper and attract new businesses and young workers for the future.” The consumption of electricity accounted for more than 2.3 billion tons of CO2 in 2006, or more than 39.5 percent of total emissions from human sources, according to the U.S. Department of Energy. Coal-fired power plants alone released more than 1.9 billion tons, or nearly one third of the U.S. total. The Department of Energy projects that carbon dioxide emissions from power generation will increase 19 percent between 2007 and 2030, due to new or expanded coal plants. An additional 4,115 megawatts of new coal-fired generating capacity was added between 2000 and 2007, with another 5,000 megawatts expected by 2012.
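The growth figures quoted throughout the report are ordinary year-over-year percent changes. A minimal sketch of the arithmetic, with a hypothetical baseline figure used purely for illustration (the report's actual analysis is based on EPA tonnage data, not these numbers):

```python
def pct_change(old: float, new: float) -> float:
    """Percent change from one year's emissions total to the next."""
    return (new - old) / old * 100.0

# Hypothetical example: a 2.9 percent rise over an assumed 2006 baseline.
tons_2006 = 2.50  # billion tons, illustrative only
tons_2007 = tons_2006 * 1.029
print(round(pct_change(tons_2006, tons_2007), 1))  # 2.9
```

The same formula applied over longer windows yields the multi-year figures, e.g. a 5.9 percent rise since 2002 or 11.7 percent since 1997.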
<urn:uuid:dc1a8aec-de58-4413-8512-51862619d9e3>
CC-MAIN-2013-20
http://www.sundancechannel.com/blog/2008/03/us-power-plant-carbon-emissions-zoom-in-2007/
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00004-ip-10-60-113-184.ec2.internal.warc.gz
en
0.936904
1,150
3.25
3
[ "climate", "nature" ]
{ "climate": [ "carbon dioxide", "climate change", "co2", "global warming", "greenhouse gas", "renewable energy" ], "nature": [ "conservation" ] }
{ "strong": 7, "weak": 0, "total": 7, "decision": "accepted_strong" }
Humans are intimately connected with the physical environment, even though we have only been present for a fraction of the vast history of the Earth. We strive to understand how the Earth has evolved since its formation over 4 billion years ago and what types of processes have fostered these changes. Our knowledge of the Earth is critical, not only for piecing together its history, but also to aid in the understanding of issues relevant to our present-day lives, such as: availability of natural resources, pollution, climate change, and natural hazards. During this course, we will perform a general survey of the physical Earth. We will examine the minerals and rocks of which the solid Earth is composed, the processes that generate Earth's landforms, natural hazards associated with geologic processes, geologic time, and surface processes (e.g., glaciers, streams, groundwater). Final Exam (May 17: 10:30-12:30) The final exam will be on materials presented during the final quarter of the course (lectures and materials in chapters 16, 15, 18, 19, 12). The exam will be comprehensive and will include material from the entire course. To get the most recent copies of the study guides click at left. The field trip to Great Falls (MD) took place on Saturday April 22. About 100 students attended on a morning that set a record for rainfall at Dulles Airport (over 3 inches of rain). The field trip was the only opportunity for extra credit in the course. To see some photos from previous trips, click here.
<urn:uuid:a91b04b9-4328-4883-b634-7987811bb953>
CC-MAIN-2013-20
http://www.geol.umd.edu/~piccoli/geol100-2006/100.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00004-ip-10-60-113-184.ec2.internal.warc.gz
en
0.943719
316
3.515625
4
[ "climate" ]
{ "climate": [ "climate change" ], "nature": [] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
Original URL: http://www.theregister.co.uk/2006/08/21/mars_geysers/ Martian pole freckled with geysers Unlike anything on Earth Every spring, the southern polar cap on Mars almost fizzes with carbon dioxide, as the surface is broken by hundreds of geysers throwing sand and dust hundreds of feet into the Martian "air". The discovery was announced in the journal Nature by researchers at Arizona State University, based on data from the Thermal Emission Imaging System on the Mars Odyssey orbiter. Images sent back by the probe showed that as the sun began to warm the pole, the polar cap began to break out in dark spots. Over the days and weeks that followed, these spots formed fan-like markings and spidery patterns. As the sun rose higher in the Martian sky, the spots and fans became more numerous. "Originally, scientists thought the spots were patches of warm, bare ground exposed as the ice disappeared," said lead scientist Phil Christensen. "But observations made with THEMIS on NASA's Mars Odyssey orbiter told us the spots were nearly as cold as the carbon dioxide ice, which is at minus 198 degrees Fahrenheit." The team concluded that the dark spots were in fact geysers, and the fans that appeared were caused by the debris from the eruptions. Christensen said: "If you were there, you'd be standing on a slab of carbon-dioxide ice. Looking down, you would see dark ground below the three-foot-thick ice layer. "The ice slab you're standing on is levitated above the ground by the pressure of gas at the base of the ice." He explains that as the sunlight hits the region in the spring, it warms the dark ground enough that the ice touching the ground is vaporised. The gas builds up under the ice until it is highly pressurised and finally breaks through the surface layer. As the gas escapes, it carries the smaller, finer particles of the soil along with it, forming grooves under the ice.
This "spider" effect indicates a spot where a geyser is established, and will form again the following year. ®
<urn:uuid:4bfc6c68-3f68-4e10-b04d-51c5dc0dab2e>
CC-MAIN-2013-20
http://www.theregister.co.uk/2006/08/21/mars_geysers/print.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00004-ip-10-60-113-184.ec2.internal.warc.gz
en
0.971399
454
3.140625
3
[ "climate" ]
{ "climate": [ "carbon dioxide" ], "nature": [] }
{ "strong": 1, "weak": 0, "total": 1, "decision": "accepted_strong" }
The loss of biodiversity affects us in many ways. For some of us, its loss is felt in the same way that we regret the destruction of a great work of art: it may not affect us directly, but the indirect impact is strong. For others, its loss is felt in changes in lifestyles and livelihoods: the measurable costs are high. Striking the right balance between the conservation/sustainable use and the loss of biodiversity requires accounting for all the impacts of its destruction. Weighing the loss against any potential benefits will ensure that the social as well as economic well-being of everyone is at the best level possible. Market-based economic systems have the potential to ensure that such a balancing occurs, but require that all the impacts of its loss, or use, have been fully internalised into market transactions. This book shows how public policy in the form of market creation can be used to internalise the loss of biodiversity. It promotes the use of markets to ensure that our collective preferences for conservation and sustainable use are reflected in economic outcomes. Executive Summary available. How to obtain this publication Readers can access the full version of Handbook of Market Creation for Biodiversity: Issues in Implementation choosing from the following options: - Subscribers and readers at subscribing institutions can access the online edition via SourceOECD, our online library. - Non-subscribers can purchase the PDF e-book and/or paper copy via our Online Bookshop. - The report is available to journalists from the OECD Media Relations Division. Executive Summary Handbook of Market Creation for Biodiversity: Issues in Implementation
<urn:uuid:224c42a0-6dca-4232-8283-dbb0b01bdf4e>
CC-MAIN-2013-20
http://www.oecd.org/environment/resources/handbookofmarketcreationforbiodiversityissuesinimplementation.htm
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00004-ip-10-60-113-184.ec2.internal.warc.gz
en
0.924652
334
2.875
3
[ "nature" ]
{ "climate": [], "nature": [ "biodiversity", "conservation" ] }
{ "strong": 2, "weak": 0, "total": 2, "decision": "accepted_strong" }