| text (string) | id (string) | dump (string) | url (string) | file_path (string) | language (string) | language_score (float64) | token_count (int64) | score (float64) | int_score (int64) | tags (list) | matched_keywords (dict) | match_summary (dict) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
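Each row below fills these columns in order: the text cell carries the article body, and the trailing cells carry the crawl and keyword-match metadata. As a minimal, illustrative sketch (assuming the rows have been loaded as Python dicts keyed by the column names above; the helper name and threshold are mine, not part of any dataset tooling), filtering on the match_summary cell might look like:

```python
# Illustrative only: filter rows shaped like the records in this dump.
# Assumes each row is a dict keyed by the column names in the header;
# sample_row copies the tags/match_summary cells of one record below.
sample_row = {
    "tags": ["climate", "nature"],
    "match_summary": {"strong": 2, "weak": 0, "total": 2, "decision": "accepted_strong"},
}

def is_accepted_strong(row: dict, min_strong: int = 1) -> bool:
    """True if match_summary marks the row accepted with enough strong keyword hits."""
    summary = row.get("match_summary", {})
    return summary.get("decision") == "accepted_strong" and summary.get("strong", 0) >= min_strong

print(is_accepted_strong(sample_row))  # True
```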
The Solar and Heliospheric Observatory (SOHO) spacecraft is expected to discover its 1,000th comet this summer.
The SOHO spacecraft is a joint effort between NASA and the European Space Agency. It has accounted for approximately one-half of all comet discoveries with computed orbits in the history of astronomy.
"Before SOHO was launched, only 16 sun grazing comets had been discovered by space observatories. Based on that experience, who could have predicted SOHO would discover more than 60 times that number, and in only nine years," said Dr. Chris St. Cyr. He is senior project scientist for NASA's Living With a Star program at the agency's Goddard Space Flight Center, Greenbelt, Md. "This is truly a remarkable achievement!"
About 85 percent of the comets SOHO has discovered belong to the Kreutz group of sun grazing comets, so named because their orbits take them very close to the sun. The Kreutz sun grazers pass within 500,000 miles of the sun's visible surface. Mercury, the planet closest to the sun, is about 36 million miles from the solar surface.
SOHO has also been used to discover three other well-populated comet groups: the Meyer, with at least 55 members; the Marsden, with at least 21; and the Kracht, with 24. These groups are named after the astronomers who suggested that the comets in each group are related because they share similar orbits.
Many comet discoveries were made by amateurs using SOHO images on the Internet. SOHO comet hunters come from all over the world. The United States, United Kingdom, China, Japan, Taiwan, Russia, Ukraine, France, Germany, and Lithuania are among the many countries whose citizens have used SOHO to chase comets.
Almost all of SOHO's comets are discovered using images from its Large Angle and Spectrometric Coronagraph (LASCO) instrument. LASCO is used to observe the faint, multimillion-degree outer atmosphere of the sun, called the corona. A disk in the instrument is used to make an artificial eclipse, blocking direct light from the sun, so the much fainter corona can be seen. Sun grazing comets are discovered when they enter LASCO's field of view as they pass close by the star.
"Building coronagraphs like LASCO is still more art than science, because the light we are trying to detect is very faint," said Dr. Joe Gurman, U.S. project scientist for SOHO at Goddard. "Any imperfections in the optics or dust in the instrument will scatter the light, making the images too noisy to be useful. Discovering almost 1,000 comets since SOHO's launch on December 2, 1995 is a testament to the skill of the LASCO team."
SOHO successfully completed its primary mission in April 1998. It has enough fuel to remain on station, hunting comets for decades, if LASCO continues to function.
For information about SOHO on the Internet, visit:
|
<urn:uuid:78cbe1bd-1849-4138-b59a-5521e93122a3>
|
CC-MAIN-2013-20
|
http://phys.org/news4969.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.943417
| 663
| 4
| 4
|
[
"climate"
] |
{
"climate": [
"climate change"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
- Yes, this is a good time to plant native grass seed in the ground. You may have to supplement with irrigation if the rains stop before the seeds have germinated and made good root growth.
- Which grasses should I plant? The wonderful thing about California is that we have so many different ecosystems; the challenging thing about California is that we have so many different ecosystems. It's impossible for us to know definitively which particular bunchgrasses used to grow or may still grow at your particular site, but to make the best guesses possible, we recommend the following:
- Best-case scenario is to have bunchgrasses already on the site that you can augment through proper mowing or grazing techniques.
- Next best is to have a nearby site with native bunchgrasses and similar elevation, aspect, and soils, which you can use as a model.
- After that, go to sources such as our pamphlet Distribution of Native Grasses of California, by Alan Beetle, $7.50.
- Also reference local floras of your area, available through the California Native Plant Society.
Container growing: We grow seedlings in pots throughout the season, but ideal planning for growing your own plants in pots is to sow six months before you want to put them in the ground. Though restorationists frequently use plugs and liners (long narrow containers), and they may be required for large areas, we prefer growing them the horticultural way: first in flats, then transplanting into 4" pots, and when they are sturdy little plants, into the ground. Our thinking is that since they are not tap-rooted but fibrous-rooted (one of their main advantages as far as deep erosion control is concerned) square 4" pots suit them, and so far our experiences have borne this out.
In future newsletters, we will be reporting on the experiences and opinions of Marin ranchers Peggy Rathmann and John Wick, who are working with UC Berkeley researcher Whendee Silver on a study of carbon sequestration and bunchgrasses. So far, it's very promising. But more on that later. For now, I'll end with a quote from Peggy, who grows, eats, nurtures, lives, and sleeps bunchgrasses, for the health of their land and the benefit of their cows.
"It takes a while. But it's so worth it."
|
<urn:uuid:c183066d-32a9-42eb-91b6-191fdb0980c2>
|
CC-MAIN-2013-20
|
http://judithlarnerlowry.blogspot.com/2009/02/simplifying-california-native.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.956731
| 495
| 2.515625
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"carbon sequestration"
],
"nature": [
"ecosystems"
]
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
by Piter Kehoma Boll
Let's expand the universe of Friday Fellow by presenting a plant for the first time! And what could be a better choice to start than the famous Grandidier's Baobab? Belonging to the species Adansonia grandidieri, this tree is one of the trademarks of Madagascar, and it is the biggest species of the genus found on the island.
Reaching up to 30 m in height, with a massive trunk branched only at the very top, it has a unique look and is found only in southwestern Madagascar. Yet despite being so attractive and famous, it is classified as an endangered species on the IUCN Red List, its declining population threatened by agricultural expansion.
This tree is also heavily exploited: its vitamin C-rich fruits can be eaten fresh, and its seeds are used to extract oil. The bark can be used to make ropes, and many trees bear scars from the extraction of parts of their bark.
With its fibrous trunk, the baobab is able to cope with drought, apparently by storing water inside it. The species has no remaining seed dispersers, which may be due to the extinction of the original disperser through human activities.
Originally occurring close to temporary water bodies in the dry deciduous forest, many large trees today are found in terrain that is dry year-round. This is probably due to human impact that changed the local ecosystem, allowing it to become drier than it was. Those areas have little or no ability to regenerate and will probably never return to what they were; once the old trees die, there will be no more baobabs there.
- β -
Baum, D. A. (1995). A Systematic Revision of Adansonia (Bombacaceae) Annals of the Missouri Botanical Garden, 82, 440-470 DOI: 10.2307/2399893
Wikipedia. Adansonia grandidieri. Available online at <http://en.wikipedia.org/wiki/Adansonia_grandidieri>. Access on October 02, 2012.
World Conservation Monitoring Centre 1998. Adansonia grandidieri. In: IUCN 2012. IUCN Red List of Threatened Species. Version 2012.1. <www.iucnredlist.org>. Access on October 02, 2012.
|
<urn:uuid:10459212-d96b-47fa-9ead-4447c5ba731f>
|
CC-MAIN-2013-20
|
http://earthlingnature.wordpress.com/2012/10/05/friday-fellow-grandidiers-baobab/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.923648
| 488
| 3.703125
| 4
|
[
"climate",
"nature"
] |
{
"climate": [
"drought"
],
"nature": [
"conservation",
"ecosystem",
"endangered species"
]
}
|
{
"strong": 4,
"weak": 0,
"total": 4,
"decision": "accepted_strong"
}
|
Ki Tisa (Mitzvot)
For more teachings on this portion, see the archives to this blog, below at March 2006.
This week's parasha is best known for the dramatic and richly meaningful story of the Golden Calf and the Divine anger, of Moses' pleading on behalf of Israel, and the eventual reconciliation in the mysterious meeting of Moses with God in the Cleft of the Rock, subjects about which I've written at length, from various aspects, in previous years. Yet the first third of the reading (Exod 30:11-31:17) is concerned with various practical mitzvot, mostly focused on the ritual worship conducted in the Temple, which tend to be skimmed over in light of the intense interest of the Calf story. As this year we are concerned specifically with the mitzvot in each parasha, I shall focus on this section.
These include: the giving by each Israelite [male] of a half-shekel to the Temple; the making of the laver, from which the priests wash their hands and feet before engaging in Divine service; the compounding of the incense and of the anointing oil; and the Shabbat. I shall focus here upon the washing of the hands.
Hand-washing is a familiar Jewish ritual: it is, in fact, the first act performed by pious Jews upon awakening in the morning (some people even keep a cup of water next to their beds, so that they may wash their hands before taking even a single step); one performs a ritual washing of the hands before eating bread; before each of the daily prayers; etc. The section here dealing with the laver in the Temple (Exod 30:17-21) is also one of the four portions from the Torah recited by many each morning, as part of the section of the liturgy known as korbanot, chapters of Written and Oral Torah reminiscent of the ancient sacrificial system, that precede Pesukei de-Zimra.
Sefer ha-Hinukh, at §106, explains the washing of hands as an offshoot of the honor due to the Temple and its service, one of many laws intended to honor, magnify, and glorify the Temple. Even if the priest was pure and clean, he must wash (literally, "sanctify") his hands before engaging in avodah. This simple gesture of purification served as a kind of separation between the Divine service and everyday life. It added a feeling of solemnity, of seriousness, a sense that one was engaged in something higher, in some way separate from the mundane activities of regular life. (One hand-washing by kohanim, in the morning, was sufficient, unless they left the Temple grounds or otherwise lost the continuity of their sacred activity.) Our own netilat yadaim, whether before prayer or breaking bread, may be seen as a kind of halakhic carryover from the Temple service, albeit on the level of Rabbinic injunction.
What is the symbolism of purifying one's hands? Water, as a flowing element, as a solvent that washes away many of the things with which it comes in contact, is at once a natural symbol of both purity and the renewal of life. Mayim Hayyim, "living waters," is an age-old association. Torah is compared to water; water, constantly flowing, is constantly returning to its source. At the End of Days, "the land will be filled with knowledge of the Lord, like waters going down to the sea." A small part of this is hinted at in this simple, everyday gesture.
"See that this nation is Your people"
But I cannot pass over Ki Tisa without some comment on the incident of the Golden Calf and its ramifications. This week, reading through the words of the parasha in preparation for a shiur (what Ruth Calderon, founder of Alma, a secularist-oriented center for the study of Judaism in Tel Aviv, called "barefoot reading," that is, naïve, without preconceptions), I discovered something utterly simple that I had never noticed before in quite the same way.
At the beginning of the Calf incident, God tells Moses, who has been up on the mountain with Him, "Go down, for your people have spoiled" (32:7). A few verses later, when God asks leave of Moses (!) to destroy them, Moses begs for mercy on behalf of the people with the words "Why should Your anger burn so fiercely against Your people..." (v. 11). That is, God calls them Moses' people, while Moses refers to them as God's people. Subsequent to this exchange, each of them refers to them repeatedly in the third person, as "the people" or "this people" (העם; העם הזה). Neither of them refers to them, as God did in the initial revelation to Moses at the burning bush (Exodus 3:7 and passim), as "my people," or with the dignified title "the children of Israel," as if both felt a certain alienation, a distance from this tumultuous, capricious bunch. Only towards the end, after God agrees not to destroy them, but still states "I will not go up with them," promising instead to send an angel, does Moses say "See, that this nation is Your people" (ראה כי עמך הגוי הזה; 33:13).
What does all this signify? Reading the peshat carefully, there is one inevitable conclusion: that God wished to nullify His covenant with the people Israel. It is in this that there lies the true gravity, and uniqueness, of the Golden Calf incident. We are not speaking here merely of threats of punishment, however harsh, such as drought, famine, pestilence, enemy attacks, or even exile and slavery, as we read elsewhere in the Bible, for example in the two great Imprecations (tokhahot) in Lev 26 and Deut 28, or in the words of the prophets during the First Temple. There, the implicit message is that, after a period of punishment, a kind of moral purgation through suffering, things will be restored as they were. Here, the very covenant itself, the very existence of an intimate connection with God, hangs in the balance. God tells Moses, "I shall make of you a people," i.e., instead of them.
This, it seems to me, is the point of the second phase of this story. Moses breaks the tablets; he and his fellow Levites go through the camp killing all those most directly implicated in worshipping the Calf; God recants and agrees not to destroy the people. However, "My angel will go before them," but "I will not go up in your midst" (33:2, 3). This should have been of some comfort; yet this tiding is called "this bad thing," the people mourn, and remove the ornaments they had been wearing until then. Evidently, they understood the absence of God's presence or "face" as a grave step; His being with them was everything. That is the true importance of the Sanctuary in the desert and the Tent of Meeting, where Moses speaks with God in the pillar of cloud (33:10). God was present with them there in a tangible way, in a certain way continuing the epiphany at Sinai. All that was threatened by this new declaration.
Moses' second round of appeals to God, in Exod 33:12-23, focuses on bringing God, as it were, to a full reconciliation with the people. This is the significance of the Thirteen Qualities of Mercy, of what I have called the Covenant in the Cleft of the Rock, the "faith of Yom Kippur" as opposed to that of Shavuot (see HY I: Ki Tisa; and note Prof. Jacob Milgrom's observation that this chapter stands in the exact center, in a literary sense, of the unit known as the Hexateuch: Torah plus the Book of Joshua).
But I would add two important points. One, that this is the first place in the Torah where we read about sin followed by reconciliation. After Adam and Eve ate of the fruit of the Garden, they were punished without hope of reprieve; indeed, their "punishment" reads very much like a description of some basic aspects of the human condition itself. Cain, after murdering Abel, was banished, made to wander the face of the earth. The sin of the brothers in selling Joseph, and their own sense of guilt, is a central factor in their family dynamic from then on, but there is nary a word of God's response or intervention. It would appear that God's initial expectation in the covenant at Sinai was one of total loyalty and fidelity. The act of idolatry was an unforgivable breach of the covenant, much as adultery is generally perceived as a fundamental violation of the marital bond.
Moses, in persuading God to recant of His jealousy and anger, to give the faithless people another chance, is thus introducing a new concept: of a covenant that includes the possibility of even the most serious transgressions being forgiven; of the knowledge that human beings are fallible, and that teshuvah and forgiveness are essential components of any economy of men living before a demanding God.
The second, truly astonishing point is the role played by Moses in all this. Moshe Rabbenu, "the man of God," is not only the great teacher of Israel, the channel through which they learn the Divine Torah, but also, as it were, one who teaches God Himself. It is God who "reveals His Qualities of Mercy" at the Cleft of the Rock; but without Moses cajoling, arguing, persuading (and note the numerous midrashim around this theme), "were it not for my servant Moses who stood in the breach," all this would not have happened. It was Moses who elicited this response and who, so to speak, pushed God Himself to this new stage in His relation with Israel: to give up His expectations of perfection from His covenanted people, and to understand that living within a covenant means not rigid adherence to a set of laws, but a living relationship with real people, taking the bad with the good. (Again, the parallel to human relationships is obvious.)
|
<urn:uuid:c4c19472-691a-44c6-a55b-21fbb183475b>
|
CC-MAIN-2013-20
|
http://hitzeiyehonatan.blogspot.com/2008_02_01_archive.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.966594
| 2,269
| 2.671875
| 3
|
[
"climate"
] |
{
"climate": [
"drought"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
"A remote Indian village is responding to global warming-induced water shortages by creating large masses of ice, or 'artificial glaciers,' to get through the dry spring months.
Located on the western edge of the Tibetan plateau, the village of Skara in the Ladakh region of India is not a common tourist destination.
"It's beautiful, but really remote and difficult to get to," said Amy Higgins, a graduate student at the Yale School of Forestry & Environmental Studies who worked on the artificial glacier project.
"A lot of people, when I met them in Delhi and I said I was going to Ladakh, they looked at me like I was going to the moon," said Higgins, who is also a National Geographic grantee.
People in Skara and surrounding villages survive by growing crops such as barley for their own consumption and for sale in neighboring towns. In the past, water for the crops came from meltwater originating in glaciers high in the Himalaya."
Read more: National Geographic
|
<urn:uuid:5050ac83-4770-4e9c-9b44-38ba46d2466e>
|
CC-MAIN-2013-20
|
http://peakwater.org/2012/02/artificial-glaciers-water-crops-in-indian-highlands/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.973301
| 226
| 3.78125
| 4
|
[
"climate"
] |
{
"climate": [
"global warming"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Since 1993, RAN's Protect-an-Acre program (PAA) has distributed more than one million dollars in grants to more than 150 frontline communities, Indigenous-led organizations, and allies, helping their efforts to secure protection for millions of acres of traditional territory in forests around the world.
Rainforest Action Network believes that Indigenous peoples are the best stewards of the world's rainforests and that frontline communities organizing against the extraction and burning of dirty fossil fuels deserve the strongest support we can offer. RAN established the Protect-an-Acre program to protect the world's forests and the rights of their inhabitants by providing financial aid to traditionally under-funded organizations and communities in forest regions.
Indigenous and frontline communities suffer disproportionate impacts to their health, livelihood and culture from extractive industry mega-projects and the effects of global climate change. That's why Protect-an-Acre provides small grants to community-based organizations, Indigenous federations and small NGOs that are fighting to protect millions of acres of forest and keep millions of tons of CO2 in the ground.
Our grants support organizations and communities that are working to regain control of and sustainably manage their traditional territories through land title initiatives, community education, development of sustainable economic alternatives, and grassroots resistance to destructive industrial activities.
PAA is an alternative to "buy-an-acre" programs that seek to provide rainforest protection by buying tracts of land, but which often fail to address the needs or rights of local Indigenous peoples. Uninhabited forest areas often go unprotected, even if purchased through a buy-an-acre program. It is not uncommon for loggers, oil and gas companies, cattle ranchers, and miners to illegally extract resources from so-called "protected" areas.
Traditional forest communities are often the best stewards of the land because their way of life depends upon the health of their environment. A number of recent studies add to the growing body of evidence that Indigenous peoples are better protectors of their forests than governments or industry.
Based on the success of Protect-an-Acre, RAN launched The Climate Action Fund (CAF) in 2009 as a way to direct further resources and support to frontline communities and Indigenous peoples challenging the fossil fuel industry.
Additionally, RAN has been a Global Advisor to Global Greengrants Fund (GGF) since 1995, identifying recipients for small grants to mobilize resources for global environmental sustainability and social justice using the same priority and criteria as we use for PAA and CAF.
Through these three programs each year we support grassroots projects that result in at least:
|
<urn:uuid:995ec683-d967-4f36-82d9-547c9ea3d646>
|
CC-MAIN-2013-20
|
http://ran.org/protect-an-acre
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.938919
| 540
| 2.671875
| 3
|
[
"climate"
] |
{
"climate": [
"climate change",
"co2"
],
"nature": []
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
Karuk Tribe: Learning from the First Californians for the Next California
Editor's Note: This is part of a series, Facing the Climate Gap, which looks at grassroots efforts in California low-income communities of color to address climate change and promote climate justice.
This article was published in collaboration with GlobalPossibilities.org.
The three sovereign entities in the United States are the federal government, the states and indigenous tribes, but according to Bill Tripp, a member of the Karuk Tribe in Northern California, many people are unaware of both the sovereign nature of tribes and the wisdom they possess when it comes to issues of climate change and natural resource management.
"A lot of people don't realize that tribes even exist in California, but we are stakeholders too, with the rights of indigenous peoples," says Tripp.
Tripp is an Eco-Cultural Restoration specialist at the Karuk Tribe Department of Natural Resources. In 2010, the tribe drafted an Eco-Cultural Resources Management Plan, which aims to manage and restore "balanced ecological processes utilizing Traditional Ecological Knowledge supported by Western Science." The plan addresses environmental issues that affect the health and culture of the Karuk tribe and outlines ways in which tribal practices can contribute to mitigating the effects of climate change.
Before climate change became a hot topic in the media, many indigenous and agrarian communities, because of their dependence upon and close relationship to the land, began to notice troubling shifts in the environment such as intense drought, frequent wildfires, scarcer fish flows and erratic rainfall.
There are over 100 government recognized tribes in California, which represent more than 700,000 people. The Karuk is the second largest Native American tribe in California and has over 3,200 members. Their tribal lands include over 1.48 million acres within and around the Klamath and Six Rivers National Forests in Northwest California.
Tribes like the Karuk are among the hardest hit by the effects of climate change, despite their traditionally low-carbon lifestyles. The Karuk, in particular, have experienced dramatic environmental changes in their forestlands and fisheries as a result of both climate change and misguided Federal and regional policies.
The Karuk have long depended upon the forest to support their livelihood, cultural practices and nourishment. While wildfires have always been a natural aspect of the landscape, recent studies have shown that fires in northwestern California forests have risen dramatically in frequency and size due to climate related and human influences. According to the California Natural Resources Agency, fires in California are expected to increase 100 percent due to increased temperatures and longer dry seasons associated with climate change.
Some of the other most damaging human influences to the Karuk include logging activities, which have depleted old growth forests, and fire suppression policies created by the U.S. Forest Service in the 1930s that have limited cultural burning practices. Tripp says these policies have been detrimental to tribal traditions and the forest environment.
"It has been huge to just try to adapt to the past 100 years of policies that have led us to where we are today. We have already been forced to modify our traditional practices to fit the contemporary political context," says Tripp.
Further, the construction of dams along the Klamath River by PacifiCorp (a utility company) has impeded access to salmon and other fish that are central to the Karuk diet. Fishing regulations have also had a negative impact.
Though the Karuk's dependence on the land has left them vulnerable to the projected effects of climate change, it has also given them and other indigenous groups incredible knowledge to impart to western climate science. Historically, though, tribes have been largely left out of policy processes and decisions. The Karuk decided to challenge this historical pattern of marginalization by formulating their own Eco-Cultural Resources Management Plan.
The Plan provides over twenty "Cultural Environmental Management Practices" that are based on traditional ecological knowledge and the "World Renewal" philosophy, which emphasizes the interconnectedness of humans and the environment. Tripp says the Plan was created in the hopes that knowledge passed down from previous generations will help strengthen Karuk culture and teach the broader community to live in a more ecologically sound way.
"It is designed to be a living document... We are building a process of comparative learning, based on the principles and practices of traditional ecological knowledge, to revitalize culturally relevant information as passed through oral transmission and intergenerational observations," says Tripp.
One of the highlights of the plan is to re-establish traditional burning practices in order to decrease fuel loads and the risk for more severe wildfires when they do happen. Traditional burning was used by the Karuk to burn off specific types of vegetation and promote continued diversity in the landscape. Tripp notes that these practices are an example of how humans can play a positive role in maintaining a sound ecological cycle in the forests.
"The practice of utilizing fire to manage resources in a traditional way not only improves the use quality of forest resources, it also builds and maintains resiliency in the ecological process of entire landscapes," explains Tripp.
Another crucial aspect of the Plan is the life cycle of fish, like salmon, that are central to Karuk food traditions and ecosystem health. Traditionally, the Karuk regulated fishing schedules to allow the first salmon to pass, ensuring that those most likely to survive made it to prime spawning grounds. There were also designated fishing periods and locations to promote successful reproduction. Tripp says regulatory agencies have established practices that are harmful to this cycle.
"Today, regulatory agencies permit the harvest of fish that would otherwise be protected under traditional harvest management principles, and close the harvest season when the fish least likely to reach the very upper river reaches are passing through," says Tripp.
The Karuk tribe is now working closely with researchers from universities such as University of California, Berkeley and the University of California, Davis as well as public agencies so that this traditional knowledge can one day be accepted by mainstream and academic circles dealing with climate change mitigation and adaptation practices.
According to the Plan, these land management practices are more cost effective than those currently practiced by public agencies; and, if implemented, they will greatly reduce taxpayer cost burdens and create employment. The Karuk hope to create a workforce development program that will hire tribal members to implement the plan's goals, such as multi-site cultural burning practices.
The Plan is still a long way from full realization and Federal recognition. According to the National Indian Forest Resources Management Act and the National Environmental Policy Act, it must go through a formal review process. Besides that, the Karuk Tribe is still solidifying funding to pursue its goals.
The work of California's environmental stewards will always be in demand, and the Karuk are taking the lead in showing how community wisdom can be used to generate an integrated approach to climate change. Such integrated and community-engaged policy approaches are rare throughout the state but are emerging in other areas. In Oakland, for example, the Oakland Climate Action Coalition engaged community members and a diverse group of social justice, labor, environmental, and business organizations to develop an Energy and Climate Action Plan that outlines specific ways for the City to reduce greenhouse gas emissions and create a sustainable economy.
In the end, Tripp hopes the Karuk Plan will not only inspire others and address the global environmental plight, but also help to maintain the very core of his people. In his words: "Being adaptable to climate change is part of that, but primarily it is about enabling us to maintain our identity and the people in this place in perpetuity."
Dr. Manuel Pastor is Professor of Sociology and American Studies & Ethnicity at the University of Southern California, where he also directs the Program for Environmental and Regional Equity and co-directs USC's Center for the Study of Immigrant Integration. His most recent books include Just Growth: Inclusion and Prosperity in America's Metropolitan Regions (Routledge, 2012; co-authored with Chris Benner), Uncommon Common Ground: Race and America's Future (W.W. Norton, 2010; co-authored with Angela Glover Blackwell and Stewart Kwoh), and This Could Be the Start of Something Big: How Social Movements for Regional Equity are Transforming Metropolitan America (Cornell, 2009; co-authored with Chris Benner and Martha Matsuoka).
|
<urn:uuid:003baaf4-69c7-4ee7-b37f-468bf9b55842>
|
CC-MAIN-2013-20
|
http://www.resilience.org/stories/2012-10-19/karuk-tribe-learning-from-the-first-californians-for-the-next-california
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704713110/warc/CC-MAIN-20130516114513-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.945849
| 1,714
| 3.296875
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"adaptation",
"climate change",
"climate justice",
"drought",
"greenhouse gas"
],
"nature": [
"ecological",
"ecosystem",
"ecosystem health",
"restoration"
]
}
|
{
"strong": 6,
"weak": 3,
"total": 9,
"decision": "accepted_strong"
}
|
What Is Air Pollution?
Air pollution on a grand scale has existed throughout the 20th century, from the coal-burning industries of the early century to the fossil-fuel-burning technology of the new century. Air pollution is a major problem for highly developed nations, whose large industrial bases and highly developed infrastructures generate much of it.
Every year, billions of tonnes of pollutants are released into the atmosphere; the sources range from power plants burning fossil fuels to the effects of sunlight on certain natural materials. But the air pollutants released from natural materials pose very little health threat; only the natural radioactive gas radon poses any threat to health. Most of the air pollutants released into the atmosphere are thus the result of man's activities.
In the United Kingdom, traffic is the major cause of air pollution in British cities. Eighty-six percent of families own either one or two vehicles. Because of the high-density population of cities and towns, the number of people exposed to air pollutants is great. This has led to an increase in chronic diseases in recent years, as car ownership in the UK has nearly trebled. These diseases include asthma and respiratory complaints, affecting the whole population demographic, from children to elderly people, who are most at risk. Certainly those who suffer from asthma will notice the effects more if they live in inner-city areas, industrial areas, or near major roads.
Asthma is already the fourth biggest killer in the UK, after heart diseases and cancers, and it currently affects more than 3.4 million people.
In the past, severe pollution in London in 1952, combined with low winds and high-pressure air, took more than four thousand lives, and another seven hundred died in 1962, in what were called the "Dark Years" because of the dense, dark, polluted air.
Air pollution is also causing devastation for the environment; much of it is caused by man-made gases like sulphur dioxide, which results from electric plants burning fossil fuels. In the UK, industries and utilities that use tall smokestacks as a means of removing air pollutants only boost them higher into the atmosphere, reducing the concentration only at their own site.
These pollutants are often transported over the North Sea and produce adverse effects in western Scandinavia, where sulphur dioxide and nitrogen oxide from the UK and central Europe generate acid rain, especially in Norway and Sweden. The pH level, or relative acidity, of many Scandinavian freshwater lakes has been altered dramatically by acid rain, causing the destruction of entire fish populations.
In the UK, acid rain formed by sulphur dioxide emissions has led to acidic erosion of limestone in north-western Scotland and of marble in northern England.
In 1998, the London Metropolitan Police launched the "Emissions Controlled Reduction" scheme, whereby traffic police would monitor the amount of pollutants released into the air by vehicle exhausts. The plan was for traffic police to stop vehicles randomly on roads leading into the City of London; the officer would then measure the pollutants being released, using a CO2 measuring reader fixed to the vehicle's exhaust. If the exhaust exceeded the legal amount (based on micrograms of pollutants), the driver would be fined around twenty-five pounds. The scheme proved unpopular with drivers, especially those driving to work, and did little to improve the city's air quality.
In Edinburgh, the main cause of bad air quality was the vast number of vehicles passing through the city centre from west to east. In 1990, the Edinburgh council built the city bypass at a cost of nearly seventy-five million pounds. The bypass was ringed around the outskirts of the city; its main aim was to limit the number of vehicles going through the city centre by diverting them onto the bypass to reach their destinations. This relieved much of the congestion within the city but did very little to improve the city's overall air quality.
To further decrease the number of vehicles on the roads, the government promoted public transport. Over two hundred million pounds was devoted to developing the country's public transport network, much of which went into additional bus lanes in the city of London, which increased the pace of bus services. Gas- and electric-powered buses were introduced in Birmingham to decrease air pollutant emissions around the city centre.
Because children and the elderly are most at risk of chronic diseases such as asthma, major diversion roads were built to divert vehicles away from residential areas, schools, and institutions for the elderly. In some councils, trees were planted along the sides of roads to decrease carbon monoxide concentrations.
Other ways of improving air quality included restrictions on the amounts of air pollutants industries could release into the atmosphere; tough regulations were imposed whereby, if air quality dropped below a certain level around an industrial area, a heavy penalty would be levied against the operator.
© Copyright 2000, Andrew Wan.
|
<urn:uuid:ea6c54fe-1f6e-4a4c-bcb5-4f4c9e0fb6de>
|
CC-MAIN-2013-20
|
http://everything2.com/user/KS/writeups/air+pollution
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.948933
| 1,097
| 3.25
| 3
|
[
"climate"
] |
{
"climate": [
"co2"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
The Economics of Ecosystems and Biodiversity: Ecological and Economic Foundations
Human well-being relies critically on ecosystem services provided by nature. Examples include water and air quality regulation, nutrient cycling and decomposition, plant pollination and flood control, all of which are dependent on biodiversity. They are predominantly public goods with limited or no markets and do not command any price in the conventional economic system, so their loss is often not detected and continues unaddressed and unabated. This in turn not only impacts human well-being, but also seriously undermines the sustainability of the economic system.
It is against this background that TEEB: The Economics of Ecosystems and Biodiversity project was set up in 2007 and led by the United Nations Environment Programme to provide a comprehensive global assessment of economic aspects of these issues. The Economics of Ecosystems and Biodiversity, written by a team of international experts, represents the scientific state of the art, providing a comprehensive assessment of the fundamental ecological and economic principles of measuring and valuing ecosystem services and biodiversity, and showing how these can be mainstreamed into public policies. The Economics of Ecosystems and Biodiversity and subsequent TEEB outputs will provide the authoritative knowledge and guidance to drive forward the biodiversity conservation agenda for the next decade.
1. Integrating the Ecological and Economic Dimensions in Biodiversity and Ecosystem Service Valuation
2. Biodiversity, Ecosystems and Ecosystem Services
3. Measuring Biophysical Quantities and the Use of Indicators
4. The Socio-cultural Context of Ecosystem and Biodiversity Valuation
5. The Economics of Valuing Ecosystem Services and Biodiversity
6. Discounting, Ethics, and Options for Maintaining Biodiversity and Ecosystem Integrity
7. Lessons Learned and Linkages with National Policies
Appendix 1: How the TEEB Framework Can be Applied: The Amazon Case
Appendix 2: Matrix Tables for Wetland and Forest Ecosystems
Appendix 3: Estimates of Monetary Values of Ecosystem Services
"A landmark study on one of the most pressing problems facing society, balancing economic growth and ecological protection to achieve a sustainable future."
- Simon Levin, Moffett Professor of Biology, Department of Ecology and Evolution Behaviour, Princeton University, USA
"TEEB brings a rigorous economic focus to bear on the problems of ecosystem degradation and biodiversity loss, and on their impacts on human welfare. TEEB is a very timely and useful study not only of the economic and social dimensions of the problem, but also of a set of practical solutions which deserve the attention of policy-makers around the world."
- Nicholas Stern, I.G. Patel Professor of Economics and Government at the London School of Economics and Chairman of the Grantham Research Institute on Climate Change and the Environment
"The [TEEB] project should show us all how expensive the global destruction of the natural world has become and β it is hoped β persuade us to slow down.' The Guardian 'Biodiversity is the living fabric of this planet β the quantum and the variability of all its ecosystems, species, and genes. And yet, modern economies remain largely blind to the huge value of the abundance and diversity of this web of life, and the crucial and valuable roles it plays in human health, nutrition, habitation and indeed in the health and functioning of our economies. Humanity has instead fabricated the illusion that somehow we can get by without biodiversity, or that it is somehow peripheral to our contemporary world. The truth is we need it more than ever on a planet of six billion heading to over nine billion people by 2050. This volume of 'TEEB' explores the challenges involved in addressing the economic invisibility of biodiversity, and organises the science and economics in a way decision makers would find it hard to ignore."
- Achim Steiner, Executive Director, United Nations Environment Programme
This volume is an output of TEEB: The Economics of Ecosystems and Biodiversity study and has been edited by Pushpam Kumar, Reader in Environmental Economics, University of Liverpool, UK. TEEB is hosted by the United Nations Environment Programme (UNEP) and supported by the European Commission, the German Federal Ministry for the Environment (BMU) and the UK Department for Environment, Food and Rural Affairs (DEFRA), recently joined by Norway's Ministry for Foreign Affairs, The Netherlands' Ministry of Housing (VROM), the UK Department for International Development (DFID) and the Swedish International Development Cooperation Agency (SIDA). The study leader is Pavan Sukhdev, who is also Special Adviser for UNEP's Green Economy Initiative.
View other products from the same publisher
|
<urn:uuid:906f7240-4b78-478d-9b89-4b845237d4f3>
|
CC-MAIN-2013-20
|
http://www.nhbs.com/the_economics_of_ecosystems_and_biodiversity_tefno_176729.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.898484
| 966
| 3.21875
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"climate change"
],
"nature": [
"biodiversity",
"biodiversity loss",
"conservation",
"ecological",
"ecosystem",
"ecosystem integrity",
"ecosystem services",
"ecosystems",
"wetland"
]
}
|
{
"strong": 8,
"weak": 2,
"total": 10,
"decision": "accepted_strong"
}
|
DENVER - Put on your poodle skirts and tune in Elvis on the transistor radio, because it's starting to look a lot like the 1950s.
Unfortunately, this won't be the nostalgic '50s of big cars and pop music.
The 1950s that could be on the way to Colorado is the decade of drought.
So says Brian Bledsoe, a Colorado Springs meteorologist who studies the history of ocean currents and uses what he learns to make long-term weather forecasts.
"I think we're reliving the '50s, bottom line," Bledsoe said Friday morning at the annual meeting of the Colorado Water Congress.
Bledsoe studies the famous El Niño and La Niña ocean currents. But he also looks at other, less well-known cycles, including long-term temperature cycles in the oceans.
In the 1950s, water in the Pacific Ocean was colder than normal, but it was warmer than usual in the Atlantic. That combination caused a drought in Colorado that was just as bad as the Dust Bowl of the 1930s.
The ocean currents slipped back into their 1950s pattern in the last five years, Bledsoe said. The cycles can last a decade or more, meaning bad news for farmers, ranchers, skiers and forest residents.
"Drought feeds on drought. The longer it goes, the harder it is to break," Bledsoe said.
The outlook is worst for Eastern Colorado, where Bledsoe grew up and his parents still own a ranch. They recently had to sell half their herd when their pasture couldn't provide enough feed.
"They've spent the last 15 years grooming that herd for organic beef stock," he said.
Bledsoe looks for monsoon rains to return to the Four Corners and Western Slope in July. But there's still a danger in the mountains in the summer.
"Initially, dry lightning could be a concern, so obviously, the fire season is looking not so great right now," he said.
Weather data showed that last year's conditions were extreme.
Nolan Doesken, Colorado's state climatologist, said the summer of 2012 was the hottest on record in Colorado. And it was the fifth-driest winter since record-keeping began more than 100 years ago.
Despite recent storms in the San Juan Mountains, this winter hasnβt been much better.
"We've had a wimpy winter so far," Doesken said. "The past week has been a good week for Colorado precipitation."
However, the next week's forecast shows dryness returning to much of the state.
Reservoir levels are higher than they were in 2002 (the driest year since Coloradans started keeping track of moisture), but the state is entering 2013 with reservoirs that were depleted last year.
"You don't want to start a year at this level if you're about to head into another drought," Doesken said.
It was hard to find good news in Friday morning's presentations, but Bledsoe is happy that technology helps forecasters understand the weather better than they did during past droughts. That allows people to plan for what's on the way.
"I'm a glass-half-full kind of guy," he said.
|
<urn:uuid:6b5ff0a8-5351-4289-bb86-d7195a7837dc>
|
CC-MAIN-2013-20
|
http://durangoherald.com/article/20130201/NEWS01/130209956/0/20120510/Drought-is-making-itself-at-home
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.964422
| 739
| 2.640625
| 3
|
[
"climate"
] |
{
"climate": [
"drought",
"el niΓ±o",
"monsoon"
],
"nature": []
}
|
{
"strong": 3,
"weak": 0,
"total": 3,
"decision": "accepted_strong"
}
|
Monday, March 23, 2009
The "presidi" translates as "garrisons" (from the French word, "to equip"), as protectors of traditional food production practices
This past year, I have had rewarding opportunities to observe traditional food cultures in varied regions of the world. These are:
Athabascan Indian in the interior of Alaska (the traditional Tanana Chiefs Conference tribal lands) in July, 2008 (for more, read below);
Swahili coastal tribes in the area of Munje village (population about 300), near Msambweni, close to the Tanzania border, in December, 2008-January, 2009 (for more, read below); and, the Laikipia region of Kenya (January, 2009), a German canton of Switzerland (March, 2009), and the Piemonte-Toscana region of northern/central Italy (images only, February-March, 2009).
In Fort Yukon, Alaska, salmon is a mainstay of the diet. Yet, among the Athabascan Indians, threats to subsistence foods and stresses on household economics abound: in particular, high prices for external energy sources (as of July, 2008, almost $8 for a gallon of gasoline and $6.50 for a gallon of diesel, which is essential for home heating), as well as low Chinook salmon runs and low moose numbers.
Additional resource management issues pose threats to sustaining village life - for example, stream bank erosion along the Yukon River, as well as uneven management in the Yukon Flats National Wildlife Refuge. People are worried about ever-rising prices for fuels and store-bought staples, and fewer and fewer sources of wage income. The result? Villagers are moving from outlying areas into "hub" communities like Fort Yukon - or, for another example, Bethel in Southwest Alaska - even when offered additional subsidies, such as for home heating. But, in reality, "hubs" often offer neither much employment nor relief from high prices.
In Munje village in Kenya, the Digo, a Bantu-speaking, mostly Islamic tribe in the southern coastal area of Kenya, enjoy the possibilities of a wide variety of fruits, vegetables, and fish/oils.
Breakfast in the village typically consists of mandazi (a fried bread similar to a doughnut), and tea with sugar. Lunch and dinner is typically ugali and samaki (fish), maybe with some dried cassava or chickpeas.
On individual shambas (small farms), tomatoes, cassava, maize, cowpeas, bananas, mangos, and coconut are typically grown. Ugali is consumed every day, as are cassava, beans, oil, fish -- and rice, coconut, and chicken, depending on availability.
Even with their own crops, villagers today want very much to enter the market economy and will sell products from their shambas to buy staples and the flour needed to make mandazis, which they in turn sell. Sales of mandazis (and mango and coconut, to a lesser extent) bring in some cash for villagers.
A treasured food is, in fact, the coconut. This set of pictures shows how coconut is used in the village. True, coconut oil is now reserved only for frying mandazi. But it is also used as a hair conditioner, and the coconut meat is eaten between meals. I noted also that dental hygiene and health were good in the village. Perhaps the coconut and fish oils influence this (as per the work of Dr. Weston A. Price).
Photos L-R: Using a traditional conical basket (kikatu), coconut milk is pressed from the grated meat; Straining coconut milk from the grated meat, which is then heated to make oil; Common breakfast food (and the main source of cash income), the mandazi, is still cooked in coconut oil
Note: All photos were taken by G. Berardi
Thursday, February 19, 2009
Despite maize in the fields, it is widely known that farmers are hoarding stocks in many districts. Farmers are refusing the NCPB/government price of Sh1,950 per 90-kg bag. They are waiting to be offered at least the same amount of money as that which was being assigned to imports (Bii, 2009b). "The country will continue to experience food shortages unless the Government addresses the high cost of farm inputs to motivate farmers to increase production," said Mr. Jonathan Bii of Uasin Gishu (Bartoo & Lucheli, 2009; Bii, 2009a, 2009b; Bungee, 2009).
Pride and politics, racism and corruption are to blame for food deficits (Kihara & Marete, 2009; KNA, 2009; Muluka, 2009; Siele, 2009). Clearly, what are needed in Kenya are food system planning, disaster management planning, and protection and development of agricultural and rural economies.
Click here for the full text.
Photos taken by G. Berardi
Cabbage, an imported food (originally), and susceptible to much pest damage.
Camps still remain for Kenya's Internally Displaced Persons, the result of forced migrations after post-election violence. Food security is poor.
The recent lack of sustained short rains has resulted in failed maize harvests.
Friday, January 16, 2009
Today I went to a lunchtime discussion of sustainability. This concept promotes development with an equitable eye to the triple bottom line: financial, social, and ecological costs. We discussed how it seems relatively easier to talk about the connections between financial and ecological costs than between social costs and the others. Sustainable development often comes down to "green" designs that consider environmental impacts, or critiques of the capitalist model of financing.
As I thought about sustainable development, or sustainable community management if you are a bit queasy with the feasibility of continuous expansion, I considered its corollaries in the field of disaster risk reduction. It struck me again that it is somewhat easier to focus on some components of the triple bottom line in relation to disasters.
The vulnerability approach to disasters has rightly brought into focus the fact that not all people are equally exposed to or impacted by disasters. Rather, it is often the poor or socially marginalized most at risk and least able to recover. This approach certainly brings into focus the social aspects of disasters.
The disaster trap theory, likewise, brings into focus the financial bottom line. This perspective is most often discussed in international development and disaster reduction circles. It argues that disasters destroy development gains and cause communities to de-develop unless both disaster reduction and development occur in tandem. Building a cheaper, non-earthquake-resistant school in an earthquake zone may make short-term financial sense. However, over the long term, this approach is likely to result in loss of physical infrastructure, human life, and learning opportunities when an earthquake does occur.
What seems least developed to me, though I would enjoy being rebutted, is the ecological bottom line of disasters. Perhaps it is an oxymoron to discuss the ecological costs of disasters, given that many disasters are triggered by natural ecological processes like cyclones, forest fires, and floods. It might also be an oxymoron simply because a natural hazard disaster is really an ecological event looked at from an almost exclusively human perspective. It's not a disaster if it doesn't destroy human lives and human infrastructure. But the lunchtime discussion made me wonder if there isn't something of an ecological bottom line to disasters in there somewhere. Perhaps it lies in the difference between an ecological process heavily or lightly impacted by human ecological modification. Is a forest fire in a heavily managed forest different from one in an unmanaged forest? Certainly logging can heighten the impacts of heavy rains by inducing landslides, resulting in a landscape heavily rather than lightly impacted by the rains. Similar processes might also be at work in heavily managed floodplains. Flooding is concentrated and increased in areas outside of levee systems. What does that mean for the ecology of those locations? Does a marsh manage just as well in low flooding as in high? My guess would be no.
And of course, there is the big, looming disaster of climate change. This is a human-induced change that may prove quite disastrous to many an ecological system, everything from our pine forests here to arctic wildlife and tropical coral reefs.
Perhaps we disaster researchers also need to consider a triple bottom line when making arguments for the benefits of disaster risk reduction.
Tuesday, January 13, 2009
This past week the Northwest experienced a severe barrage of back-to-back weather systems. Everyone seemed to be affected. Folks were re-routed on detours, got soaked, slipped on ice, or had to spend money to stay a little warmer. In Whatcom and Skagit Counties, hundreds to thousands of people are currently recovering and cleaning up after the floods. These people live in rural areas throughout the counties, where fewer people know of their devastation and their vulnerability to flood hazards is greater.
Luckily, there are local agencies and non-profits who are ready at a moment's call to help anyone in need. The primary organization that came to the aid of the flood victims was the American Red Cross.
Last week I began interning and volunteering with one of these non-profits, the Mt. Baker American Red Cross (ARC) Chapter. While I am still in the process of getting screened and officially trained, I received first-hand experience and saw how important this organization is to the community.
With the flood waters rising throughout the week, people were flooded out of their homes and rescued from the overflowing rivers and creeks. As the need for help increased, hundreds of ARC volunteers were called to service. Throughout the floods, several shelters were opened to accommodate the needs of flood victims. On Saturday I was asked to help staff one of these shelters overnight in Ferndale.
While I talked with parents and children, I became more aware of the stark reality these people face: recovering after all their possessions were covered in sewage and mud and damaged by flood waters. In the meantime, these flood victims have all their privacy exposed to others in a public shelter while they work to find stability in the middle of the trauma of these events. As I sat talking and playing with the children, another thought struck me. Children are young and resilient, but it must be very difficult when they connect with a volunteer and then lose that connection soon after. Sharing a shelter with these folks over the weekend showed a degree of reality and humanity to the situation that the news coverage never could.
I posted this bit about my volunteer experience because it made me realize something about my education and degree track in disaster reduction and emergency planning. We look at ways to create a more sustainable community, and we need to remember that community service is an important part of creating this ideal. Underlying sustainable development is the triple bottom line (social, economic, and environmental). Volunteers and non-profits are a major part of the social line of sustainability. Organizations like the American Red Cross only exist because of volunteers. So embrace President-elect Obama's call for a culture of civil service this coming week and make a commitment to the organization of your choice with your actions or even your pocketbook. Know that sustainable development cannot exist without social responsibility.
Thursday, January 8, 2009
It's been two days now that schools have been closed in Whatcom County, not for snow, but for rain and flooding. This unusual event coincides with record flooding throughout Western Washington, just a year after record flooding closed I-5 for three days and Lewis County businesses experienced what they then called an unprecedented 500-year flood. I guess not.
There are many strange things about flood risk notation, and this idea of a 500-year flood often trips people up. People often believe a flood of that size will happen only once in 500 years. On a probabilistic level, this is inaccurate. A 500-year flood simply has a 0.2% probability of happening each year. A more useful analogy might be to tell people they are rolling a 500-sided die every year and hoping that it doesn't come up with a 1. Next year they'll be forced to roll again.
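To put numbers on that die-rolling analogy, here is a quick Python sketch (my own illustration, nothing from FEMA or the flood maps): the chance of seeing at least one 500-year flood over various horizons, assuming each year is an independent roll.

```python
# The yearly "500-sided die": a 500-year flood has p = 1/500 = 0.002 each year.
def prob_at_least_one_flood(return_period_years: float, horizon_years: int) -> float:
    """Chance of at least one such flood over a horizon of independent years."""
    p_yearly = 1.0 / return_period_years
    return 1.0 - (1.0 - p_yearly) ** horizon_years

for horizon in (1, 30, 100, 500):
    print(f"{horizon:>3} years: {prob_at_least_one_flood(500, horizon):.1%}")

# Output:
#   1 years: 0.2%
#  30 years: 5.8%   (the length of a typical mortgage)
# 100 years: 18.1%
# 500 years: 63.2%  (not 100% -- the die has no memory)
```

And these odds assume the flood plain itself isn't changing, which, as argued below, it almost always is.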
But this focus on misunderstandings of probability often hides an even larger societal misunderstanding. Flood risk changes when we change the environment in which it occurs. If a flood map tells you that you are not in the flood plain, better check the date of the map. Most maps are utterly out of date, and many vastly underestimate present flood risk. There are several reasons this happens. Urban development, especially development with a lot of parking lots and buildings that don't let water seep into the ground, causes rainwater to move quickly into rivers rather than seep into the ground and release slowly. Developers might counter that they are required to create runoff catchment wetlands when they do build. They do, but these requirements may very well be based upon outdated data on flood risk. Thus, each new development never fully compensates for its runoff: a small problem for each site but a mammoth problem when compounded downstream.
Deforesting can have the same effect, with the added potential for house-crushing and river-clogging mudslides. Timber harvesting is certainly an important industry in our neck of the woods. Not only is commercial logging an important source of jobs for many rural and small towns, logging on state Department of Natural Resources land is the major source of funding for K-12 education. Yet commercial logging, like other industries, suffers from a problem of cost externalization. When massive mudslides occurred during last year's storm, Weyerhaeuser claimed that it wasn't its logging practices but the unprecedented, out-of-the-blue, 500-year storm that caused them. While it is doubtful the slides would have occurred on uncut land, that isn't the only fallacy. When the slides did occur, the costs of repairing roads, treatment plants, and bridges went to the county and often were passed on to the nation's taxpayers through state and federal recovery grants. Thus, what should have been paid by Weyerhaeuser, 500-year probability or not, was paid by someone else.
Finally, there is local government. Various folks within local governments set regulations for zoning, deciding what will be built and where. Here is the real crux of the problem. Local government also gets an increase in revenue in the form of property, sales, and business income taxes. Suppress the updating of flood plain maps, and you get a short-term profit and, often, a steady supply of happy voters. You might think these local governments will have to pay when the next big flood comes, but often that can be avoided. Certainly, they must comply with federal regulations on flood plain management to be part of the National Flood Insurance Program, but that program has significant leeway and little monitoring. Like commercial logging, disaster-stricken local governments can often push the recovery costs off to individual homeowners through the FEMA homeowner's assistance program, and off to state and federal agencies by receiving disaster recovery and community development grants and loans. Certainly, some communities are so regularly devastated, and have so few resources, that disasters simply knock them down again before they can stand up. But others have found loopholes and can profit by continuing to use old flood maps and failing to aggressively control flood plain development.
What is it going to take to really change this system and make bad land-use management unprofitable?
Here's a good in-depth article on last year's landslides in Lewis County: http://seattletimes.nwsource.com/html/localnews/2008048848_logging13m.html
An interesting article on the failure of best management practices in developing catchments can be found here: Hur, J. et al. (2008) Does current management of storm water runoff adequately protect water resources in developing catchments? Journal of Soil and Water Conservation, 63(2), pp. 77-90.
Monday, December 29, 2008
It's difficult to imagine a more colorful book celebrating locally-grown and -marketed foods than David Westerlund's Simone Goes to the Market: A Children's Book of Colors Connecting Face and Food. This book is aimed at families and the foods they eat. Who doesn't want to know where their food is coming from: the terroir, the kind of microclimate it's produced in, as well as who's selling it? Gretchen sells her pole beans (purple), Maria her Serrano peppers (green), Dana and Matt sell their freshly-roasted coffee (black), Katie her carrots (orange), a blue poem from Matthew, brown potatoes from Roslyn, yellow patty pan squash from Jed, red tomatoes (soft and ripe) from Diana, and golden honey from Bill (and his bees). This is a book perfect for children of any age who want to connect to and with the food systems that sustain community. Order from [email protected].
|
<urn:uuid:e139d24e-7144-4cf8-866c-6066d64a435f>
|
CC-MAIN-2013-20
|
http://igcr.blogspot.com/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709037764/warc/CC-MAIN-20130516125717-00000-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.962803
| 3,622
| 2.875
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"climate change",
"disaster risk reduction",
"flood risk",
"food security"
],
"nature": [
"conservation",
"ecological",
"wetlands"
]
}
|
{
"strong": 4,
"weak": 3,
"total": 7,
"decision": "accepted_strong"
}
|
Buried inside Robert Bryce's relatively new book Power Hungry is a call to "aggressively pursue taxes or caps on the emissions of neurotoxins, particularly those that come from burning coal" to generate electricity, such as mercury and lead. This is notable not only because Bryce agrees with many environmental and human health experts, but also because the book credibly debunks the move to tax or cap carbon dioxide emissions from both technical and political perspectives.
The word "neurotoxic" literally translates as "nerve poison." Broadly described, a neurotoxicant is any chemical substance that adversely acts on the structure or function of the human nervous system.
As its subtitle signals, Power Hungry also declares policies subsidizing renewable sources of electricity, biofuels, and electric vehicles to be too costly and impractical to make a significant difference in making the U.S. power and transportation systems more sustainable.
So why take aim at mercury and lead, which is certain to drive up the cost of coal-fired electricity just as a carbon cap or tax would? Because, Bryce asserts, "arguing against heavy metal contaminants with known neurotoxicity will be far easier than arguing against carbon dioxide emissions. Cutting the output of mercury and the other heavy metals may, in the long run, turn out to have far greater benefits for environmental and human health." Bryce draws a parallel to the U.S. government ordering oil refiners to remove lead from gasoline starting in the 1970s.
In the book, which has received predominantly good reviews on Amazon.com, Bryce makes some valid points about the carbon density of our energy sources. Among his overarching messages is that the carbon density of the world's major economies is actually declining. Not to be missed: his attack on carbon sequestration, pp. 160-165. His case about the threat of neurotoxins begins on p. 167.
There's a lot more to the challenge of reducing America's reliance on coal-fired power plants than this. But considering the failure of the U.S. Congress to agree on a carbon tax or cap, his idea has serious merit and deserves a broad discussion, especially as Congress reassesses its budget priorities, including billions of dollars of tax breaks and incentives for oil and other fossil fuels.
|
<urn:uuid:ed7842f6-485f-401b-96c7-6ca3e6045411>
|
CC-MAIN-2013-20
|
http://www.theenergyfix.com/2011/05/07/tax-toxins-not-carbon-dioxide-from-coal-fired-power-plants/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.955607
| 481
| 2.671875
| 3
|
[
"climate"
] |
{
"climate": [
"carbon dioxide",
"carbon sequestration"
],
"nature": []
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
Deaths in Moscow have doubled to an average of 700 people a day as the Russian capital is engulfed by poisonous smog from wildfires and a sweltering heat wave, a top health official said today, according to the Associated Press.
The Russian newspaper Pravda reported: "Moscow is suffocating. Thick toxic smog has been covering the sky above the city for days. The sun in Moscow looks like the moon during the day: it's not that bright and yellow, but pale and orange with misty outlines against the smoky sky. Muscovites have to experience both the smog and sweltering heat at once."
Russia has recently seen an unprecedented heat wave, the longest in at least one thousand years, according to the head of the Russian Meteorological Center, as reported by the news site RIA Novosti.
Various news sites report that foreign embassies have reduced activities or shut down, with many staff leaving Moscow to escape the toxic atmosphere.
Russian heatwave: This NASA map, released today, shows areas of Russia experiencing above-average temperatures this summer (orange and red). The map was released on NASA's Earth Observatory website.
NASA Earth Observatory image by Jesse Allen, based on MODIS land surface temperature data available through the NASA Earth Observations (NEO) Website. Caption by Michon Scott.
According to NASA:
In the summer of 2010, the Russian Federation had to contend with multiple natural hazards: drought in the southern part of the country, and raging fires in western Russia and eastern Siberia. The events all occurred against the backdrop of unusual warmth. Bloomberg reported that temperatures in parts of the country soared to 42 degrees Celsius (108 degrees Fahrenheit), and the Wall Street Journal reported that fire- and drought-inducing heat was expected to continue until at least August 12.
This map shows temperature anomalies for the Russian Federation from July 20-27, 2010, compared to temperatures for the same dates from 2000 to 2008. The anomalies are based on land surface temperatures observed by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASAβs Terra satellite. Areas with above-average temperatures appear in red and orange, and areas with below-average temperatures appear in shades of blue. Oceans and lakes appear in gray.
Not all parts of the Russian Federation experienced unusual warmth on July 20-27, 2010. A large expanse of northern central Russia, for instance, exhibits below-average temperatures. Areas of atypical warmth, however, predominate in the east and west. Orange- and red-tinged areas extend from eastern Siberia toward the southwest, but the most obvious area of unusual warmth occurs north and northwest of the Caspian Sea. These warm areas in eastern and western Russia continue a pattern noticeable earlier in July, and correspond to areas of intense drought and wildfire activity.
Bloomberg reported that 558 active fires covering 179,596 hectares (693 square miles) were burning across the Russian Federation as of August 6, 2010. Voice of America reported that smoke from forest fires around the Russian capital forced flight restrictions at Moscow airports on August 6, just as health officials warned Moscow residents to take precautions against the smoke inhalation.
Posted by David Braun
Earlier related post: Russia burns in hottest summer on record (July 28, 2010)
Talk about tough: These guys throw themselves out of 50-year-old aircraft into burning Siberian forests. (National Geographic Magazine feature, February 2008)
Photo by Mark Thiessen
|
<urn:uuid:10b103c1-284b-41c5-8dc9-bc9d1b7577ea>
|
CC-MAIN-2013-20
|
http://newswatch.nationalgeographic.com/2010/08/09/russia_chokes_as_fires_rage_worst_summer_ever/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699881956/warc/CC-MAIN-20130516102441-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.933289
| 827
| 2.84375
| 3
|
[
"climate"
] |
{
"climate": [
"drought",
"heatwave"
],
"nature": []
}
|
{
"strong": 2,
"weak": 0,
"total": 2,
"decision": "accepted_strong"
}
|
Time to think big
Did the designation of 2010 as the first-ever International Year of Biodiversity mean anything at all? Is it just a publicity stunt, with no engagement on the real, practical issues of conservation, asks Simon Stuart, Chair of IUCN's Species Survival Commission.
Eight years ago 183 of the world's governments committed themselves "to achieve by 2010 a significant reduction of the current rate of biodiversity loss at the global, regional and national level as a contribution to poverty alleviation and to the benefit of all life on Earth". This was hardly visionary: the focus was not on stopping extinctions or loss of key habitats, but simply on slowing their rate of loss. But it was, at least, the first time the nations of the world had pledged themselves to any form of concerted attempt to face up to the ongoing degradation of nature.
Now the results of all the analyses of conservation progress since 2002 are coming in, and there is a unanimous finding: the world has spectacularly failed to meet the 2010 Biodiversity Target, as it is called. Instead species extinctions, habitat loss and the degradation of ecosystems are all accelerating. To give a few examples: declines and extinctions of amphibians due to disease and habitat loss are getting worse; bleaching of coral reefs is growing; and large animals in South-East Asia are moving rapidly towards extinction, especially from over-hunting and degradation of habitats.
This month the world's governments will convene in Nagoya, Japan, for the Convention on Biological Diversity's Conference of the Parties. Many of us hope for agreement there on new, much more ambitious biodiversity targets for the future. The first test of whether or not the 2010 International Year of Biodiversity means anything will be whether or not the international community can commit itself to a truly ambitious conservation agenda.
The early signs are promising. Negotiating sessions around the world have produced 20 new draft targets for 2020. Collectively these are nearly as strong as many of us hoped, and certainly much stronger than the 2010 Biodiversity Target. They include: halving the loss and degradation of forests and other natural habitats; eliminating overfishing and destructive fishing practices; sustainably managing all areas under agriculture, aquaculture and forestry; bringing pollution from excess nutrients and other sources below critical ecosystem loads; controlling pathways introducing and establishing invasive alien species; managing multiple pressures on coral reefs and other vulnerable ecosystems affected by climate change and ocean acidification; effectively protecting at least 15 per cent of land and sea, including the areas of particular importance for biodiversity; and preventing the extinction of known threatened species. We now have to keep up the pressure to prevent these from becoming diluted.
We at IUCN are pushing for urgent action to stop biodiversity loss once and for all. The well-being of the entire planet, and of people, depends on our committing to maintain healthy ecosystems and strong wildlife populations. We are therefore proposing, as a mission for 2020, "to have put in place by 2020 all the necessary policies and actions to prevent further biodiversity loss". Examples include removing government subsidies which damage biodiversity (as many agricultural ones do), establishing new nature reserves in important areas for threatened species, requiring fisheries authorities to follow the advice of their scientists to ensure the sustainability of catches, and dramatically cutting carbon dioxide emissions worldwide to reduce the impacts of climate change and ocean acidification.
If the world makes a commitment along these lines, then the 2010 International Year of Biodiversity will have been about more than platitudes. But it will still only be a start: the commitment needs to be implemented. We need to look for signs this year of a real change from governments and society over the priority accorded to biodiversity.
One important sign will be the amount of funding that governments pledge this year for replenishing the Global Environment Facility (GEF), the world's largest donor for biodiversity conservation in developing countries. Between 1991 and 2006, it provided approximately $2.2 billion in grants to support more than 750 biodiversity projects in 155 countries. If the GEF is replenished at much the same level as over the last decade, we shall know that governments are still in "business as usual" mode. But if it is doubled or tripled in size, then we shall know that they are starting to get serious.
IUCN estimates that even a tripling of funding would still fall far short of what is needed to halt biodiversity loss. Some conservationists have suggested that developed countries need to contribute 0.2 per cent of gross national income in overseas biodiversity assistance to achieve this. That would work out at roughly $120 billion a year, though of course this would need to come through a number of sources, not just the GEF. It is tempting to think that this figure is unrealistically high, but it is small change compared to the expenditures governments have committed to defence and bank bail outs.
It is time for the conservation movement to think big. We are addressing problems that are hugely important for the future of this planet and its people, and they will not be solved without a huge increase in funds.
|
<urn:uuid:2d3e80a0-ca7b-4358-80a9-0f5129e87a3e>
|
CC-MAIN-2013-20
|
http://cms.iucn.org/es/recursos/focus/enfoques_anteriores/cbd_2010/noticias/opinion/?6131/time-to-think-big
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700264179/warc/CC-MAIN-20130516103104-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.940236
| 1,055
| 3.296875
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"carbon dioxide",
"climate change"
],
"nature": [
"biodiversity",
"biodiversity loss",
"conservation",
"ecosystem",
"ecosystems",
"habitat"
]
}
|
{
"strong": 7,
"weak": 1,
"total": 8,
"decision": "accepted_strong"
}
|
Opportunities and Challenges in High Pressure Processing of Foods
By Rastogi, N K; Raghavarao, K S M S; Balasubramaniam, V M; Niranjan, K; Knorr, D
Consumers increasingly demand convenience foods of the highest quality in terms of natural flavor and taste, and which are free from additives and preservatives. This demand has triggered the development of a number of nonthermal approaches to food processing, of which high-pressure technology has proven to be very valuable. A number of recent publications have demonstrated novel and diverse uses of this technology. Its novel features, which include destruction of microorganisms at room temperature or lower, have made the technology commercially attractive. Enzymes and even spore-forming bacteria can be inactivated by the application of pressure-thermal combinations. This review aims to identify the opportunities and challenges associated with this technology. In addition to discussing the effects of high pressure on food components, this review covers the combined effects of high pressure processing with gamma irradiation, alternating current, ultrasound, and carbon dioxide or anti-microbial treatment. Further, the applications of this technology in various sectors (fruits and vegetables, dairy, and meat processing) have been dealt with extensively. The integration of high pressure with other mature processing operations such as blanching, dehydration, osmotic dehydration, rehydration, frying, freezing/thawing, and solid-liquid extraction has been shown to open up new processing options. The key challenges identified include: heat transfer problems and resulting non-uniformity in processing, obtaining reliable and reproducible data for process validation, lack of detailed knowledge about the interaction between high pressure and a number of food constituents, and packaging and statutory issues.
Keywords high pressure, food processing, non-thermal processing
Consumers demand high quality and convenient products with natural flavor and taste, and greatly appreciate the fresh appearance of minimally processed food. Besides, they look for safe and natural products without additives such as preservatives and humectants. In order to harmonize all these demands without compromising the safety of the products, it is necessary to implement newer preservation technologies in the food industry. Although the fact that "high pressure kills microorganisms and preserves food" was discovered as far back as 1899, and high pressure had been used with success in the chemical, ceramic, carbon allotropy, steel/alloy, composite materials, and plastic industries for decades, it was only in the late 1980s that its commercial benefits became available to the food processing industries. High pressure processing (HPP) is similar in concept to cold isostatic pressing of metals and ceramics, except that it demands much higher pressures, faster cycling, high capacity, and sanitation (Zimmerman and Bergman, 1993; Mertens and Deplace, 1993). Hite (1899) investigated the application of high pressure as a means of preserving milk, and later extended the study to preserve fruits and vegetables (Hite, Giddings, and Weakly, 1914). It then took almost eighty years for Japan to re-discover the application of high pressure in food processing. The technology caught on so quickly that it took only three years for two Japanese companies to launch products processed using it. The ability of high pressure to inactivate microorganisms and spoilage-catalyzing enzymes, whilst retaining other quality attributes, has encouraged Japanese and American food companies to introduce high pressure processed foods in the market (Mermelstein, 1997; Hendrickx, Ludikhuyze, Broeck, and Weemaes, 1998). The first high pressure processed foods were introduced to the Japanese market in 1990 by Meidi-ya, who have been marketing a line of jams, jellies, and sauces packaged and processed without application of heat (Thakur and Nelson, 1998). Other products include fruit preparations, fruit juices, rice cakes, and raw squid in Japan; fruit juices, especially apple and orange juice, in France and Portugal; and guacamole and oysters in the USA (Hugas, Garcia, and Monfort, 2002). In addition to food preservation, high-pressure treatment can result in food products acquiring novel structure and texture, and hence can be used to develop new products (Hayashi, 1990) or increase the functionality of certain ingredients. Depending on the operating parameters and the scale of operation, the cost of high-pressure treatment is typically around US$ 0.05-0.5 per liter or kilogram, the lower value being comparable to the cost of thermal processing (Thakur and Nelson, 1998; Balasubramaniam, 2003).
The non-availability of suitable equipment encumbered early applications of high pressure. However, recent progress in equipment design has ensured worldwide recognition of the potential of such a technology in food processing (Gould, 1995; Galazka and Ledward, 1995; Balci and Wilbey, 1999). Today, high-pressure technology is acknowledged to have the promise of producing a very wide range of products, whilst simultaneously showing potential for creating a new generation of value-added foods. In general, high-pressure technology can supplement conventional thermal processing for reducing microbial load, or substitute for the use of chemical preservatives (Rastogi, Subramanian, and Raghavarao, 1994).
Over the past two decades, this technology has attracted considerable research attention, mainly relating to: i) the extension of keeping quality (Cheftel, 1995; Farkas and Hoover, 2001), ii) changing the physical and functional properties of food systems (Cheftel, 1992), and iii) exploiting the anomalous phase transitions of water under extreme pressures, e.g. lowering of freezing point with increasing pressures (Kalichevsky, Knorr, and Lillford, 1995; Knorr, Schlueter, and Heinz, 1998). The key advantages of this technology can be summarized as follows:
1. it enables food processing at ambient temperature or even lower temperatures;
2. it enables instant transmittance of pressure throughout the system, irrespective of size and geometry, thereby making size reduction optional, which can be a great advantage;
3. it causes microbial death whilst virtually eliminating heat damage and the use of chemical preservatives/additives, thereby leading to improvements in the overall quality of foods; and
4. it can be used to create ingredients with novel functional properties.
The effect of high pressure on microorganisms and proteins/enzymes was observed to be similar to that of high temperature. As mentioned above, high pressure processing enables the transmittance of pressure rapidly and uniformly throughout the food. Consequently, the problems of spatial variation in preservation treatments associated with heat, microwave, or radiation penetration are not evident in pressure-processed products. The application of high pressure increases the temperature of the liquid component of the food by approximately 3C per 100 MPa. If the food contains a significant amount of fat, such as butter or cream, the temperature rise is greater (8-9C per 100 MPa) (Rasanayagam, Balasubramaniam, Ting, Sizer, Bush, and Anderson, 2003). Foods cool down to their original temperature on decompression if no heat is lost to (or gained from) the walls of the pressure vessel during the holding stage. The temperature distribution during the pressure-holding period can change depending on heat transfer across the walls of the pressure vessel, which must be held at the desired temperature to achieve truly isothermal conditions. In the case of some proteins, a gel is formed when the rate of compression is slow, whereas a precipitate is formed when the rate is fast. High pressure can cause structural changes in structurally fragile foods containing entrapped air, such as strawberries or lettuce; cell deformation and cell damage can result in softening and cell serum loss. Compression may also shift the pH, depending on the imposed pressure. Heremans (1995) indicated a lowering of pH in apple juice by 0.2 units per 100 MPa increase in pressure. In combined thermal and pressure treatment processes, Meyer (2000) proposed that the heat of compression could be used effectively, since the temperature of the product can be raised from 70-90C to 105-120C by compression to 700 MPa, and brought back to the initial temperature by decompression.
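The compression-heating figures above lend themselves to a simple estimate. The following sketch is our illustration only: the per-100 MPa heating rates are the ones cited from Rasanayagam et al. (2003), and treating them as constant over the whole pressure ramp is a simplifying assumption.

```python
# Approximate adiabatic compression heating during pressurization.
# Rates: ~3 C/100 MPa for water-like foods, ~8.5 C/100 MPa for fatty foods.
HEATING_RATE_C_PER_100MPA = {"water-like": 3.0, "fatty": 8.5}

def peak_temperature_c(t_initial_c: float, pressure_mpa: float, food_type: str) -> float:
    """Product temperature at the top of the pressure ramp (assumed linear rate)."""
    return t_initial_c + HEATING_RATE_C_PER_100MPA[food_type] * pressure_mpa / 100.0

# Meyer (2000)-style pressure-assisted heating: preheat to 90 C, compress to 700 MPa.
print(peak_temperature_c(90.0, 700.0, "water-like"))  # 111.0 C, within the cited 105-120 C
print(peak_temperature_c(20.0, 600.0, "fatty"))       # 71.0 C for a cream-like product
```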
As a thermodynamic parameter, pressure has far-reaching effects on the conformation of macromolecules, the transition temperature of lipids and water, and a number of chemical reactions (Cheftel, 1992; Tauscher, 1995). Phenomena that are accompanied by a decrease in volume are enhanced by pressure, and vice-versa (the principle of Le Chatelier). Thus, under pressure, reaction equilibria are shifted towards the most compact state, and the reaction rate constant is increased or decreased, depending on whether the "activation volume" of the reaction (i.e. the volume of the activation complex less the volume of the reactants) is negative or positive. It is likely that pressure also inhibits the availability of the activation energy required for some reactions, by affecting other energy-releasing enzymatic reactions (Farr, 1990). The compression energy of 1 litre of water at 400 MPa is 19.2 kJ, as compared to 20.9 kJ for heating 1 litre of water from 20 to 25C. The low energy levels involved in pressure processing may explain why covalent bonds of food constituents are usually less affected than weak interactions. Pressure can influence most biochemical reactions, since they often involve a change in volume. High pressure controls certain enzymatic reactions. The effect of high pressure on proteins/enzymes is reversible, unlike temperature, in the range 100-400 MPa, and is probably due to conformational changes and sub-unit dissociation and association processes (Morild, 1981).
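The activation-volume argument can be made quantitative through the standard transition-state relation (d ln k/dP)_T = -dV*/(RT). The sketch below is illustrative only: the relation itself is textbook, but the activation volume values are invented for demonstration and are not taken from the review.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def rate_constant_ratio(delta_v_act_cm3_mol: float, p_mpa: float, t_k: float,
                        p0_mpa: float = 0.1) -> float:
    """k(P)/k(P0) at constant temperature, assuming a pressure-independent
    activation volume: k(P) = k(P0) * exp(-dV* (P - P0) / (R T))."""
    dv_m3 = delta_v_act_cm3_mol * 1e-6   # cm3/mol -> m3/mol
    dp_pa = (p_mpa - p0_mpa) * 1e6       # MPa -> Pa
    return math.exp(-dv_m3 * dp_pa / (R * t_k))

print(rate_constant_ratio(-20.0, 400.0, 298.15))  # ~25: negative dV*, pressure accelerates
print(rate_constant_ratio(+20.0, 400.0, 298.15))  # ~0.04: positive dV*, pressure retards
```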
For both pasteurization and sterilization processes, a combined treatment of high pressure and temperature is frequently considered to be most appropriate (Farr, 1990; Patterson, Quinn, Simpson, and Gilmour, 1995). Vegetative cells, including yeasts and moulds, are pressure sensitive, i.e. they can be inactivated by pressures of ~300-600 MPa (Knorr, 1995; Patterson, Quinn, Simpson, and Gilmour, 1995). At high pressures, microbial death is considered to be due to permeabilization of the cell membrane. For instance, it was observed that in the case of Saccharomyces cerevisiae, at pressures of about 400 MPa, the structure and cytoplasmic organelles were grossly deformed and large quantities of intracellular material leaked out, while at 500 MPa, the nucleus could no longer be recognized, and the loss of intracellular material was almost complete (Farr, 1990). Changes induced in the cell morphology of microorganisms are reversible at low pressures, but irreversible at higher pressures, where microbial death occurs due to permeabilization of the cell membrane. An increase in process temperature above ambient, and to a lesser extent a decrease below ambient, increases the inactivation rate of microorganisms during high pressure processing. Temperatures in the range 45 to 50C appear to increase the rate of inactivation of pathogens and spoilage microorganisms. Preservation of acid foods (pH ≤ 4.6) is, therefore, the most obvious application of HPP as such. Moreover, pasteurization can be performed even under chilled conditions for heat-sensitive products. Low temperature processing can help to retain the nutritional quality and functionality of the raw materials treated, and could allow maintenance of low temperature during the post-harvest treatment, processing, storage, transportation, and distribution periods of the life cycle of the food system (Knorr, 1995).
Bacterial spores are highly pressure resistant, since pressures exceeding 1200 MPa may be needed for their inactivation (Knorr, 1995). The initiation of germination, or inhibition of germinated bacterial spores and inactivation of pressure-resistant microorganisms, can be achieved in combination with moderate heating or other pretreatments such as ultrasound. Process temperatures in the range 90-121C, in conjunction with pressures of 500-800 MPa, have been used to inactivate spore-forming bacteria such as Clostridium botulinum. Thus, sterilization of low-acid foods (pH > 4.6) will most probably rely on a combination of high pressure and other forms of relatively mild treatment.
High-pressure application leads to an effective reduction in the activity of food quality-related enzymes (oxidases), which ensures high quality and shelf-stable products. Sometimes, food constituents confer piezo-resistance on enzymes. Further, high pressure affects only non-covalent bonds (hydrogen, ionic, and hydrophobic bonds), causes unfolding of protein chains, and has little effect on the chemical constituents associated with desirable food qualities such as flavor, color, or nutritional content. Thus, in contrast to thermal processing, the application of high pressure causes negligible impairment of nutritional value, taste, color, flavor, or vitamin content (Hayashi, 1990). Small molecules such as amino acids, vitamins, and flavor compounds remain unaffected by high pressure, while the structure of large molecules such as proteins, enzymes, polysaccharides, and nucleic acids may be altered (Balci and Wilbey, 1999).
High pressure reduces the rate of the browning (Maillard) reaction. This consists of two stages: the condensation reaction of amino compounds with carbonyl compounds, and successive browning reactions including melanoidin formation and polymerization processes. The condensation reaction shows no acceleration under high pressure (5-50 MPa at 50C), because pressure suppresses the generation of stable free radicals derived from melanoidin, which are responsible for the browning reaction (Tamaoka, Itoh, and Hayashi, 1991). Gels induced by high pressure are found to be more glossy and transparent because of the rearrangement of water molecules surrounding amino acid residues in the denatured state (Okamoto, Kawamura, and Hayashi, 1990).
The capabilities and limitations of HPP have been extensively reviewed (Thakur and Nelson, 1998; Smelt, 1998; Cheftel, 1995; Knorr, 1995; Farr, 1990; Tiwari, Jayas, and Holley, 1999; Cheftel, Levy, and Dumay, 2000; Messens, Van Camp, and Huyghebaert, 1997; Otero and Sanz, 2000; Hugas, Garriga, and Monfort, 2002; Lakshmanan, Piggott, and Paterson, 2003; Balasubramaniam, 2003; Matser, Krebbers, Berg, and Bartels, 2004; Hogan, Kelly, and Sun, 2005; Mor-Mur and Yuste, 2005). Many of the early reviews primarily focused on the microbial efficacy of high-pressure processing. This review comprehensively covers the different types of products processed by high-pressure technology, alone or in combination with other processes. It also discusses the effect of high pressure on food constituents such as enzymes and proteins. The applications of this technology in the fruit and vegetable, dairy, and animal product processing industries are covered. The effects of combining high-pressure treatment with other processing methods such as gamma-irradiation, alternating current, ultrasound, carbon dioxide, and anti-microbial peptides have also been described. Special emphasis has been given to opportunities and challenges in high pressure processing of foods, which can potentially be explored and exploited.
EFFECT OF HIGH PRESSURE ON ENZYMES AND PROTEINS
Enzymes are a special class of proteins in which biological activity arises from an active site, brought together by the three-dimensional configuration of the molecule. Changes in the active site or protein denaturation can lead to loss of activity, or change the functionality of the enzyme (Tsou, 1986). In addition to conformational changes, enzyme activity can be influenced by pressure-induced decompartmentalization (Butz, Koller, Tauscher, and Wolf, 1994; Gomes and Ledward, 1996). Pressure-induced damage of membranes facilitates enzyme-substrate contact; the resulting reaction can either be accelerated or retarded by pressure (Butz, Koller, Tauscher, and Wolf, 1994; Gomes and Ledward, 1996; Morild, 1981). Hendrickx, Ludikhuyze, Broeck, and Weemaes (1998) and Ludikhuyze, Van Loey, Indrawati, et al. (2003) reviewed the combined effect of pressure and temperature on enzymes related to the quality of fruits and vegetables, covering kinetic information as well as process engineering aspects.
Pectin methylesterase (PME) is an enzyme which tends to lower the viscosity of fruit products and adversely affect their texture; hence, its inactivation is a prerequisite for the preservation of such products. Commercially, fruit products containing PME (e.g. orange juice and tomato products) are heat pasteurized to inactivate PME and prolong shelf life. However, heating can deteriorate the sensory and nutritional quality of the products. Basak and Ramaswamy (1996) showed that the inactivation of PME in orange juice was dependent on pressure level, pressure-hold time, pH, and total soluble solids: an instantaneous pressure kill was dependent only on pressure level, while a secondary inactivation effect depended on holding time at each pressure level. Nienaber and Shellhammer (2001) studied the kinetics of PME inactivation in orange juice over a range of pressures (400-600 MPa) and temperatures (25-50C) for various process holding times. PME inactivation followed a first-order kinetic model, with a residual activity of pressure-resistant enzyme. Calculated D-values ranged from 4.6 to 117.5 min at 600 MPa/50C and 400 MPa/25C, respectively. Pressures in excess of 500 MPa resulted in sufficiently fast inactivation rates for economic viability of the process. Binh, Van Loey, Fachin, Verlent, Indrawati, and Hendrickx (2002a, 2002b) studied the kinetics of inactivation of strawberry PME. The combined effect of pressure and temperature on inactivation kinetics followed a fractional-conversion model; purified strawberry PME was more stable toward high-pressure treatment than PME from oranges and bananas. Ly-Nguyen, Van Loey, Fachin, Verlent, and Hendrickx (2002) showed that the inactivation of banana PME during heating at temperatures between 65 and 72.5C followed first-order kinetics, and that the effect of pressure treatment at 600-700 MPa and 10C could be described using a fractional-conversion model. Stoforos, Crelier, Robert, and Taoukis (2002) demonstrated that under ambient pressure, tomato PME inactivation rates increased with temperature, with the highest rate obtained at 75C; the inactivation rates were dramatically reduced as soon as the processing pressure was raised above ambient, and high inactivation rates were obtained again only at pressures higher than 700 MPa. Riahi and Ramaswamy (2003) studied the high-pressure inactivation kinetics of PME isolated from a variety of sources and showed that PME from a microbial source was more resistant to pressure inactivation than PME from orange peel. Almost a full decimal reduction in the activity of commercial PME was achieved at 400 MPa within 20 min.
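For readers less familiar with inactivation kinetics, a first-order model is usually reported through the decimal reduction time D = ln(10)/k, so that residual activity is A/A0 = 10^(-t/D). The short sketch below applies the two extreme D-values quoted from Nienaber and Shellhammer (2001); the hold times are our illustrative choices.

```python
def residual_activity(t_min: float, d_value_min: float) -> float:
    """Residual fraction A/A0 under first-order inactivation, via the D-value."""
    return 10.0 ** (-t_min / d_value_min)

for d_value, condition in ((4.6, "600 MPa / 50C"), (117.5, "400 MPa / 25C")):
    for t in (5.0, 20.0):
        print(f"{condition}: {t:4.0f} min hold -> "
              f"{residual_activity(t, d_value):.1%} PME remaining")

# At 600 MPa/50C a 5-min hold leaves ~8% activity; at 400 MPa/25C even a
# 20-min hold leaves ~68% -- illustrating why pressures above 500 MPa are
# described as the economically viable region.
```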
Verlent, Van Loey, Smout, Duvetter, Nguyen, and Hendrickx (2004) indicated that the optimal temperature for tomato pectin methylesterase shifted to higher values at elevated pressure compared to atmospheric pressure, creating possibilities for rheology improvements through the application of high pressure.
Castro, Van Loey, Saraiva, Smout, and Hendrickx (2006) accurately described the inactivation of the labile fraction under mild-heat and high-pressure conditions by a fractional-conversion model, while a biphasic model was used to estimate the inactivation rate constants of both fractions under more drastic conditions of temperature and pressure (10-64C, 0.1-800 MPa). At pressures lower than 300 MPa and temperatures higher than 54C, an antagonistic effect of pressure and temperature was observed.
Balogh, Smout, Binh, Van Loey, and Hendrickx (2004) observed that the inactivation kinetics of carrot PME followed first-order kinetics over a range of pressures and temperatures (650-800 MPa, 10-40C). Enzyme stability under heat and pressure was reported to be lower in carrot juice and purified PME preparations than in carrots.
The presence of pectinesterase (PE) reduces the quality of citrus juices by destabilizing clouds. Generally, inactivation of the enzyme is accomplished by heat, resulting in a loss of fresh fruit flavor in the juice. High pressure processing can be used to bypass the use of extreme heat in the processing of fruit juices. Goodner, Braddock, and Parish (1998) showed that higher pressures (>600 MPa) caused instantaneous inactivation of the heat-labile form of the enzyme but did not inactivate the heat-stable form of PE in orange and grapefruit juices. PE activity was totally lost in orange juice, whereas complete inactivation was not possible in grapefruit juice. Orange juice pressurized at 700 MPa for 1 min had no cloud loss for more than 50 days. Broeck, Ludikhuyze, Van Loey, and Hendrickx (2000) studied the combined pressure-temperature inactivation of the labile fraction of orange PE over a range of pressures (0.1 to 900 MPa) and temperatures (15 to 65C). The pressure and temperature dependence of the inactivation rate constants of the labile fraction was quantified using the well-known Eyring and Arrhenius relations. The stable fraction was inactivated at temperatures higher than 75C. Acidification (pH 3.7) enhanced the thermal inactivation of the stable fraction, whereas the addition of Ca2+ ions (1 M) suppressed inactivation. At elevated pressure (up to 900 MPa), an antagonistic effect of pressure and temperature on inactivation of the stable fraction was observed. Ly-Nguyen, Van Loey, Smout, Ozcan, Fachin, Verlent, Vu-Truong, Duvetter, and Hendrickx (2003) investigated the effect of combined heat and pressure treatments on the inactivation of purified carrot PE, which followed a fractional-conversion model. The thermally stable fraction of the enzyme could not be inactivated. At lower pressures (<300 MPa) and higher temperatures (>50C), an antagonistic effect of pressure and heat was observed.
High pressure induces conformational changes in polygalacturonase (PG), causing reduced substrate binding affinity and enzyme inactivation. Eun, Seok, and Wan (1999) studied the effect of high-pressure treatment on PG from Chinese cabbage, with a view to preventing the softening and spoilage of plant-based foods such as kimchi without compromising quality. PG was inactivated by the application of pressures higher than 200 MPa for 1 min. Fachin, Van Loey, Indrawati, Ludikhuyze, and Hendrickx (2002) investigated the stability of tomato PG at different temperatures and pressures. The combined pressure-temperature inactivation (300-600 MPa/5-50C) of tomato PG was described by a fractional-conversion model, which points to first-order inactivation kinetics of a pressure-sensitive enzyme fraction and to the occurrence of a pressure-stable PG fraction. Fachin, Smout, Verlent, Binh, Van Loey, and Hendrickx (2004) indicated that over the combined pressure-temperature range (5-55C/100-600 MPa), the inactivation of the heat-labile portion of purified tomato PG followed first-order kinetics. The heat-stable fraction of the enzyme showed pressure stability very similar to that of the heat-labile portion.
Peeters, Fachin, Smout, Van Loey, and Hendrickx (2004) demonstrated that the effect of high pressure was identical on the heat-stable and heat-labile fractions of tomato PG. The isoenzyme of PG was detected in thermally treated (140C for 5 min) tomato pieces and tomato juice, whereas no PG was found in pressure-treated tomato juice or pieces.
Verlent, Van Loey, Smout, Duvetter, and Hendrickx (2004) investigated the effect of high pressure (0.1 and 500 MPa) and temperature (25-80C) on purified tomato PG. At atmospheric pressure, the optimum temperature for the enzyme was found to be 55-60C, and it decreased with increasing pressure. The enzyme activity was reported to decrease with increasing pressure at constant temperature.
Shook, Shellhammer, and Schwartz (2001) studied the ability of high pressure to inactivate lipoxygenase, PE, and PG in diced tomatoes. The processing conditions used were 400, 600, and 800 MPa for 1, 3, and 5 min at 25 and 45C. The magnitude of the applied pressure had a significant effect in inactivating lipoxygenase and PG, with complete loss of activity occurring at 800 MPa. PE was very resistant to the pressure treatment.
Polyphenoloxidase and Peroxidase
Polyphenoloxidase (PPO) and peroxidase (POD), the enzymes responsible for color and flavor loss, can be selectively inactivated by a combined treatment of pressure and temperature. Gomes and Ledward (1996) studied the effects of pressure treatment (100-800 MPa for 1-20 min) on commercial PPO from mushrooms, potatoes, and apples. Castellari, Matricardi, Arfelli, Rovere, and Amati (1997) demonstrated that there was only limited inactivation of grape PPO at pressures between 300 and 600 MPa; at 900 MPa, a low level of PPO activity remained. In order to reach complete inactivation, it may be necessary to use high-pressure treatment in conjunction with a mild thermal treatment (40-50C). Weemaes, Ludikhuyze, Broeck, and Hendrickx (1998) studied the pressure stabilities of PPO from apples, avocados, grapes, pears, and plums at pH 6-7. These PPOs differed in pressure stability: inactivation of PPO from apple, grape, avocado, and pear at room temperature (25C) became noticeable at approximately 600, 700, 800, and 900 MPa, respectively, and followed first-order kinetics, while plum PPO was not inactivated at room temperature by pressures up to 900 MPa. Rastogi, Eshtiaghi, and Knorr (1999) studied the inactivation effects of high hydrostatic pressure (100-600 MPa) combined with heat treatment (0-60C) on POD and PPO, in order to develop high-pressure-processed red grape juice with a stable shelf life. The studies showed that the lowest POD (55.75%) and PPO (41.86%) activities were found at 60C, with pressures of 600 and 100 MPa, respectively. MacDonald and Schaschke (2000) showed that for PPO, temperature and pressure individually appeared to have similar effects, whereas holding time was not significant; for POD, on the other hand, temperature as well as the interaction between temperature and holding time had the greatest effect on activity. Namkyu, Seunghwan, and Kyung (2002) showed that mushroom PPO is highly pressure stable: exposure to 600 MPa for 10 min reduced PPO activity by 7%, and further exposure had no denaturing effect, while compression for 10 and 20 min at up to 800 MPa reduced activity by 28 and 43%, respectively.
Rapeanu, Van Loey, Smout, and Hendrickx (2005) indicated that the thermal and/or high-pressure inactivation of grape PPO followed first-order kinetics. A third-degree polynomial described the temperature/pressure dependence of the inactivation rate constants. Pressure and temperature were reported to act synergistically, except in the high-temperature (≥45C), low-pressure (≤300 MPa) region, where an antagonistic effect was observed.
Gomes, Sumner, and Ledward (1997) showed that the application of increasing pressures led to a gradual reduction in papain activity. A decrease in activity of 39% was observed when the enzyme solution was initially activated with phosphate buffer (pH 6.8) and subjected to 800 MPa at ambient temperature for 10 min, while 13% of the original activity remained when the enzyme solution was treated at 800 MPa and 60C for 10 min. In Tris buffer at pH 6.8, after treatment at 800 MPa and 20C, papain activity loss was approximately 24%. The inactivation of the enzyme is due to an induced change at the active site, causing loss of activity without major conformational changes; this loss of activity was attributed to oxidation of the thiolate ion present at the active site.
Weemaes, Cordt, Goossens, Ludikhuyze, Hendrickx, Heremans, and Tobback (1996) studied the effects of pressure and temperature on the activity of three different alpha-amylases from Bacillus subtilis, Bacillus amyloliquefaciens, and Bacillus licheniformis. Changes in the conformation of Bacillus licheniformis, Bacillus subtilis, and Bacillus amyloliquefaciens amylases occurred at pressures of 110, 75, and 65 MPa, respectively. Bacillus licheniformis amylase was more stable than the amylases from Bacillus subtilis and Bacillus amyloliquefaciens to the combined heat/pressure treatment.
Riahi and Ramaswamy (2004) demonstrated that pressure inactivation of amylase in apple juice was significantly (P < 0.01) influenced by pH, pressure, holding time, and temperature. The inactivation was described using a bi-phasic model. The application of high pressure was shown to completely inactivate amylase. The importance of the pressure-pulse and pressure-hold approaches for inactivation of amylase was also demonstrated.
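A bi-phasic model of the kind invoked here is commonly written as the sum of two first-order decays: a pressure-labile fraction f with rate constant k1 and a resistant fraction (1 - f) with a much smaller k2. The parameter values in the sketch below are invented purely to show the characteristic shape; they are not fitted to the amylase data.

```python
import math

def biphasic_activity(t_min: float, f_labile: float, k1: float, k2: float) -> float:
    """Residual activity A/A0 for a two-fraction first-order model."""
    return f_labile * math.exp(-k1 * t_min) + (1.0 - f_labile) * math.exp(-k2 * t_min)

for t in (0, 5, 15, 60):
    print(t, round(biphasic_activity(t, f_labile=0.8, k1=0.5, k2=0.01), 3))
# 0 -> 1.0, 5 -> 0.256, 15 -> 0.173, 60 -> 0.11: a fast initial drop while the
# labile 80% is destroyed, then the slowly decaying "tail" of the resistant 20%.
```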
High pressure denatures proteins, depending on the protein type, the processing conditions, and the applied pressure. During denaturation, proteins may dissolve or precipitate on the application of high pressure. These changes are generally reversible in the pressure range 100-300 MPa and irreversible at pressures higher than 300 MPa. Denaturation may be due to the destruction of hydrophobic and ion-pair bonds and the unfolding of molecules. At higher pressures, oligomeric proteins tend to dissociate into subunits, becoming vulnerable to proteolysis; monomeric proteins do not show any change in proteolysis with increasing pressure (Thakur and Nelson, 1998).
High-pressure effects on proteins are related to the rupture of non-covalent interactions within protein molecules, and to the subsequent reformation of intra- and inter-molecular bonds within or between molecules. Different types of interactions contribute to the secondary, tertiary, and quaternary structure of proteins. The quaternary structure is held mainly by hydrophobic interactions, which are very sensitive to pressure. Significant changes in tertiary structure are observed beyond 200 MPa. However, a reversible unfolding of small proteins such as ribonuclease A occurs at higher pressures (400 to 800 MPa), showing that the volume and compressibility changes during denaturation are not completely dominated by the hydrophobic effect. Denaturation is a complex process involving intermediate forms leading to multiple denatured products. Secondary structure changes take place at very high pressures, above 700 MPa, leading to irreversible denaturation (Balny and Masson, 1993).
Figure 1 General scheme for the pressure-temperature phase diagram of proteins (from Messens, Van Camp, and Huyghebaert, 1997).
When the pressure increases to about 100 MPa, the denaturation temperature of a protein increases, whereas at higher pressures, the temperature of denaturation usually decreases. This results in the elliptical phase diagram of native versus denatured protein shown in Fig. 1. A practical consequence is that under elevated pressures, proteins usually denature at room temperature rather than at higher temperatures. The phase diagram also specifies the pressure-temperature range in which the protein maintains its native structure. Zone I indicates that at high temperatures, a rise in denaturation temperature is found with increasing pressure. Zone II indicates that below the maximum transition temperature, protein denaturation occurs at lower temperatures under higher pressures. Zone III shows that below the temperature corresponding to the maximum transition pressure, protein denaturation occurs at lower pressures using lower temperatures (Messens, Van Camp, and Huyghebaert, 1997).
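The elliptical boundary of Fig. 1 follows from a second-order (Hawley-type) expansion of the free energy of unfolding in pressure and temperature. The sketch below is purely schematic: the quadratic construction is standard, but all coefficients are invented to reproduce the qualitative shape of the diagram, not fitted to any real protein.

```python
def delta_g_unfolding(p_mpa: float, t_c: float, p0: float = 200.0, t0: float = 40.0) -> float:
    """Schematic dG of unfolding; dG > 0 means the native state is stable.
    The dG = 0 contour of this quadratic form is a tilted ellipse in (P, T)."""
    dp, dt = p_mpa - p0, t_c - t0
    return 40.0 - 0.0004 * dp**2 - 0.02 * dt**2 + 0.004 * dp * dt

for p, t in ((0.1, 25.0), (0.1, 90.0), (600.0, 25.0), (200.0, 60.0)):
    state = "native" if delta_g_unfolding(p, t) > 0 else "denatured"
    print(f"{p:6.1f} MPa, {t:4.1f} C -> {state}")
# Ambient (0.1 MPa, 25C) is native; heat alone (0.1 MPa, 90C) denatures;
# pressure alone (600 MPa, 25C) denatures; moderate pressure at 60C stays
# native -- reproducing the zones described above.
```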
The application of high pressure has been shown to destabilize casein micelles in reconstituted skim milk, with the size distribution of spherical casein micelles decreasing from 200 to 120 nm; maximum changes have been reported to occur between 150 and 400 MPa at 20C. The pressure treatment results in reduced turbidity and increased lightness, which leads to the formation of a virtually transparent skim milk (Shibauchi, Yamamoto, and Sagara, 1992; Derobry, Richard, and Hardy, 1994). Gels produced from high-pressure-treated skim milk showed improved rigidity and gel breaking strength (Johnston, Austin, and Murphy, 1992). Garcia, Olano, Ramos, and Lopez (2000) showed that pressure treatment at 25C considerably reduced micelle size, while pressurization at higher temperatures progressively increased micelle dimensions. Anema, Lowe, and Stockmann (2005) indicated that a small decrease in the size of casein micelles was observed at 100 MPa, with slightly greater effects at higher temperatures or longer pressure treatments. At pressures >400 MPa, the casein micelles disintegrated; the effect was more rapid at higher temperatures, although the final size was similar in all samples regardless of pressure or temperature. At 200 MPa and 10C, the casein micelle size decreased slightly on heating, whereas at higher temperatures the size increased as a result of aggregation. Huppertz, Fox, and Kelly (2004a) showed that the size of casein micelles increased by 30% upon high-pressure treatment of milk at 250 MPa, and micelle size dropped by 50% at 400 or 600 MPa.
Huppertz, Fox, and Kelly (2004b) demonstrated that high-pressure treatment of milk at 100-600 MPa resulted in considerable solubilization of alpha-s1- and beta-casein, which may be due to the solubilization of colloidal calcium phosphate and the disruption of hydrophobic interactions. On storage of pressure-treated milk at 5C, the dissociation of casein was largely irreversible, but at 20C considerable re-association of casein was observed. The hydration of the casein micelles increased on pressure treatment (100-600 MPa) due to induced interactions between caseins and whey proteins. Pressure treatment increased the levels of alpha-s1- and beta-casein in the soluble phase of milk and produced casein micelles with properties different from those in untreated milk. Huppertz, Fox, and Kelly (2004c) demonstrated that casein micelle size was not influenced by pressures of less than 200 MPa, but a pressure of 250 MPa increased micelle size by 25%, while pressures of 300 MPa or greater irreversibly reduced the size to 50% of that in untreated milk. Denaturation of alpha-lactalbumin did not occur at pressures less than or equal to 400 MPa, whereas beta-lactoglobulin was denatured at pressures greater than 100 MPa.
Galazka, Ledward, Sumner, and Dickinson (1997) reported a loss of surface hydrophobicity on application of 300 MPa in dilute solution. Pressurizing beta-lactoglobulin at 450 MPa for 15 minutes resulted in reduced solubility in water. High-pressure treatment induced extensive protein unfolding and aggregation when BSA was pressurized at 400 MPa. Beta-lactoglobulin appears to be more sensitive to pressure than alpha-lactalbumin. Olsen, Ipsen, Otte, and Skibsted (1999) monitored the state of aggregation and the thermal gelation properties of pressure-treated beta-lactoglobulin immediately after depressurization and after storage for 24 h at 50C. A pressure of 150 MPa applied for 30 min, or pressures higher than 300 MPa applied for 0 or 30 min, led to the formation of soluble aggregates. When continued for 30 min, a pressure of 450 MPa caused gelation of the 5% beta-lactoglobulin solution. Iametti, Tansidico, Bonomi, Vecchio, Pittia, Rovere, and Dall'Aglio (1997) studied irreversible modifications in the tertiary structure, surface hydrophobicity, and association state of beta-lactoglobulin when solutions of the protein at neutral pH and at different concentrations were exposed to pressure. Only minor irreversible structural modifications were evident, even for treatments as intense as 15 min at 900 MPa. The occurrence of irreversible modifications was time-dependent at 600 MPa but was complete within 2 min at 900 MPa. The irreversibly modified protein was soluble, but some covalent aggregates were formed. Subirade, Loupil, Allain, and Paquin (1998) showed the effect of dynamic high pressure on the secondary structure of beta-lactoglobulin. The thermal and pH sensitivity of pressure-treated beta-lactoglobulin was different, suggesting that the two forms were stabilized by different electrostatic interactions. Walker, Farkas, Anderson, and Goddik (2004) used high-pressure processing (510 MPa for 10 min at 8 or 24C) to induce unfolding of beta-lactoglobulin and characterized the protein structure and surface-active properties. The secondary structure of the protein processed at 8C appeared to be unchanged, whereas at 24C the alpha-helix structure was lost. Tertiary structure changed on processing at either temperature. Model solutions containing the pressure-treated beta-lactoglobulin showed a significant decrease in surface tension. Izquierdo, Alli, Gomez, Ramaswamy, and Yaylayan (2005) demonstrated that under high-pressure treatment (100-300 MPa), beta-lactoglobulin AB was completely hydrolyzed by pronase and alpha-chymotrypsin. Hinrichs and Rademacher (2005) showed that the denaturation kinetics of beta-lactoglobulin followed second-order kinetics, while alpha-lactalbumin followed 2.5th-order kinetics; alpha-lactalbumin was more resistant to denaturation than beta-lactoglobulin. The activation volume for denaturation of beta-lactoglobulin was reported to decrease with increasing temperature, and the activation energy increased with pressure up to 200 MPa, beyond which it decreased. This demonstrated the unfolding of the protein molecules.
Drake, Harrison, Asplund, Barbosa-Canovas, and Swanson (1997) demonstrated that the percentage moisture and wet-weight yield of cheese from pressure-treated milk were higher than those of pasteurized- or raw-milk cheese. The microbial quality was comparable, and some textural defects were reported due to the excess moisture content. Arias, Lopez, and Olano (2000) showed that high-pressure treatment at 200 MPa significantly reduced rennet coagulation times relative to control samples. Pressurization at 400 MPa led to coagulation times similar to those of the control, except for milk treated at pH 7.0, with or without readjustment of pH to 6.7, which presented significantly longer coagulation times than its non-pressure-treated counterpart.
Hinrichs and Rademacher (2004) demonstrated that the isobaric (200-800 MPa) and isothermal (-2 to 70C) denaturation of beta-lactoglobulin and alpha-lactalbumin in whey protein followed 3rd- and 2nd-order kinetics, respectively. Isothermal pressure denaturation of beta-lactoglobulins A and B did not differ significantly, and an increase in temperature resulted in an increase in the denaturation rate. At pressures higher than 200 MPa, the denaturation rate was limited by the aggregation rate, while the pressure resulted in the unfolding of molecules. The kinetic parameters of denaturation were estimated using a single-step non-linear regression method, which allowed a global fit of the entire data set. Huppertz, Fox, and Kelly (2004d) examined the high-pressure-induced denaturation of alpha-lactalbumin and beta-lactoglobulin in dairy systems. The higher level of pressure-induced denaturation of both proteins in milk compared with whey was attributed to the casein micelles and colloidal calcium phosphate present in milk but absent from whey.
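For the nth-order kinetics invoked in these studies, the rate law dC/dt = -k C^n integrates in closed form for n ≠ 1, which is the model a global nonlinear regression would fit across pressure-temperature combinations. A minimal sketch, with illustrative rate constants rather than the published estimates:

```python
import numpy as np

def nth_order_fraction(k, n, t):
    """Residual native fraction C/C0 from dC/dt = -k*C^n (n != 1),
    with C0 normalized to 1: C(t) = (1 + (n-1)*k*t)^(1/(1-n)).
    Rate constants below are illustrative, not the published estimates."""
    return (1.0 + (n - 1.0) * k * t) ** (1.0 / (1.0 - n))

t = np.linspace(0, 600, 7)                 # treatment time, s
blg = nth_order_fraction(5e-3, 2.0, t)     # beta-lactoglobulin-like, n = 2
ala = nth_order_fraction(5e-4, 2.5, t)     # alpha-lactalbumin-like, n = 2.5
for ti, b, a in zip(t, blg, ala):
    print(f"t = {ti:5.0f} s   n=2: {b:.3f}   n=2.5: {a:.3f}")
```

In a global fit of the kind described above, k would itself be written as a function of pressure and temperature (via activation volume and energy) and all observations regressed in a single step, for example with scipy.optimize.curve_fit.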
The conformation of BSA was reported to remain fairly stable at 400 MPa due to a high number of disulfide bonds, which are known to stabilize its three-dimensional structure (Hayakawa, Kajihara, Morikawa, Oda, and Fujio, 1992). Kieffer and Wieser (2004) indicated that the extension resistance and extensibility of wet gluten were markedly influenced by high pressure (up to 800 MPa), while the temperature and duration of pressure treatment (30-80C for 2-20 min) had a comparatively smaller effect. The application of high pressure resulted in a marked decrease in protein extractability due to the restructuring of disulfide bonds under high pressure, leading to the incorporation of alpha- and gamma-gliadins into the glutenin aggregate. A change in secondary structure following high-pressure treatment was also reported.
The pressure treatment of myosin led to head-to-head interactions that formed oligomers (clumps), which became more compact and larger during storage at constant pressure. Monomeric myosin molecules increased even after pressure treatment at 210 MPa for 5 minutes, and no gelation was observed for pressure treatments up to 210 MPa for 30 minutes. Pressure treatment also did not affect the original helical structure of the tail in the myosin monomers. Angsupanich, Edde, and Ledward (1999) showed that high-pressure-induced denaturation of myosin led to the formation of structures that contained hydrogen bonds and were additionally stabilized by disulphide bonds.
Application of 750 MPa for 20 minutes resulted in dimerization of metmyoglobin in the pH range of 6-10, although maximum dimerization did not occur at the isoelectric pH (6.9). Under acidic pH conditions, no dimers were formed (Defaye and Ledward, 1995). Zipp and Kauzmann (1973) showed the formation of a precipitate when metmyoglobin was pressurized (750 MPa for 20 minutes) near the isoelectric point; the precipitate redissolved slowly during storage. Pressure treatment had no effect on lipid oxidation in minced meat packed in air at pressures below 300 MPa, while oxidation increased proportionally at higher pressures; on exposure to such pressures, minced meat in contact with air oxidized rapidly. Pressures greater than 300-400 MPa caused marked denaturation of both myofibrillar and sarcoplasmic proteins in washed pork muscle and pork mince (Ananth, Murano, and Dickson, 1995). Chapleau and Lamballerie (2003) showed that high-pressure treatment induced a threefold increase in the surface hydrophobicity of myofibrillar proteins between 0 and 450 MPa. Chapleau, Mangavel, Compoint, and Lamballerie (2004) reported that high pressure modified the secondary structure of myofibrillar proteins extracted from cattle carcasses. Irreversible changes and aggregation were reported at pressures higher than 300 MPa, which can potentially affect the functional properties of meat products. Lamballerie, Perron, Jung, and Cheret (2003) indicated that high-pressure treatment increases cathepsin D activity, and that pressurized myofibrils are more susceptible to cathepsin D action than non-pressurized myofibrils. The highest cathepsin D activity was observed at 300 MPa. Carlez, Veciana, and Cheftel (1995) demonstrated that L color values increased significantly in meat treated at 200-350 MPa, the meat becoming pink, and the a value decreased in meat treated at 400-500 MPa to give a grey-brown color. The total extractable myoglobin decreased in meat treated at 200-500 MPa, while the metmyoglobin content of meat increased and the oxymyoglobin decreased at 400-500 MPa. Meat discoloration from pressure processing resulted in a whitening effect at 200-300 MPa due to globin denaturation and/or haem displacement/release, and in oxidation of ferrous myoglobin to ferric myoglobin at pressures higher than 400 MPa.
The conformation of the main protein component of egg white, ovalbumin, remains fairly stable when pressurized at 400 MPa, possibly due to the four disulfide bonds and non-covalent interactions stabilizing the three-dimensional structure of ovalbumin (Hayakawa, Kajihara, Morikawa, Oda, and Fujio, 1992). Hayashi, Kawamura, Nakasa, and Okinada (1989) reported irreversible denaturation of egg albumin at 500-900 MPa, with a concomitant increase in susceptibility to subtilisin. Zhang, Li, and Tatsumi (2005) demonstrated that pressure treatment (200-500 MPa) resulted in denaturation of ovalbumin. The surface hydrophobicity of ovalbumin was found to increase with increasing pressure, and the presence of polysaccharide protected the protein against denaturation. Iametti, Donnizzelli, Pittia, Rovere, Squarcina, and Bonomi (1999) showed that the addition of NaCl or sucrose to egg albumin prior to high-pressure treatment (up to 10 min at 800 MPa) prevented insolubilization or gel formation after pressure treatment. As a consequence of protein unfolding, the treated albumin had increased viscosity but retained its foaming and heat-gelling properties. Farr (1990) reported the modification of the functionality of egg proteins. Egg yolk formed a gel when subjected to a pressure of 400 MPa for 30 minutes at 25C, kept its original color, and was soft and adhesive. The hardness of the pressure-treated gel increased and its adhesiveness decreased with an increase in pressure. Plancken, Van Loey, and Hendrickx (2005) showed that the application of high pressure (400-700 MPa) to egg white solution resulted in an increase in turbidity, surface hydrophobicity, exposed sulfhydryl content, and susceptibility to enzymatic hydrolysis, and in a decrease in protein solubility, total sulfhydryl content, denaturation enthalpy, and trypsin inhibitory activity. The pressure-induced changes in these properties were shown to depend on the pressure-temperature combination and the pH of the solution. Speroni, Puppo, Chapleau, Lamballerie, Castellani, Anon, and Anton (2005) indicated that the application of high pressure (200-600 MPa) at 20C to low-density lipoproteins did not change their solubility irrespective of pH, whereas aggregation and protein denaturation were drastically enhanced at pH 8. Further, the application of high pressure under alkaline pH conditions resulted in decreased droplet flocculation of low-density lipoprotein dispersions.
The minimum pressure required for inducing gelation of soya proteins was reported to be 300 MPa for 10-30 minutes, and the gels formed were softer, with a lower elastic modulus, in comparison with heat-treated gels (Okamoto, Kawamura, and Hayashi, 1990). The treatment of soya milk at 500 MPa for 30 min changed it from a liquid to a solid state, whereas at lower pressures and at 500 MPa for 10 minutes, the milk remained liquid but showed improved emulsifying activity and stability (Kajiyama, Isobe, Uemura, and Noguchi, 1995). The hardness of tofu gels produced by high-pressure treatment at 300 MPa for 10 minutes was comparable to that of heat-induced gels. Puppo, Chapleau, Speroni, Lamballerie, Michel, Anon, and Anton (2004) demonstrated that the application of high pressure (200-600 MPa) to soya protein isolate at pH 8.0 resulted in an increase in protein hydrophobicity and aggregation, a reduction of free sulfhydryl content, and a partial unfolding of the 7S and 11S fractions. A change in the secondary structure leading to a more disordered structure was also reported. At pH 3.0, the protein was partially denatured and insoluble aggregates were formed; the major molecular unfolding resulted in decreased thermal stability and increased protein solubility and hydrophobicity. Puppo, Speroni, Chapleau, Lamballerie, Anon, and Anton (2005) studied the effect of high pressure (200, 400, and 600 MPa for 10 min at 10C) on the emulsifying properties of soybean protein isolates at pH 3 and 8 (e.g., oil droplet size, flocculation, interfacial protein concentration, and composition). The application of pressures higher than 200 MPa at pH 8 resulted in a smaller droplet size and an increase in the level of depletion flocculation. However, a similar effect was not observed at pH 3. Due to the application of high pressure, bridging flocculation decreased and the percentage of adsorbed proteins increased irrespective of the pH conditions. Moreover, the ability of the protein to adsorb at the oil-water interface increased. Zhang, Li, Tatsumi, and Isobe (2005) showed that high-pressure treatment resulted in the formation of more hydrophobic regions in soy protein; the protein dissociated into subunits, which in some cases formed insoluble aggregates. High-pressure denaturation of beta-conglycinin (7S) and glycinin (11S) occurred at 300 and 400 MPa, respectively. The gels formed had the desirable strength and a cross-linked network microstructure.
Soybean whey is a by-product of tofu manufacture. It is a good source of peptides, proteins, oligosaccharides, and isoflavones, and can be used in special foods for elderly persons, athletes, etc. Prestamo and Penas (2004) studied the antioxidative activity of soybean whey proteins and their pepsin and chymotrypsin hydrolysates. The chymotrypsin hydrolysate showed a higher antioxidative activity than the non-hydrolyzed protein, but the pepsin hydrolysate showed the opposite trend. High-pressure processing at 100 MPa increased the antioxidative activity of soy whey protein but decreased the antioxidative activity of the hydrolysates. High-pressure processing increased the pH of the protein hydrolysates. Penas, Prestamo, and Gomez (2004) demonstrated that the application of high pressure (100 and 200 MPa, 15 min, 37C) facilitated the hydrolysis of soya whey protein by pepsin, trypsin, and chymotrypsin. The highest level of hydrolysis occurred at a treatment pressure of 100 MPa. After hydrolysis, 5 peptides under 14 kDa were reported with trypsin and chymotrypsin, and 11 peptides with pepsin.
COMBINATION OF HIGH-PRESSURE TREATMENT WITH OTHER NON-THERMAL PROCESSING METHODS
Many researchers have combined the use of high pressure with other non-thermal operations in order to explore the possibility of synergy between processes. Such attempts are reviewed in this section.
Crawford, Murano, Olson, and Shenoy (1996) studied the combined effect of high pressure and gamma-irradiation for inactivating Clostridium sporogenes spores in chicken breast. Application of high pressure reduced the radiation dose required to produce chicken meat with extended shelf life. The application of high pressure (600 MPa for 20 min at 80C) reduced the irradiation dose required for a one-log reduction of Clostridium sporogenes from 4.2 kGy to 2.0 kGy. Mainville, Montpetit, Durand, and Farnworth (2001) studied the combined effect of irradiation and high pressure on the microflora and microorganisms of kefir. Irradiation of kefir at 5 kGy and high-pressure treatment (400 MPa for 5 or 30 min) inactivated the bacteria and yeast in kefir while leaving the proteins and lipids unchanged.
The exposure of microbial cells and spores to an alternating current (50 Hz) resulted in the release of intracellular materials, causing loss or denaturation of cellular components responsible for the normal functioning of the cell. The lethal damage to the microorganisms was enhanced when the organisms were exposed to an alternating current before and after the pressure treatment. High-pressure treatment at 300 MPa for 10 min for Escherichia coli cells and 400 MPa for 30 min for Bacillus subtilis spores, applied after the alternating-current treatment, resulted in reduced surviving fractions of both organisms. The combined treatment was also shown to reduce the tolerance of the microorganisms to other challenges (Shimada and Shimahara, 1985, 1987; Shimada, 1992).
Pretreatment with ultrasonic waves (100 W/cm2 for 25 min at 25C) followed by high pressure (400 MPa for 25 min at 15C) was shown to result in complete inactivation of Rhodotorula rubra. Neither ultrasonic nor high-pressure treatment alone was found to be effective (Knorr, 1995).
Carbon Dioxide and Argon
Heinz and Knorr (1995) reported a 3-log reduction in cultures pretreated with supercritical CO2. The effect of the pretreatment on the germination of Bacillus subtilis endospores was monitored. The combination of high pressure and mild heat treatment was the most effective in reducing germination (95% reduction), but no spore inactivation was observed.
Park, Lee, and Park (2002) studied the combination of high-pressure carbon dioxide and high hydrostatic pressure as a non-thermal processing technique to enhance the safety and shelf life of carrot juice. The combined treatment of carbon dioxide (4.90 MPa) and high pressure (300 MPa) resulted in complete destruction of aerobes. Increasing the pressure to 600 MPa in the presence of carbon dioxide resulted in reduced activities of polyphenoloxidase (11.3%), lipoxygenase (8.8%), and pectin methylesterase (35.1%). Corwin and Shellhammer (2002) studied the combined effect of high-pressure treatment and CO2 on the inactivation of pectinmethylesterase, polyphenoloxidase, Lactobacillus plantarum, and Escherichia coli. An interaction was found between CO2 and pressure at 25 and 50C for pectinmethylesterase and polyphenoloxidase, respectively. The activity of polyphenoloxidase was decreased by CO2 at all pressure treatments. The interaction between CO2 and pressure was significant for Lactobacillus plantarum, with a significant decrease in survivors due to the addition of CO2 at all pressures studied. No significant effect of CO2 addition on E. coli survivors was seen. Truong, Boff, Min, and Shellhammer (2002) demonstrated that the addition of CO2 (0.18 MPa) during high-pressure processing (600 MPa, 25C) of fresh orange juice increased the rate of PME inactivation in Valencia orange juice. The addition of CO2 reduced the treatment time needed for an equivalent reduction in PME activity from 346 s to 111 s, although the overall degree of PME inactivation remained unaltered.
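Under a first-order inactivation assumption, the reported cut in treatment time for the same PME reduction (346 s to 111 s) translates directly into a rate-constant ratio. A small back-of-the-envelope check; the 3-log target below is hypothetical, chosen only to illustrate scale:

```python
import math

# Equal log reductions under first-order kinetics imply k1*t1 = k2*t2,
# so the time ratio equals the rate-constant ratio. Times from Truong et al.
t_hpp, t_hpp_co2 = 346.0, 111.0   # s
print(f"k_CO2 / k_HPP ~ {t_hpp / t_hpp_co2:.2f}")   # about a 3-fold speedup

# Hypothetical 3-log PME reduction target, used only for illustration:
logs = 3.0
k_hpp = logs * math.log(10) / t_hpp
print(f"k without CO2: {k_hpp:.4f} 1/s, with CO2: {k_hpp * t_hpp / t_hpp_co2:.4f} 1/s")
```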
Fujii, Ohtani, Watanabe, Ohgoshi, Fujii, and Honma (2002) studied the high-pressure inactivation of Bacillus cereus spores in water containing argon. At a pressure of 600 MPa, the addition of argon reportedly accelerated the inactivation of spores at 20C but had no effect on inactivation at 40C.
The complex physicochemical environment of milk exerted a strong protective effect on Escherichia coli against high hydrostatic pressure inactivation, reducing inactivation from 7 logs at 400 MPa to only 3 logs at 700 MPa in 15 min at 20C. A substantial improvement in inactivation efficiency at ambient temperature was achieved by the application of consecutive, short pressure treatments interrupted by brief decompressions. The combined application of high pressure (500 MPa) and natural antimicrobial peptides (lysozyme, 400 μg/ml, and nisin, 400 μg/ml) resulted in increased lethality toward Escherichia coli in milk (Garcia, Masschalck, and Michiels, 1999).
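Log reductions from sequential or combined hurdles are often treated as additive when the mechanisms act independently, an idealization that makes the arithmetic easy to check. In the sketch below, the initial load and the extra lethality credited to the antimicrobial peptides are assumed numbers, since the study reports increased lethality without a single summary figure:

```python
def remaining(n0_cfu_ml, log_reductions):
    """Survivors after treatments whose log reductions are assumed to add
    independently (an idealization of hurdle combinations)."""
    total = sum(log_reductions)
    return n0_cfu_ml / 10 ** total, total

n0 = 1e8  # CFU/ml, an assumed initial load
cases = {
    "700 MPa, 15 min, in milk": [3.0],
    "plus lysozyme/nisin (assumed extra 2 logs)": [3.0, 2.0],
}
for label, logs in cases.items():
    n, total = remaining(n0, logs)
    print(f"{label}: {total:.0f}-log total -> {n:.1e} CFU/ml")
```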
OPPORTUNITIES FOR HIGH-PRESSURE-ASSISTED PROCESSING
The inclusion of high-pressure treatment as a processing step within certain manufacturing flow sheets can lead to novel products as well as new process development opportunities. For instance, high pressure can precede a number of process operations such as blanching, dehydration, rehydration, frying, and solid-liquid extraction. Alternatively, processes such as gelation, freezing, and thawing, can be carried out under high pressure. This section reports on the use of high pressures in the context of selected processing operations.
Eshtiaghi and Knorr (1993) employed high pressure at around ambient temperature to develop a blanching process similar to hot-water or steam blanching, but without thermal degradation; this also minimized problems associated with water disposal. The application of pressure (400 MPa, 15 min, 20C) to potato samples not only caused blanching but also resulted in a four-log-cycle reduction in microbial count while retaining 85% of the ascorbic acid. Complete inactivation of polyphenoloxidase was achieved under the above conditions when 0.5% citric acid solution was used as the blanching medium. The addition of 1% CaCl2 solution to the medium also improved the texture and the density. The leaching of potassium from the high-pressure-treated sample was comparable with that from a 3-min hot-water blanching treatment (Eshtiaghi and Knorr, 1993). Thus, high pressure can be used as a non-thermal blanching method.
Dehydration and Osmotic Dehydration
The application of high hydrostatic pressure affects cell wall structure, leaving the cell more permeable, which leads to significant changes in the tissue architecture (Farr, 1990; Dornenburg and Knorr, 1994; Rastogi, Subramanian, and Raghavarao, 1994; Rastogi and Niranjan, 1998; Rastogi, Raghavarao, and Niranjan, 2005). Eshtiaghi, Stute, and Knorr (1994) reported that the application of pressure (600 MPa, 15 min at 70C) resulted in no significant increase in the drying rate during fluidized-bed drying of green beans and carrot. However, the drying rate significantly increased in the case of potato. This may be due to the relatively limited permeabilization of carrot and bean cells as compared to potato. The effects of chemical pre-treatment (NaOH and HCl treatment) on the rates of dehydration of paprika were compared with products pre-treated by applying high pressure or high-intensity electric field pulses (Fig. 2). High pressure (400 MPa for 10 min at 25C) and high-intensity electric field pulses (2.4 kV/cm, pulse width 300 μs, 10 pulses, pulse frequency 1 Hz) were found to result in drying rates comparable with those of the chemical pre-treatments. These physical pre-treatments, however, eliminated the use of chemicals (Ade-Omowaye, Rastogi, Angersbach, and Knorr, 2001).
Figure 2 (a) Effects of various pre-treatments such as hot water blanching, high pressure and high intensity electric field pulse treatment on dehydration characteristics of red paprika (b) comparison of drying time (from Ade-Omowaye, Rastogi, Angersbach, and Knorr, 2001).
Figure 3 (a) Variation of moisture and (b) solid content (based on initial dry matter content) with time during osmotic dehydration (from Rastogi and Niranjan, 1998).
Generally, osmotic dehydration is a slow process. Application of high pressure causes permeabilization of the cell structure (Dornenburg and Knorr, 1993; Eshtiaghi, Stute, and Knorr, 1994; Farr, 1990; Rastogi, Subramanian, and Raghavarao, 1994). This phenomenon has been exploited by Rastogi and Niranjan (1998) to enhance mass transfer rates during the osmotic dehydration of pineapple (Ananas comosus). High-pressure pre-treatments (100-800 MPa) were found to enhance both water removal and solid gain (Fig. 3). Measured diffusivity values for water were found to be four-fold greater, whilst solute (sugar) diffusivity values were found to be two-fold greater. Compression and decompression during the high-pressure pre-treatment itself caused the removal of a significant amount of water, which was attributed to cell wall rupture (Rastogi and Niranjan, 1998). Differential interference contrast microscopic examination showed the extent of cell wall break-up with applied pressure (Fig. 4). Sopanangkul, Ledward, and Niranjan (2002) demonstrated that the application of high pressure (100 to 400 MPa) could be used to accelerate mass transfer during ingredient infusion into foods. Application of pressure opened up the tissue structure and facilitated diffusion. However, pressures above 400 MPa also induced starch gelatinization, which hindered diffusion. The values of the diffusion coefficient were dependent on cell permeabilization and starch gelatinization. The maximum value of the diffusion coefficient observed represented an eight-fold increase over the values at ambient pressure.
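The several-fold increases in effective diffusivity reported here can be translated into processing-time savings with Crank's series solution of Fick's second law for an infinite slab. Sample geometry and the baseline diffusivity below are assumptions for illustration, not values from the cited work:

```python
import numpy as np

def slab_moisture_ratio(diff, t, half_thick, terms=50):
    """Crank's series for an infinite slab with uniform initial content:
    MR = (8/pi^2) * sum_n exp(-(2n+1)^2 pi^2 D t / (4 L^2)) / (2n+1)^2."""
    n = np.arange(terms)
    lam = (2 * n + 1) ** 2 * np.pi ** 2 / (4.0 * half_thick ** 2)
    return (8.0 / np.pi ** 2) * np.sum(np.exp(-lam * diff * t) / (2 * n + 1) ** 2)

half_thick = 2.5e-3   # m, assumed slab half-thickness
d_ambient = 1e-10     # m^2/s, assumed ambient-pressure diffusivity
for label, diff in [("untreated", d_ambient), ("HP pre-treated, 4x D", 4 * d_ambient)]:
    t_half = next((t for t in range(60, 72000, 60)
                   if slab_moisture_ratio(diff, t, half_thick) < 0.5), None)
    print(f"{label}: time to MR = 0.5 ~ {t_half / 60:.0f} min")
```

Because time scales inversely with diffusivity in this geometry, the four-fold diffusivity gain reported for water cuts the time to a given moisture ratio by roughly a factor of four.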
The synergistic effect of cell permeabilization due to high pressure and osmotic stress as the dehydration proceeds was demonstrated more clearly in the case of potato (Rastogi, Angersbach, and Knorr, 2000a, 2000b, 2003). The moisture content was reduced and the solid content increased in the case of samples treated at 400 MPa. The distribution of relative moisture (M/Mo) and solid (S/So) content as well as the cell permeabilization index (Zp) (shown in Fig. 5) indicates that the rate of change of moisture and solid content was very high at the interface and decreased towards the center (Rastogi, Angersbach, and Knorr, 2000a, 2000b, 2003).
Most dehydrated foods are rehydrated before consumption. Loss of solids during rehydration is a major problem associated with the use of dehydrated foods. Rastogi, Angersbach, Niranjan, and Knorr (2000c) studied the transient variation of moisture and solid content during rehydration of dried pineapples, which had been subjected to high-pressure treatment prior to a two-stage drying process consisting of osmotic dehydration and finish-drying at 25C (Fig. 6). The diffusion coefficients for water infusion as well as for solute diffusion were found to be significantly lower in high-pressure pre-treated samples. The observed decrease in the water diffusion coefficient was attributed to the permeabilization of cell membranes, which reduces the rehydration capacity (Rastogi and Niranjan, 1998). The solid infusion coefficient was also lower, and so was the release of cellular components, which form a gel network with divalent ions binding to de-esterified pectin (Basak and Ramaswamy, 1998; Eshtiaghi, Stute, and Knorr, 1994; Rastogi, Angersbach, Niranjan, and Knorr, 2000c). Eshtiaghi, Stute, and Knorr (1994) reported that high-pressure treatment in conjunction with subsequent freezing could improve mass transfer during rehydration of dried plant products and enhance product quality.
Figure 4 Microstructures of control and pressure-treated pineapple: (a) control; (b) 300 MPa; (c) 700 MPa. (1 cm = 41.83 μm) (from Rastogi and Niranjan, 1998).
Ahromrit, Ledward, and Niranjan (2006) explored the use of high pressures (up to 600 MPa) to accelerate water uptake kinetics during soaking of glutinous rice. The results showed that the length and the diameter of the rice grains were positively correlated with soaking time, pressure, and temperature. The water uptake kinetics was shown to follow the well-known Fickian model. The overall rate of water uptake and the equilibrium moisture content were found to increase with pressure and temperature.
Zhang, Ishida, and Isobe (2004) studied the effect of high-pressure treatment (300-500 MPa for 0-380 min at 20C) on the water uptake of soybeans and the resulting changes in their microstructure. NMR analysis indicated that water mobility in high-pressure-soaked soybeans was more restricted and its distribution much more uniform than in controls. SEM analysis revealed that high pressure changed the microstructures of the seed coat and hilum, which improved water absorption, and disrupted the individual spherical protein body structures. Additionally, DSC and SDS-PAGE analyses revealed that proteins were partially denatured during high-pressure soaking. Ibarz, Gonzalez, and Barbosa-Canovas (2004) developed kinetic models for the water absorption and cooking time of chickpeas with and without prior high-pressure treatment (275-690 MPa). Soaking was carried out at 25C for up to 23 h, and cooking was achieved by immersion in boiling water until the chickpeas became tender. As the soaking time increased, the cooking time decreased. High-pressure treatment for 5 min led to reductions in cooking time equivalent to those achieved by soaking for 60-90 min.
Ramaswamy, Balasubramaniam, and Sastry (2005) studied the effects of high-pressure (33, 400, and 700 MPa for 3 min at 24 and 55C) and irradiation (2 and 5 kGy) pre-treatments on the hydration behavior of navy beans by soaking the treated beans in water at 24 and 55C. Treating beans under moderate pressure (33 MPa) resulted in a high initial moisture uptake (0.59 to 1.02 kg/kg dry mass) and a reduced loss of soluble materials. The final moisture content after three hours of soaking was highest in irradiated beans (5 kGy), followed by high-pressure-treated beans (33 MPa, 3 min at 55C). Within the experimental range of the study, Peleg's model was found to satisfactorily describe the rate of water absorption of navy beans.
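Peleg's model is a simple two-parameter hyperbola, so it is easy to sketch how it describes water absorption; the parameter values below are illustrative, not the fitted navy-bean constants:

```python
def peleg_moisture(t, m0, k1, k2):
    """Peleg's two-parameter sorption model: M(t) = M0 + t / (k1 + k2*t).
    1/k1 is the initial uptake rate and M0 + 1/k2 the equilibrium moisture.
    Parameter values used below are illustrative assumptions."""
    return m0 + t / (k1 + k2 * t)

m0, k1, k2 = 0.12, 0.8, 0.9   # dry-basis moisture; k1 in h/(kg/kg); k2 in (kg/kg)^-1
for t in (0.5, 1, 2, 3, 6, 12):
    print(f"t = {t:4.1f} h   M = {peleg_moisture(t, m0, k1, k2):.3f} kg/kg (dry basis)")
print(f"equilibrium moisture ~ {m0 + 1 / k2:.3f} kg/kg (dry basis)")
```

Fitting k1 and k2 to soaking curves for each pre-treatment makes the comparison in the study concrete: a pre-treatment that raises initial uptake shows up as a smaller k1.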
A 40% reduction in oil uptake during frying was observed when thermally blanched frozen potatoes were replaced by high-pressure-blanched frozen potatoes. This may be due to a reduction in moisture content caused by compression and decompression (Rastogi and Niranjan, 1998), as well as to the prevalence of different oil mass transfer mechanisms (Knorr, 1999).
Solid-Liquid Extraction
The application of high pressure leads to a rearrangement of tissue architecture, which results in increased extractability even at ambient temperature. The extraction of caffeine from coffee using water could be increased by the application of high pressure as well as by an increase in temperature (Knorr, 1999). The effect of high pressure and temperature on caffeine extraction was compared with extraction at 100C and atmospheric pressure (Fig. 7). The caffeine yield was found to increase with temperature at a given pressure. The combination of very high pressures and lower temperatures could become a viable alternative to current industrial practice.
Figure 5 Distribution of (a, b) relative moisture and (c, d) solid content as well as (e, f) cell disintegration index.
|
<urn:uuid:759ff0b9-9458-45d0-8deb-368c01089695>
|
CC-MAIN-2013-20
|
http://www.redorbit.com/news/business/815480/opportunities_and_challenges_in_high_pressure_processing_of_foods/
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704132298/warc/CC-MAIN-20130516113532-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.924161
| 14,546
| 2.5625
| 3
|
[
"climate"
] |
{
"climate": [
"carbon dioxide",
"co2",
"temperature rise"
],
"nature": []
}
|
{
"strong": 2,
"weak": 1,
"total": 3,
"decision": "accepted_strong"
}
|
China has worked actively and seriously to tackle global climate change and build capacity to respond to it. We believe that every country has a stake in dealing with climate change and every country has a responsibility for the safety of our planet. China is at a critical stage of building a moderately prosperous society on all fronts, and a key stage of accelerated industrialization and urbanization. Yet, despite the huge task of developing the economy and improving people's lives, we have joined global actions to tackle climate change with the utmost resolve and a most active attitude, and have acted in line with the principle of common but differentiated responsibilities established by the United Nations. China voluntarily stepped up efforts to eliminate backward capacity in 2007, and has since closed a large number of heavily polluting small coal-fired power plants, small coal mines and enterprises in the steel, cement, paper-making, chemical and printing and dyeing sectors. Moreover, in 2009, China played a positive role in the success of the Copenhagen conference on climate change and the ultimate conclusion of the Copenhagen Accord. In keeping with the requirements of the Copenhagen Accord, we have provided the Secretariat of the United Nations Framework Convention on Climate Change with information on China's voluntary actions on emissions reduction and joined the list of countries supporting the Copenhagen Accord.
The targets released by China last year for greenhouse gas emissions control require that by 2020, CO2 emissions per unit of GDP should go down by 40%-45% from the 2005 level, non-fossil energy should make up about 15% of primary energy consumption, and forest coverage should increase by 40 million hectares and forest stock volume by 1.3 billion cubic meters, both from the 2005 level. The measure to lower energy consumption alone will help save 620 million tons of standard coal over the next five years, equivalent to a reduction of 1.5 billion tons of CO2 emissions. This is what China has done to step up the shift in its economic development mode and economic restructuring. It contributes positively to Asia's and the global effort to tackle climate change.
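The two figures quoted here are internally consistent: dividing the avoided CO2 by the saved standard coal backs out an emission factor of roughly 2.4 tonnes of CO2 per tonne of coal equivalent, in line with commonly used coal factors. A quick check:

```python
coal_saved_t = 620e6    # tonnes of standard coal equivalent (figure from the speech)
co2_avoided_t = 1.5e9   # tonnes of CO2 (figure from the speech)

factor = co2_avoided_t / coal_saved_t
print(f"implied emission factor: {factor:.2f} t CO2 per t standard coal")
# ~2.42 t CO2/tce, within the range of commonly used coal emission factors
```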
Ladies and Gentlemen,
Green and sustainable development represents the trend of our times. To achieve green and sustainable development in Asia and beyond, and to ensure the sustainable development of resources and the environment such as the air, fresh water, the ocean, land and forests, which are all vital to human survival, Asian countries should strive to balance economic growth, social development and environmental protection. To that end, we wish to work with other Asian countries and make further efforts in the following six areas.
First, shift development mode and strive for green development. Accelerating the shift in economic development mode and economic restructuring provides an important precondition for our efforts to actively respond to climate change, achieve green development and secure the sustainable development of the population, resources and the environment. It is the shared responsibility of governments and enterprises of all countries in Asia and around the world. We should actively promote a conservation culture and raise awareness of environmental protection. We need to make sure that the concept of green development, green consumption and a green lifestyle, and the commitment to taking good care of Planet Earth, our common home, are embedded in the life of every citizen in society.
Second, value the importance of science and technology as the backing of innovation and development. We Asian countries have a long way to go before we reach an advanced level in using high technology to reduce energy consumption and improve energy and resource efficiency. Yet this means we have huge potential to catch up. It is imperative for us to quicken the pace of low-carbon technology development, promote energy-efficient technologies and raise the proportion of new and renewable energies in our energy mix so as to provide a strong scientific and technological backing for green and sustainable development of Asian countries. As for developed countries, they should facilitate technology transfer and share technologies with developing countries on the basis of proper protection of intellectual property rights.
Third, open wider to the outside world and realize harmonious development. In such an open world as ours, development of Asian countries and development of the world are simply inseparable. It is important that we open our markets even wider, firmly oppose and resist protectionism in all forms and uphold a fair, free and open global trade and investment system. At the same time, we should give full play to the role of regional and sub-regional dialogue and cooperation mechanisms in Asia to promote harmonious and sustainable development of Asia and the world.
Fourth, strengthen cooperation and sustain common development. Pragmatic, mutually beneficial and win-win cooperation is a sure choice of all Asian countries if we are to realize sustainable development. No country could stay away from or manage to meet on its own severe challenges like the international financial crisis, climate change and energy and resources security. We should continue to strengthen macro-economic policy coordination and vigorously promote international cooperation in emerging industries, especially in energy conservation, emissions reduction, environmental protection and development of new energy sources to jointly promote sustainable development of the Asian economy and the world economy as a whole.
Fifth, work vigorously to eradicate poverty and gradually achieve balanced development. A major root cause for the loss of balance in the world economy is the seriously uneven development between the North and the South. Today, 900 million people in Asia, or roughly one fourth of the entire Asian population, are living below the 1.25 dollars a day poverty line. We call for greater efforts to improve the international mechanisms designed to promote balanced development, and to scale up assistance from developed countries to developing countries, strengthen South-South cooperation, North-South cooperation and facilitate attainment of the UN Millennium Development Goals. This will ensure that sustainable development brings real benefits to poor regions, poor countries and poor peoples.
Sixth, bring forth more talents to promote comprehensive development. The ultimate goal of green and sustainable development is to improve people's living environment, better their lives and promote their comprehensive development. Success in this regard depends, to a large extent, on the emergence of talents with an innovative spirit. We need to build institutions, mechanisms and a social environment to help people bring out the best of their talents, and to intensify education and training of professionals of various kinds. This will ensure that as Asia achieves green and sustainable development, our people will enjoy comprehensive development.
Ladies and Gentlemen,
We demonstrated solidarity as we rose up together to the international financial crisis in 2009. Let us carry forward this great spirit, build up consensus, strengthen unity and cooperation and explore a path of green and sustainable development. This benefits Asia. It benefits the world, too.
In conclusion, I wish this annual conference of the Boao Forum for Asia a complete success.
|
<urn:uuid:648ee2b5-f8cd-4273-8ab0-29206d637638>
|
CC-MAIN-2013-20
|
http://news.xinhuanet.com/english2010/china/2010-04/11/c_13245754_2.htm
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700958435/warc/CC-MAIN-20130516104238-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.936942
| 1,357
| 2.96875
| 3
|
[
"climate",
"nature"
] |
{
"climate": [
"climate change",
"co2",
"greenhouse gas"
],
"nature": [
"conservation"
]
}
|
{
"strong": 4,
"weak": 0,
"total": 4,
"decision": "accepted_strong"
}
|
by Gerry Everding
St. Louis MO (SPX) Feb 12, 2013
Nominated early this year for recognition on the UNESCO World Heritage List, which includes such famous cultural sites as the Taj Mahal, Machu Picchu and Stonehenge, the earthen works at Poverty Point, La., have been described as one of the world's greatest feats of construction by an archaic civilization of hunters and gatherers.
Now, new research in the current issue of the journal Geoarchaeology, offers compelling evidence that one of the massive earthen mounds at Poverty Point was constructed in less than 90 days, and perhaps as quickly as 30 days - an incredible accomplishment for what was thought to be a loosely organized society consisting of small, widely scattered bands of foragers.
"What's extraordinary about these findings is that it provides some of the first evidence that early American hunter-gatherers were not as simplistic as we've tended to imagine," says study co-author T.R. Kidder, PhD, professor and chair of anthropology in Arts and Sciences at Washington University in St. Louis.
"Our findings go against what has long been considered the academic consensus on hunter-gather societies - that they lack the political organization necessary to bring together so many people to complete a labor-intensive project in such a short period."
Co-authored by Anthony Ortmann, PhD, assistant professor of geosciences at Murray State University in Kentucky, the study offers a detailed analysis of how the massive mound was constructed some 3,200 years ago along a Mississippi River bayou in northeastern Louisiana.
Based on more than a decade of excavations, core samplings and sophisticated sedimentary analysis, the study's key assertion is that Mound A at Poverty Point had to have been built in a very short period because an exhaustive examination reveals no signs of rainfall or erosion during its construction.
"We're talking about an area of northern Louisiana that now tends to receive a great deal of rainfall," Kidder says. "Even in a very dry year, it would seem very unlikely that this location could go more than 90 days without experiencing some significant level of rainfall. Yet, the soil in these mounds shows no sign of erosion taking place during the construction period. There is no evidence from the region of an epic drought at this time, either."
Part of a much larger complex of earthen works at Poverty Point, Mound A is believed to be the final and crowning addition to the sprawling 700-acre site, which includes five smaller mounds and a series of six concentric C-shaped embankments that rise in parallel formation surrounding a small flat plaza along the river. At the time of construction, Poverty Point was the largest earthworks in North America.
Built on the western edge of the complex, Mound A covers about 538,000 square feet [roughly 50,000 square meters] at its base and rises 72 feet above the river. Its construction required an estimated 238,500 cubic meters - about eight million bushel baskets - of soil to be brought in from various locations near the site. Kidder figures it would take a modern, 10-wheel dump truck about 31,217 loads to move that much dirt today.
"The Poverty Point mounds were built by people who had no access to domesticated draft animals, no wheelbarrows, no sophisticated tools for moving earth," Kidder explains. "It's likely that these mounds were built using a simple 'bucket brigade' system, with thousands of people passing soil along from one to another using some form of crude container, such as a woven basket, a hide sack or a wooden platter."
To complete such a task within 90 days, the study estimates it would require the full attention of some 3,000 laborers. Assuming that each worker may have been accompanied by at least two other family members, say a wife and a child, the community gathered for the build must have included as many as 9,000 people, the study suggests.
"Given that a band of 25-30 people is considered quite large for most hunter-gatherer communities, it's truly amazing that this ancient society could bring together a group of nearly 10,000 people, find some way to feed them and get this mound built in a matter of months," Kidder says.
Soil testing indicates that the mound is located on top of land that was once low-lying swamp or marsh land - evidence of ancient tree roots and swamp life still exists in undisturbed soils at the base of the mound. Tests confirm that the site was first cleared for construction by burning and quickly covered with a layer of fine silt soil. A mix of other heavier soils then were brought in and dumped in small adjacent piles, gradually building the mound layer upon layer.
As Kidder notes, previous theories about the construction of most of the world's ancient earthen mounds have suggested that they were laid down slowly over a period of hundreds of years, involving small contributions of material from many different people spanning generations of a society. While this may be the case for other earthen structures at Poverty Point, the evidence from Mound A offers a sharp departure from this accretional theory. Kidder's home base in St. Louis is just across the Mississippi River from one of America's best-known ancient earthen structures, Monks Mound at Cahokia, Ill. He notes that Monks Mound was built many centuries later than the mounds at Poverty Point by a civilization that was much more reliant on agriculture, a far cry from the hunter-gatherer group that built Poverty Point. Even so, Mound A at Poverty Point is much larger than almost any other mound found in North America; only Monks Mound at Cahokia is larger.
"We've come to realize that the social fabric of these socieites must have been much stronger and more complex that we might previously have given them credit. These results contradict the popular notion that pre-agricultural people were socially, politically, and economically simple and unable to organize themselves into large groups that could build elaborate architecture or engage in so-called complex social behavior," Kidder says.
"The prevailing model of hunter-gatherers living a life 'nasty, brutish and short' is contradicted and our work indicates these people were practicing a sophisticated ritual/religious life that involved building these monumental mounds."
Washington University in St. Louis
|
<urn:uuid:a5058d3c-2691-4aef-862f-88a3935a760d>
|
CC-MAIN-2013-20
|
http://www.terradaily.com/reports/Archaic_Native_Americans_built_massive_Louisiana_mound_in_less_than_90_days_999.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.966482
| 1,459
| 2.9375
| 3
|
[
"climate"
] |
{
"climate": [
"drought"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
By Pauline Hammerbeck
It's been a doozy of a wildfire season (Colorado's most destructive ever), leaving homeowners wondering what safety measures they can put in place to stave off flames in the event of a fire in their own neighborhood.
Landscaping, it turns out, can be an important measure in wildfire protection.
But fire-wise landscaping isn't just something for those dwelling on remote Western hilltops. Brush, grass and forest fires occur nearly everywhere in the United States, says the National Fire Protection Association. Here's how your landscaping can help keep you safe.
Create 'defensible' space
Most homes that burn during a wildfire are ignited by embers landing on the roof, gutters, and on decks and porches. So your first point of action should be creating a defensible space, a buffer zone around your home, to reduce sources of fuel.
Start by keeping the first 3 to 5 feet around your home free of all flammable materials and vegetation: plants, shrubs, trees and grasses, as well as bark and other organic mulches should all be eliminated (a neat perimeter of rock mulch or a rock garden can be a beautiful thing). Maintenance is also important:
- Clear leaves, pine needles and other debris from roofs, gutters and eaves
- Cut back tree branches that overhang the roof
- Clear debris from under decks, porches and other structures
Moving farther from the house, you might consider adding hardscaping - driveways, patios, walkways, gravel paths, etc. These features add visual interest, but they also maintain a break between vegetation and your home in the event of a fire. Some additional tasks to consider in the first 100 feet surrounding your home:
- Thin out trees and shrubs (particularly evergreens) within 30 feet
- Trim low tree branches so they're a minimum of 6 feet off the ground
- Mow lawn regularly and dispose of clippings and other debris promptly
- Move woodpiles to a space at least 30 feet from your home
Use fire-resistant plants
Populating your landscape with plants that are resistant to fire can also be an important tactic. Look for low-growing plants that have thick leaves (a sign that they hold water), extensive root systems and the ability to withstand drought.
This isn't as limiting as it sounds. Commonly used hostas, butterfly bushes and roses are all good choices. And there are plenty of fire-resistant plant lists to give you ideas on what to pick.
Where and how you plant can also have a dramatic effect on fire behavior. The plants nearest your home should be smaller and more widely spaced than those farther away.
Be sure to use a variety of plant types, which reduces disease and keeps the landscape healthy and green. Plant in small clusters - create a garden island, for instance, by surrounding a group of plantings with a rock perimeter - and use rock mulch to conserve moisture.
Maintain accessible water sources
Wildfires present a special challenge to local fire departments, so it's in your interest to be able to access or maintain an emergency water supply - particularly if you're in a remote location.
At a minimum, keep 100 feet of garden hose attached to a spigot (if your water comes from a well, consider an emergency generator to operate the pump during a power failure). But better protection can come from the installation of a small pond, cistern or, if budget allows, a swimming pool.
Good planning and a bit of elbow grease have a big hand in wildfire safety. In a year with record heat and drought, looking over your landscape with a firefighter's eye can offer significant peace of mind.
- Are You Properly Insured for Your Real Estate?
- The Ins and Outs of Homeowner's Insurance
- Tips for Fire Safety in Your Home
Guest blogger Pauline Hammerbeck is an editor for the Allstate Blog, which helps people prepare for the unpredictability of life.
Note: The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinion or position of Zillow.
|
<urn:uuid:dbe77f52-384c-4c40-a487-84aae16a1d76>
|
CC-MAIN-2013-20
|
http://www.gloucestertimes.com/real_estate_news/x2068758245/How-to-Landscape-Your-Home-for-Fire-Safety
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697380733/warc/CC-MAIN-20130516094300-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
en
| 0.939375
| 854
| 2.578125
| 3
|
[
"climate"
] |
{
"climate": [
"drought"
],
"nature": []
}
|
{
"strong": 1,
"weak": 0,
"total": 1,
"decision": "accepted_strong"
}
|
Upland Bird Regional Forecast
When considering upland game population levels during the fall hunting season, two important factors impact population change. First is the number of adult birds that survived the previous fall and winter and are considered viable breeders in the spring. The second is the reproductive success of this breeding population. Reproductive success consists of nest success (the number of nests that successfully hatched) and chick survival (the number of chicks recruited into the fall population). For pheasant and quail, annual population turnover is relatively high; therefore, the fall population is more dependent on reproductive success than breeding population levels. For grouse (prairie chickens), annual population turnover is not as rapid although reproductive success is still the major population regulator and important for good hunting. In the following forecast, breeding population and reproductive success of pheasants, quail, and prairie chickens will be discussed. Breeding population data were gathered during spring breeding surveys for pheasants (crow counts), quail (whistle counts), and prairie chickens (lek counts). Data for reproductive success were collected during late summer roadside surveys for pheasants and quail. Reproductive success of prairie chickens cannot be easily assessed using the same methods because they generally do not associate with roads like the other game birds.
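The relationship described here, a fall population built from surviving adults plus recruits driven by nest success and chick survival, can be sketched as a toy model. All parameter values below are illustrative assumptions, not KDWPT survey estimates:

```python
def fall_population(breeders, hen_fraction, nest_success,
                    chicks_per_nest, chick_survival, adult_survival):
    """Toy decomposition of a fall upland-bird population: surviving adults
    plus recruits. All parameter values are illustrative assumptions."""
    recruits = breeders * hen_fraction * nest_success * chicks_per_nest * chick_survival
    return breeders * adult_survival + recruits

normal = fall_population(1000, 0.5, 0.40, 10, 0.50, 0.70)
drought = fall_population(1000, 0.5, 0.25, 10, 0.30, 0.70)
print(f"normal year: {normal:.0f} birds, drought year: {drought:.0f} birds")
```

Cutting nest success and chick survival, as drought does, shrinks the modeled fall flock far more than a comparable change in breeder numbers would, which is why the surveys emphasize reproductive success.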
Kansas experienced extreme drought this past year. Winter weather was mild, but winter precipitation is important for spring vegetation, which can impact reproductive success, and most of Kansas did not get enough winter precipitation. Pheasant breeding populations showed significant reductions in 2012, especially in primary pheasant range in western Kansas. Spring came early and hot this year, but also included fair spring moisture until early May, when the precipitation stopped, and Kansas experienced record heat and drought through the rest of the reproductive season. Early nesting conditions were generally good for prairie chickens and pheasants. However, the primary nesting habitat for pheasants in western Kansas is winter wheat, and in 2012, Kansas had one of the earliest wheat harvests on record. Wheat harvest can destroy nests and very young broods. The early harvest likely lowered pheasant nest and early brood success. The intense heat and lack of rain in June and July resulted in a decrease in brooding cover and insect populations, causing lower chick survival for all upland game birds.
Because of drought, all counties in Kansas were opened to Conservation Reserve Program (CRP) emergency haying or grazing. CRP emergency haying requires that hayed fields leave at least 50 percent of the field in standing grass cover. CRP emergency grazing requires that 25 percent of the field (or contiguous fields) be left ungrazed, or that grazing occur at 75 percent of normal stocking rates across the entire field. Many CRP fields, including Walk In Hunting Areas (WIHA), may be affected across the state. WIHA property is privately-owned land open to the public for hunting access. Kansas has more than one million acres of WIHA. Often, older stands of CRP grass are in need of disturbance, and haying and grazing can improve habitat for the upcoming breeding season and may ultimately be beneficial if weather is favorable.
Due to continued drought, Kansas will likely experience a below-average upland game season this fall. For those willing to hunt hard, there will still be pockets of decent bird numbers, especially in the northern Flint Hills and northcentral and northwestern parts of the state. Kansas has approximately 1.5 million acres open to public hunting (wildlife areas and WIHA combined). The regular opening date for the pheasant and quail seasons will be Nov. 10 for the entire state. The previous weekend will be designated for the special youth pheasant and quail season. Youth participating in the special season must be 16 years old or younger and accompanied by a non-hunting adult who is 18 or older. All public wildlife areas and WIHA tracts will be open for public access during the special youth season. Please consider taking a young person hunting this fall, so they might have the opportunity to develop a passion for the outdoors that we all enjoy.
PHEASANT - Drought in 2011 and 2012 has taken its toll on pheasant populations in Kansas. Pheasant breeding populations dropped by nearly 50 percent or more across pheasant range from 2011 to 2012, resulting in fewer adult hens in the population to start the 2012 nesting season. The lack of precipitation has resulted in less cover and fewer insects, both needed for good pheasant reproduction. Additionally, winter wheat serves as a major nesting habitat for pheasants in western Kansas, and a record-early wheat harvest this summer likely destroyed many nests and young broods. Then the hot, dry weather set in from May to August, the primary brood-rearing period for pheasants. Pheasant chicks need good grass and weed cover and robust insect populations to survive. Insufficient precipitation and lack of habitat and insects throughout the state's primary pheasant range resulted in limited production. This will reduce hunting prospects compared to recent years. However, some good opportunities still exist to harvest roosters in the Sunflower State, especially for those willing to work for their birds. Though the drought has taken its toll, Kansas still holds a pheasant population that should produce a harvest among the top three or four major pheasant states this year.
The best areas this year will likely be pockets of northwest and northcentral Kansas. Populations in southwest Kansas were hit hardest by the 2011-2012 drought (72 percent decline in breeding population), and a very limited amount of production occurred this season due to continued drought and limited breeding populations.
QUAIL - The bobwhite breeding population in 2012 was generally stable or improved compared to 2011. Areas in the northern Flint Hills and parts of northeast Kansas showed much-improved productivity this year. Much of eastern Kansas has seen consistent declines in quail populations in recent decades. After many years of depressed populations, this year's rebound in quail reproduction in eastern Kansas is welcome, but overall populations are still below historic averages. The best quail hunting will be found throughout the northern Flint Hills and parts of central Kansas. Prolonged drought undoubtedly impacted production in central and western Kansas.
PRAIRIE CHICKEN - Kansas is home to greater and lesser prairie chickens. Both species require a landscape of predominantly native grass. Lesser prairie chickens are found in westcentral and southwestern Kansas in native prairie and nearby stands of native grass within the Conservation Reserve Program (CRP). Greater prairie chickens are found primarily in the tallgrass and mixed-grass prairies in the eastern one-third and northern one-half of the state.
The spring prairie chicken lek survey indicated that most populations remained stable or declined from last year. Declines were likely due to extreme drought throughout 2011. Areas of northcentral and northwest Kansas fared the best, while areas in southcentral and southwest Kansas, where drought was most severe, experienced the sharpest declines. Many areas in the Flint Hills were not burned this spring due to drought. This resulted in far more residual grass cover and much-improved nesting conditions compared to recent years. There have been some reports of prairie chicken broods in these areas, and hunting will likely be somewhat improved compared to recent years.
Because of recent increases in prairie chicken (both species) populations in northwest Kansas, regulations have been revised this year. The early prairie chicken season (Sept. 15-Oct. 15) and two-bird bag limit has been extended into northwest Kansas. The northwest unit boundary has also been revised to include areas north of U.S. Highway 96 and west of U.S. Highway 281. Additionally, all prairie chicken hunters are now required to purchase a $2.50 prairie chicken permit. This permit will allow KDWPT to better track hunters and harvest, which will improve management activities. Both species of prairie chicken are of conservation concern and the lesser prairie chicken is a candidate species for federal listing under the Endangered Species Act.
This region has 11,809 acres of public land and 339,729 acres of WIHA open to hunters this fall.
Pheasant - Spring breeding populations declined almost 50 percent from 2011 to 2012, reducing fall population potential. Early nesting conditions were decent due to good winter wheat growth, but early wheat harvest and severe heat and drought through the summer reduced populations. While this resulted in a significant drop in pheasant numbers, the area will still have the highest densities of pheasants this fall compared to other areas in the state. Some counties, such as Graham, Rawlins, Decatur, and Sherman, showed the highest relative densities of pheasants during summer brood surveys. Much of the cover will be reduced compared to previous years due to drought and the resulting emergency haying and grazing of CRP fields. Hunting opportunities will also be reduced compared to recent years, and harvest will likely be below average.
Quail – Populations in this region have been increasing in recent years, although the breeding population declined slightly this year. This area is at the extreme northwestern edge of bobwhite range in Kansas, and densities are relatively low compared to central Kansas. Some counties – such as Graham, Rawlins, and Decatur – will provide hunting opportunities for quail.
Prairie Chicken – Prairie chicken populations have expanded in both numbers and range within the region over the past 20 years. The best hunting opportunities will be found in the central and southeastern portions of the region in native prairies and nearby CRP grasslands. Spring lek counts in that portion of the region were slightly lower than last year, and nesting conditions were only fair this year. Extreme drought likely impaired chick survival.
The Smoky Hills region has 75,576 acres of public land and 311,182 acres of WIHA open to hunters this fall.
Pheasant – The Smoky Hills breeding population dropped about 40 percent from 2011 to 2012, reducing overall fall population potential. While nesting conditions were fair due to good winter wheat growth, drought and an early wheat harvest limited the number of young recruited into the fall population. Certain areas had decent brood production, including portions of Mitchell, Rush, Rice, and Cloud counties. Across the region, hunting opportunities will likely be below average and clearly reduced from recent years. CRP was opened to emergency haying and grazing, reducing available cover.
Quail – Breeding populations increased nearly 60 percent from 2011 to 2012, boosting fall population potential. However, drought conditions were severe, likely impairing nesting and brood success. There are reports of fair quail numbers in certain areas throughout the region. Quail populations in northcentral Kansas are naturally spotty due to habitat characteristics. Some areas, such as Cloud County, showed good potential, while areas along the western edge of the region did not fare as well.
Prairie Chicken – Greater prairie chickens occur throughout the Smoky Hills in large areas of native rangeland and some CRP. This region includes some of the highest densities and greatest hunting opportunities in the state for greater prairie chickens. Spring counts indicated that numbers were stable or slightly reduced from last year. Much of the rangeland cover is significantly reduced due to drought, which likely impaired production, resulting in reduced fall hunting opportunities.
The Northeast region has 60,559 acres of public land and 54,170 acres of WIHA open to hunters this fall.
Pheasant – Spring crow counts this year showed a significant increase in breeding populations of pheasants. While this increase is welcome, this region was nearing all-time lows in 2011. Pheasant densities across the region are still low, especially compared to areas in western Kansas. Good hunting opportunities will exist in only a few pockets of good habitat.
Quail – Breeding populations stayed about the same as last year, and some quail were detected during the summer brood survey. The long-term trend for this region has been declining, largely due to unfavorable weather and degrading habitat, but overall numbers increased this year. Hunting opportunities for quail will be improved this fall compared to recent years in this region. The best areas will likely be in Marshall and Jefferson counties.
Prairie Chickens – Very little prairie chicken range occurs in this region, and opportunities are limited. The best areas are in the western edges of the region, in large areas of native rangeland.
The Southeast region has 80,759 acres of public land and 28,047 acres of WIHA open to hunters this fall.
Pheasant – This region is outside the primary pheasant range and has very limited hunting. A few birds can be found in the northwestern portion of the region.
Quail – Breeding populations were relatively stable from 2011 to 2012 in this region, although long-term trends have been declining. In the last couple of years, quail populations throughout much of the region have been increasing. Counties that showed relatively higher numbers include Coffey, Osage, and Wilson. However, populations remain far below historic levels across the bulk of the region due to extreme habitat degradation.
Prairie Chicken – Greater prairie chickens occur in the central and northwest parts of this region in large areas of native rangeland. Breeding population densities were up nearly 40 percent from last year, and opportunities may increase accordingly. However, populations have been in consistent decline over the long term. Infrequent burning has allowed woody encroachment of native grasslands in the area, gradually reducing the amount of suitable habitat.
The Flint Hills region has 128,371 acres of public land and 63,069 acres of WIHA open to hunters this fall.
Pheasant – This region is on the eastern edge of pheasant range in Kansas and well outside the primary range. Pheasant densities have always been relatively low throughout the Flint Hills. Spring breeding populations were down nearly 50 percent, and reproduction was limited this summer. The best pheasant hunting will be on the northwestern edge of this region in Marion and Dickinson counties.
Quail – This region contains some of the highest densities of bobwhite in Kansas. The breeding population in this region increased 25 percent compared to 2011, and the long-term trend (since 1998) has been stable, thanks to steadily increasing populations over the last four or five years. High reproductive success was reported in the northern half of this region, and some of the best opportunities for quail hunting will be found in the northern Flint Hills this year. In the south, Cowley County showed good numbers of quail this summer.
Prairie Chickens – The Flint Hills is the largest intact tallgrass prairie left in North America, and it has served as a core habitat for greater prairie chickens for many years. Since the early 1980s, widespread annual range burning has consistently reduced nest success in the area, and prairie chicken numbers have declined as a result. Because of the drought this spring, many areas that are normally burned annually were left unburned, leaving more residual grass cover for nesting and brood rearing. There are some good reports of prairie chicken broods, and hunting opportunities will likely increase throughout the region this year.
The Southcentral region has 19,534 acres of public land and 73,341 acres of WIHA open to hunters this fall.
Pheasant – The breeding population declined about 40 percent from 2011 to 2012. Two years of prolonged drought and very poor vegetation conditions resulted in poor reproductive success this year. All summer indices showed a depressed pheasant population in this region, especially compared to other regions. Some of the relatively better counties in this area will be Reno, Pawnee, and Pratt, although these counties have not been immune to recent declines. There will likely be few good hunting opportunities this fall.
Quail – The breeding population dropped over 30 percent this year from 2011, although long-term trends (since 1998) have been stable in this region. This region generally has some of the highest quail densities in Kansas, but prolonged drought and reduced vegetation have caused significant declines in recent years. Counties such as Reno, Pratt, and Stafford will likely offer the best opportunities in the region. While populations may be down compared to recent years, this region will continue to provide fair hunting opportunities for quail.
Prairie Chicken – This region is almost entirely occupied by lesser prairie chickens. The breeding population declined nearly 50 percent from 2011 to 2012. Reproductive conditions were poor due to extreme drought and heat over the last two years, and production was limited. The best hunting opportunities will likely be in the sand prairies south of the Arkansas River.
The Southwest region has 2,904 acres of public land and 186,943 acres of WIHA open to hunters this fall.
Pheasant – The breeding population plummeted more than 70 percent in this region from 2011 to 2012. Last year was one of the worst on record for pheasant reproduction. However, last fall there were some carry-over roosters (second-year birds) from a record-high season in 2010. Those carry-over birds are mostly gone now, which will hurt hunting opportunities this fall. Although reproduction was slightly improved from 2011, chick recruitment was still fair to below average this summer due to continued extreme drought. Moreover, there were not enough adult hens in the population to support a significant rebound. Generally, hunting opportunity will remain well below average in this region. Haskell and Seward counties showed some improved reproductive success, especially compared to other counties in the region.
Quail – The breeding population in this region tends to be highly variable depending on available moisture and the resulting vegetation. The region experienced an increase in breeding populations from 2011 to 2012, although 2011 was a record low for the region. While drought likely held back production, the weather was better than last year, and some reproduction occurred. Indices are still well below average for the region. There will be some quail hunting opportunities in the region, although good areas will be sparse.
Prairie Chicken – While breeding populations in the eastern parts of this region were generally stable or increasing, areas in the extreme western and southwestern portions (Cimarron National Grasslands) saw declines of nearly 30 percent last year and 65 percent this year. Drought remained extreme in this region, and reproductive success was likely very low. Hunting opportunities in this region will be extremely limited this fall.