Millions of Americans are now routinely exposed to unhealthy plumes of wildfire smoke that can waft thousands of miles across the country, scientists have warned. Wildfires throw soot and ash into the air, which then carries minuscule particles that can be inhaled by people many miles away, aggravating a variety of health conditions. The number of people in the US exposed to unhealthy levels of these particulates from wildfires at least one day a year has increased 27-fold over the last decade, a new study found, with 25 million people in 2020 alone breathing in potentially toxic air from fires.

Pockets of deeply unhealthy air have emerged mainly in the US west, the staging ground for wildfires of increasing intensity that have been fueled by years of fire suppression and global heating, priming forests to burn. Six of the seven largest wildfires in California’s recorded history have occurred since 2020.

Wildfire smoke can result in the closure of schools, the postponement of flights and even cause cycling races and Pearl Jam concerts to be canceled. But its most pervasive impact is a regression in air quality barely seen since the advent of the Clean Air Act in 1970, which helped lift dangerous, choking smog conditions from many polluted US cities.

“We are seeing the undoing of a lot of that clean air progress, especially in the west,” said Marshall Burke, a scientist at Stanford University and co-author of the study published in Environmental Science and Technology. “There’s been really dramatic increases in wildfire smoke as air pollution, in some places fully reversing the impact of the Clean Air Act. It’s been remarkably quick. Our air pollution regulations are not designed to deal with this. It’s a worrying problem.”

[Chart: series of US maps showing the increase in PM2.5 pollution from smoke, specifically in the west.]

The new study is based on a model that calculates how wildfire smoke has raised background pollution levels in locations across the US. It measures the presence of PM2.5, tiny particles about one-thirtieth of the width of a human hair that can travel through the air and bury themselves deep in the lungs of people when inhaled.

Wildfire smoke has added about five micrograms of these particles per cubic meter of air to locations in the US west, on average, a sizable increase on national levels of about 10 micrograms from other sources of particulate pollution, such as the emissions from cars, trucks and power plants. Unlike these other sources, which are regulated by government, wildfire smoke is less predictable, reaches farther and is more egalitarian in whom it affects – the wealthy and white as well as poor people of color who are disproportionately exposed to pollution from nearby highways and factories.

[Photo: on 20 July 2021, the Met Life and Chrysler buildings were blanketed by a thick haze hanging over Manhattan caused by wildfires in the west of the US. Photograph: Julie Jacobson/AP]

“Wildfires produce an amazing amount of particulates that can travel thousands of miles, unlike other pollution,” said Burke. Last summer, New York experienced some of the worst air quality in the world due to smoke from wildfires several thousand miles away on the west coast of the US.

A decade ago, fewer than 500,000 people in the US were exposed to any days of an air quality index of 100 or above due to smoke, a level that is deemed unhealthy.
Now, Burke said, 5 million Americans are living in areas with such levels at least one day a year. “If you don’t live near a highway or power plant your air quality is likely to be fairly good, but incursion from wildfire smoke is changing that and there’s evidence this will increase,” he said. “Honestly, it was surprising to see how quickly these extreme exposures have gone up.” (The sketch at the end of this article shows how PM2.5 concentrations map onto that air quality index.)

[Chart: bar chart of the number of people in the US exposed to at least one day of dangerous levels of PM2.5.]

The dangers posed by wildfire smoke are of increasing concern for experts in various places around the world – a summer of intense wildfires in Spain, France and Portugal has resulted in Europe’s highest wildfire emissions in 15 years. The probability of catastrophic wildfire events around the globe will increase by 30% by the end of the century even if planet-heating gases are rapidly cut, according to the most recent Intergovernmental Panel on Climate Change report.

“As the globe warms, wildfires and associated air pollution are expected to increase, even under a low emissions scenario. In addition to human health impacts, this will also affect ecosystems as air pollutants settle from the atmosphere to Earth’s surface,” said Petteri Taalas, secretary general of the World Meteorological Organization. “We have seen this in the heatwaves in Europe and China this year when stable high atmospheric conditions, sunlight and low wind speeds were conducive to high pollution levels.”

Research has linked wildfire smoke to the worsening of several conditions. Fierce wildfires in California in 2020 caused people to inhale smoke that raised their risk of heart attacks by up to 70%, a study found, with the smoke causing an estimated 3,000 deaths in people older than 65. A separate study published in May found that people living within 50km (31 miles) of wildfires over the past decade had a 10% higher incidence of brain tumors and a 5% higher chance of developing lung cancer compared with people living farther away. Breathing in wildfire smoke while pregnant, meanwhile, raises the risk of premature birth and even worsens outcomes for people who contract Covid-19.

Francesca Dominici, a Harvard University professor who led the research on the link between wildfire smoke and Covid, said Burke’s new study is “well validated” and an “exciting area of research”. “The results are interesting and concerning,” Dominici said, adding that there is “emergent evidence” that PM2.5 from smoke is more toxic than particles from other sources.

George Thurston, an environmental health scientist at the NYU School of Medicine, said that there is still more to be learned about the impact of wildfire smoke, with some research suggesting fossil fuel combustion is in fact more harmful, but that the new study is an “important addition” to estimates of exposure to wildfire smoke. “We need studies like this to assess how big a risk this is, because the Environmental Protection Agency exempts these fires from air quality standards,” Thurston said. “This sort of work helps us to work out if new standards are required.”

Burke said the threat of smoke became obvious to many in California in 2020 when the skies over the San Francisco Bay Area turned orange. Some of the wealthiest neighborhoods in the world have suffered from poor indoor air quality due to smoke, alleviated only by air filtration. “The sun never came up in the Bay Area, which really brought home this is a different era we are living in,” Burke said.
“We naively thought we were safe in our homes but the health guidance is inadequate. In my own home I closed all the windows and doors, and yet I got a monitor and found the indoor air quality was appalling.”

Drastic cuts to greenhouse gases, better forest management where fuels are thinned or burned away in a controlled manner, and improved guidance to households will all be required to improve the situation, Burke said. “We shouldn’t think about wildfires just in terms of numbers of homes burned down but also how many people have been exposed to pollution, because there are huge impacts that we just aren’t thinking about,” he said.
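The exposure threshold used in the study is an air quality index (AQI) of 100, while smoke concentrations are described in micrograms per cubic meter of PM2.5. The two are linked by the EPA's piecewise-linear AQI formula; below is a minimal sketch of that conversion, using the pre-2024 PM2.5 breakpoint table and made-up example concentrations (both are assumptions for illustration, not values taken from the study).

```python
# Minimal sketch: convert a 24-hour PM2.5 concentration (ug/m^3) into an AQI value
# with the EPA's piecewise-linear formula. The breakpoints below are the pre-2024
# PM2.5 table, included here as an assumption for illustration.

PM25_BREAKPOINTS = [
    # (conc_low, conc_high, aqi_low, aqi_high)
    (0.0,    12.0,    0,  50),   # Good
    (12.1,   35.4,   51, 100),   # Moderate
    (35.5,   55.4,  101, 150),   # Unhealthy for sensitive groups
    (55.5,  150.4,  151, 200),   # Unhealthy
    (150.5, 250.4,  201, 300),   # Very unhealthy
    (250.5, 500.4,  301, 500),   # Hazardous
]

def pm25_to_aqi(conc: float) -> int:
    """Linear interpolation within the breakpoint bracket that contains conc."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= conc <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo)
    raise ValueError("concentration outside the AQI table")

# Hypothetical example: a ~10 ug/m^3 background day versus the same day with
# an extra ~30 ug/m^3 of wildfire smoke layered on top.
print(pm25_to_aqi(10.0))   # ~42  (well below the AQI-100 threshold)
print(pm25_to_aqi(40.0))   # ~112 (crosses into "unhealthy for sensitive groups")
```

With these breakpoints, the AQI-100 line corresponds to a 24-hour average of 35.4 micrograms per cubic meter, which is why a smoke plume layered on top of ordinary background pollution can push a location over the threshold for a day or more.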
Environmental Science
This is partly a celebration tour for the largest piece of climate-stabilizing legislation ever enacted in the U.S., but it's also a public relations exercise: Some Americans recall the drama around getting Sen. Joe Manchin (D-W.Va.) and then Sen. Kyrsten Sinema (I-Ariz.) to support the massive environmental, health care, energy and industrial policy bill, but many fewer have followed what happened after Biden dragged it across the finish line. Only 27% of Americans said they knew a great deal or good amount about the IRA in a July 13-23 Washington Post/University of Maryland poll, while 71% said they knew little or nothing about the law. That's pretty common with big laws, Ezra Klein said at The New York Times. "There is all this attention to the fight to pass a bill — the Affordable Care Act, the Trump tax cuts, the Inflation Reduction Act. And then the bill passes. And if the fight stops, attention just drops off a cliff." But while attention has been elsewhere, quite a bit has happened with the IRA since Biden signed it into law on Aug. 16, 2022. The law has capped insulin prices and out-of-pocket drug expenses for Medicare beneficiaries and beefed up corporate and high-income tax compliance, but the IRA's biggest goal is "to spur clean energy buildout on a scale that will bend the arc of U.S. greenhouse gas emissions," using components made in America, The Associated Press reported. "In less than a year it has prompted investment in a massive buildout of battery and EV manufacturing across the states." "The green energy portion of the bill" — already slated to be "the most aggressive effort to decarbonize the U.S. economy ever" — is "turning out to be even bigger than analysts thought at the time," Rick Newman wrote at Yahoo Finance. In fact, "the laughably misnamed and poorly understood Inflation Reduction Act" may end up being "the biggest sleeper event of the Biden presidency." Here are some ways the IRA has changed America, and the world, after one year. 1. Cut carbon emissions The U.S. pledged in the 2015 Paris Agreement to slash greenhouse gas emissions by half of 2005 levels before 2030. Former President Donald Trump withdrew the U.S. from the agreement two years later, and by the start of the Biden administration, analysts at Rhodium Group, an independent analytics firm, forecast that the U.S. would, at best, cut emissions by about a quarter. Thanks largely to the IRA, Rhodium said in its July 2023 report, the U.S. is on track to cut emissions by as much as 42% by 2030 and by 32% to 51% by 2035. That's "a meaningful departure from previous years' expectations for the U.S. emissions trajectory," Rhodium analysts wrote. Other analyses reached similar conclusions. A study by Princeton University researchers forecast that the IRA would cut U.S. greenhouse gas emissions by 41% by 2030. A group of academic and private-sector energy researchers reported in the journal Science in June 2023 that thanks to "key IRA provisions," there should be "economy-wide emissions reductions between 43% and 48% below 2005 levels by 2035." None of those projections are "enough to hit U.S. goals," AP notes, but they're certainly "a significant improvement." 2. Charged the electric vehicle market The best data we have that the IRA is working is with electric vehicles (EVs), Heatmap's Robinson Meyer said on The Ezra Klein Show podcast. The law seeks "to both encourage Americans to buy and use electric vehicles and then also to build a domestic manufacturing base for electric vehicles. 
And I think on both counts, the data's really good on electric vehicles." The IRA offers consumers tax credits of up to $7,500 for new electric vehicles that are priced affordably and substantially made in the U.S., and up to $4,000 for used EVs. Sales of electric vehicles rose 53% in the first quarter of 2023, compared with a year earlier, and EVs "now make up one in every 12 cars sold" in the U.S., the investment firm Schroders reported. Rhodium forecast that EVs could make up 33-66% of all light-duty vehicle sales in 2035, from about 6% in 2022. "Though there's uncertainty on just how fast the U.S. scales up renewable energy on the grid or EVs on the road, those levels of deployment would be meaningfully lower than what we're estimating in our modeling under otherwise the same conditions absent the IRA," Ben King, lead author of the Rhodium report, told Grist.

On the production side, "steel's going in the ground" on electric vehicle and EV battery factories, Meyer said. "The financing for those factories is locked down. It seems like they're definitely going to happen. They're permitted. Companies are excited about them. Large Fortune 500 automakers are confidently and with certainty planning for an electric vehicle future, and they're building the factories to do that in the United States," in both red and blue states. Since the IRA became law, "companies have announced at least 31 new battery manufacturing projects in the United States," former Biden White House economic adviser Brian Deese wrote in The New York Times, creating a pipeline that "amounts to 1,000 gigawatt-hours per year by 2030 — 18 times the energy storage capacity in 2021, enough to support the manufacture of 10 million to 13 million electric vehicles per year." (That arithmetic is unpacked in the sketch at the end of this article.) The U.S. battery production pipeline is now growing faster than in Europe or even China, Schroders added. "The incentives from the IRA have been so strong that corporates have been reallocating capital spend from Europe to the U.S."

3. Ramped up U.S. manufacturing

It's not just battery plants — about 80 clean energy manufacturing facilities have been announced since the IRA became law, "an investment equal to the previous seven years combined," AP reported, citing the American Clean Power Association. "It seems like every week there's a new factory facility somewhere" being announced, Jesse Jenkins, a professor at Princeton who has closely analyzed the IRA, told AP. "We've been talking about bringing manufacturing jobs back to America for my entire life. We're finally doing it, right? That's pretty exciting." "We will always look at the history of our industry in two eras now that the Inflation Reduction Act has passed," the before era and afterward, Scott Moskowitz at solar panel component manufacturer Qcells North America told the Times. "The IRA contains some of the most ambitious clean energy manufacturing incentives enacted anywhere in the world." The IRA also has the U.S. "building transmission lines and green hydrogen manufacturing facilities and trying to create a national network of electric vehicle chargers and on and on and on," Klein said on his podcast. "I mean, we're trying to do industrial physical transformation at a speed and scale unheralded in American history. This is bigger than anything we have done at this speed ever."
4. Helps homeowners and renters go electric

The IRA "is the largest clean energy investment America has ever made, with strategic incentives to make the transition to clean energy and a decarbonized life easy and financially smart," but it's also "full of incentives to help you and your family go electric," the electrification nonprofit Rewiring America said. In fact, you can "think of the IRA as a free electric bank account with your name on it, because that's what it is." The IRA has two main consumer-oriented programs, the Home Efficiency Rebates program and the Home Electrification and Appliance Rebates program. And households can access up to $14,000 or more to install heat pumps, insulation, solar panels, EV hookups, or energy-efficient appliances, depending on your household income, cost of your project, total energy savings — and whether your governor signs on, Greg Iacurci reported at CNBC. Already, "one state, Florida, has publicly signaled it doesn't intend to apply for its $346 million of allocated federal funds," and "it's unclear if other states will bow out as well," Iacurci said. If they do, their money will be allocated to states that opt in. The rebates, designed to be delivered at the point of sale from a retailer or contractor, should be available in late 2023 or 2024, the Energy Department estimates. Rewiring America has a list of the various rebates and upfront discounts — and an "IRA savings calculator" to help you figure out which ones you might qualify for, and at what amount.

5. Aims to build a green hydrogen market from scratch

The IRA's most generous subsidies involve hydrogen — "blue hydrogen," which is made from natural gas and envisioned as a way to capture and store carbon, and "green hydrogen," which is "produced by using clean energy, such as solar or wind power, to split water into two hydrogen atoms and one oxygen atom through a process called electrolysis," Schroders explained. "The IRA has introduced extremely generous subsidies for hydrogen, particularly for green hydrogen." The IRA subsidies for green hydrogen, which will be used as a fuel especially in industries that can't easily electrify, "are so generous that relatively immediately, it's going to have a negative cost to make green hydrogen. It will cost less than $0 to make green hydrogen," Heatmap's Meyer told the Times. "That is intentional because what needs to happen now is that green hydrogen moves into places where we're using natural gas," and right now there's no clean hydrogen economy set up to make that happen at scale. Green hydrogen is "one of these technologies on the cusp," Klein agreed, and the IRA hopes to "create a whole innovation and supply chain almost from scratch." Work on larger-scale projects has gotten off to a slow start, but "we are optimistic that progress will pick up in the latter half of the decade as electrolyser technology is further developed and proven," Schroders said. Jason Mortimer at EH2, which makes large and low-cost electrolyzers, told AP that "the IRA accelerates the implementation of hydrogen at scale by about four to five years," making the U.S. competitive with Europe.

6. Spurred allies and rivals to step up their efforts

As much as they support decarbonization efforts, "nations in Europe and elsewhere are rattled by the possibility that the United States might now capture an outsized portion of the global green energy economy, and some are planning muscular incentives of their own to stimulate home-grown technology," Yahoo Finance's Newman wrote.
On the plus side, "that could create a race to solve the problem, if only to capture the economic benefits." On the negative side, "many of the U.S.'s allies like Japan, Canada, and especially the European Union," see the IRA "less as long-awaited efforts to finally make good on promises of climate action and more as a threat to the ability of places like Europe to attract investment themselves," Alex Yablon said at Vox. "At the moment Biden wanted the transatlantic democratic alliance to come together for a common purpose and approach, wonks on either side of the ocean are sniping at each other." This isn't a winner-takes-all race, but the U.S. clearly has to work with global partners "to build a cooperative international framework around the IRA's investment incentives" and "build resilient energy supply chains, particularly for critical minerals," Deese wrote at the Times. "Our allies have little to fear and much to gain from working with the United States to expand incentives domestically to deploy clean energy technology because it must be deployed everywhere, and the IRA incentives will drive down the global cost of energy technologies" for everyone. The Inflation Reduction Act, at one year old, is still in its infancy, and much about how it changes the economy and environment are unknown — and unknowable. Some of the bets the government and industry are making could fail, or they could pay off wildly above expectations. Skeptical state and local governments could limit the law's efficacy. A future Republican president could try to undo key provisions. Permitting delays could thwart projects or delay their benefits. But decarbonizing the economy is an existential imperative, and time isn't on our side. "If the U.S. can start cutting down the emissions, steadily year over year, decade over decade, then we are on the right path to limit global warming," Aiguo Dai, a professor of atmospheric and environmental science at the University of Albany, told NPR News. "If we cut it too [slowly], it could be difficult to avoid catastrophic warming in the near future."
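Deese's pipeline numbers, quoted in section 2 above, can be checked with back-of-envelope arithmetic: 1,000 gigawatt-hours of annual cell production divided by a typical battery-pack size yields the 10 million to 13 million vehicles per year he cites. The pack sizes in the sketch below are assumed round figures, not numbers from the article.

```python
# Back-of-envelope check of the battery pipeline figure quoted above:
# 1,000 GWh/year of announced US cell production by 2030 (from the article),
# divided by an assumed average EV pack size.

annual_cell_production_gwh = 1_000   # from the article
pack_sizes_kwh = [75, 100]           # assumed typical pack sizes, not from the article

for pack_kwh in pack_sizes_kwh:
    vehicles_per_year = annual_cell_production_gwh * 1_000_000 / pack_kwh
    print(f"{pack_kwh} kWh packs -> ~{vehicles_per_year / 1e6:.1f} million EVs/year")

# -> ~13.3 million at 75 kWh and ~10.0 million at 100 kWh,
#    bracketing the 10-13 million vehicles per year quoted above
```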
Environmental Science
Japan's subtropical forests home to a newly discovered beetle species A new weevil species was discovered in Japan's pristine subtropical forests on Ishigaki Island and Yanbaru National Park in Okinawa. Renowned for their remarkable biodiversity, the Ryukyu Islands are a chain of subtropical islands distributed between mainland Japan and Taiwan that boast a relatively isolated evolutionary history, and are home to a distinctive and fascinating insect fauna. Researchers at the Okinawan Institute of Science and Technology (OIST) have been placing net traps to monitor insects on Okinawa Island since 2015, and have captured a wide range of insects, including beetles, flies, wasps and bees, which are preserved in ethanol, dried and stored in the OIST insect collection. The newly discovered beetle species, Acicnemis ryukyuana, was successfully identified through microscope analysis and dissection by OIST entomologist Jake H. Lewis, who works as Collection Manager in the OIST Environmental Science and Informatics Section. "When I arrived at OIST in 2022, I dove headfirst into the OIST weevil collection. As I closely examined them, this species immediately caught my eye. It clearly belonged to the genus Acicnemis, but was unlike anything else described from East Asia," recounts Lewis. "Its elongated scales and unique coloration set this species apart from other known Japanese species." The genus Acicnemis contains over 180 species, so confirming the discovery of a new species within this genus requires thorough examination of the existing literature and museum collections. As the "type specimens" (the original specimens used for species description) in the genus Acicnemis are housed in European and Japanese museums, Lewis had to reach out to several institutions, including the Kyushu University Museum (Japan), the Natural History Museum in London (UK), and the Senckenberg German Entomological Institute (Germany) in order to authenticate the status of Acicnemis ryukyuana as a novel species. Based on the current knowledge, Acicnemis ryukyuana is endemic to the Ryukyus. The entomologist named this species ryukyuana [from Ryukyu] and リュウキュウカレキゾウムシ [pronunciation: Ryuku-kareki-zoumushi, translation: "Ryukyu dead-tree weevil"] in Japanese to emphasize that it is an endemic element of the Ryukyu biodiversity. Weevils form one of the most diverse animal groups on the planet and generally feed on plants. Some weevil species are highly specialized and have a narrow range of plants they can feed on. The host plant(s) for this new species remains unknown, and Lewis hopes to conduct further field studies investigating this. The distinctive features of Acicnemis ryukyuana Acicnemis ryukyuana can be immediately recognized by the yellow bands on its shoulders, and distinct pattern of gray, black and yellow scales on its tough fore wings. Other unique features visible under the microscope include the long scales (hairs) on the back and the shape of the last segment of the leg. "Based on the unique set of features observed in this new species, A. ryukyuana appears to be closely related to some other species in southeast Asia, however DNA analyses will be required to confirm this," explains Lewis. "I was drawn to Okinawa as there are numerous undescribed weevil species in the region, unlike in Canada, my home country, where weevils have been much more thoroughly studied. Living in Okinawa and having Yanbaru National Park as a backyard is very exciting as it is home to many undescribed, endemic species," says Lewis. 
"The Ryukyu Islands offer an irresistible playground for taxonomists, rich in species which you only find here." The beetles' sensitivity to human presence Although OIST researchers distributed insect traps widely across Okinawa Island including heavily populated and disturbed areas, A. ryukyuana was only captured in a pristine, specially protected part of Yanbaru National Park. The new species was also collected in well-preserved subtropical forest areas on Ishigaki Island; and located by Lewis in the Kyushu University Museum collection. "Based on these collection locations, this weevil species appears to be very sensitive to human disturbance compared to other Acicnemis species commonly found in the Ryukyu Islands." "This newly discovered beetle might be considered a vulnerable, endemic element of the Ryukyu fauna, similar to the flight-less bird Okinawa rail, the Yanbaru long-armed scarab beetle and the Okinawa spiny rat," says Lewis. "I am sure that taxonomists, conservation biologists, and local naturalists in Okinawa will be interested in knowing that yet another remarkable species has been discovered in the Ryukyu Islands." The finding is published in The Coleopterists Bulletin. More information: The Coleopterists Bulletin (2023), DOI: 10.1649/0010-065X-77.2.185 Provided by Okinawa Institute of Science and Technology
Environmental Science
Washington's Volcanoes Are Experiencing Seismic Tremors from an Unlikely Source: Glaciers

Most people think of seismic activity as the result of movement along faults or of violent volcanic eruptions. But seismic events can have other causes, including floods, glaciers and even large crowds of excited fans—such as those at Taylor Swift's recent Seattle shows, whose enthusiastic reception caused seismic activity equivalent to a 2.3 magnitude earthquake. Decades ago, scientists who research seismic activity in the Washington Cascades recorded a number of small seismic events and eventually determined that they were caused by glacier movement. These events, called "glacier quakes," allow for important insight into seismic activity, patterns of glacier movement and even climate events.

For Washington state residents, seismic activity is nothing new. Washington is part of the Pacific Ring of Fire, an arc along the edge of the Pacific Ocean where tectonic plate interactions frequently lead to more earthquakes and volcanic eruptions. In 1980, Mount St. Helens erupted in southern Washington, killing 57 people in the most disastrous volcanic eruption in US history. Washington also falls within the Cascadia Subduction Zone, where a fault line between the Juan de Fuca Plate and North American Plate threatens a 9.0 magnitude or greater earthquake in the coming century.

In the Washington Cascades, glaciers are another frequent culprit of quakes. According to Seth Moran, a research seismologist at the USGS Cascades Volcano Observatory, when seismologists first set up seismometers on Mount Rainier decades ago, they were nervous when they saw small quakes reminiscent of the ones that preceded the eruption of Mount St. Helens. "However, we were eventually able to improve tracking and clearly determine that these quakes were coming from glaciers, not volcanic activity," Moran told GlacierHub. Using seismometers, researchers identified similar glacier quakes on Mount St. Helens and Mount Baker in the following years, and these have continued to the present day, primarily occurring in the summer months, "because glaciers are moving more in warmer months, so we're seeing more of the internal crackling then," said Moran.

Washington glaciers are not alone in triggering these glacier quakes. The Alaska Earthquake Center detects between 1,500 and 2,000 glacial quakes per year—some comparable to magnitude 3 earthquakes. Most of these occur in Southcentral Alaska, and these quakes follow the same seasonal pattern as the Washington glacier quakes. Similarly, many national parks and hiking sites have had growing numbers of visitors following a nationwide spike in outdoor recreation and the end of COVID travel restrictions over the past few years. More visitors means more people in areas where they might be able to feel some of these quakes, leading to recent media coverage of the glacier quakes. But how exactly do glaciers cause these earthquakes?
Jackie Caplan-Auerbach, a professor of geology at Western Washington University, located just west of Mount Baker, told GlacierHub that “anything that shakes the ground can be recorded by seismometers.” For instance, glacier lake outburst floods, which occur when water builds up behind a glacier dam until the pressure becomes so intense that it bursts through, also create seismic vibrations due to the large volume of water suddenly rushing downstream. In the case of glacier quakes in the Cascades, likely causes include crevasse formation—which occurs when the lower part of a glacier moves more quickly than the upper part, spreading the ice until a gash forms—and stick-slip behavior at the glacier ice-rock interface. Caplan-Auerbach elaborated on this behavior: “As the ice flows, it gets stuck on underlying material and eventually slips forward once the stresses build up to a certain point. That slip is what we record as a tiny quake.” In Alaska, calving of tidewater glaciers can also cause glacier quakes. While these glacier quakes have all been detected on volcanoes, that doesn’t mean the phenomenon occurs only on glaciated volcanic mountains. “We’re likely seeing these events at these volcanoes because they have large glaciers and/or they are heavily instrumented with lots of nearby seismometers,” said Caplan-Auerbach. The combination of these two factors has led groups that track quakes, such as the Pacific Northwest Seismic Network, to show high numbers of quakes around Mount St. Helens and Mount Rainier—where there are several seismometers that can pick up even small quakes—and much fewer around Mount Baker, where there is only one seismometer, at Glacier Peak. As demonstrated by the Alaskan glaciers, many of which were not on volcanoes, these glacier quakes can be measured in most places where there is a large glacier and a seismometer to track movement. Moran notes that the activity on Mount Rainier was especially interesting to researchers at first. After they determined that the small, repetitious quakes were not a sign of impending eruption like at Mount St. Helens, they were able to establish, in the late 2000s, that the glacier quakes were connected to storm systems in the area. Moran explained that these storms “were putting [rain]water into the glacier and allowing it to slip more readily for a short time,” which increased the frequency of the quakes. Another interesting glacier quake that researchers are still puzzling over is a repeating quake coming from the Easton Glacier on Mount Baker. Not only has this quake occurred every day in the same spot for the past 15 to 20 years during quake season, it is also much stronger than other glacier quakes in the area, measuring around magnitude 2 on the Richter Scale. “The waveform [of this quake] is identical from event to event, which is unusual for an environment like a glacier where you’d be expecting a lot of change,” Moran said. “This [identical waveform] means the quake is happening in the exact same place and nothing has changed—the ice is the same thickness and the rock is the same composition.” This quake is likely the result of a stick-slip behavior of the glacier, but researchers are still eager to learn more about it. Some of these glacier quakes are too small for people in the area to feel, but others, like the repeating Easton Glacier terminus quake, has been recorded on seismometers out to 100 kilometers away, said Moran. 
Mauri Pelto, a professor of environmental science at Nichols College and science director of the North Cascades Glacier Climate Project, told GlacierHub that his team of researchers and artists felt the small glacier quakes almost every day during their annual field season in the North Cascades. Pelto has studied the Easton Glacier for the past 40 years and seen significant terminus retreat, meaning that the identical waveform glacier quakes must be occurring somewhere on the ice-rock interface other than at the terminus. While these glacier quakes are of interest to researchers, they pose little to no threat to tourists and surrounding communities. John Mutter, a professor of earth and environmental sciences at Columbia University, told GlacierHub that few researchers focus on this topic for that precise reason. “One important motivation for studying earthquakes is that they cause damage and deaths. Glacier quakes do not,” Mutter said. Although there was initial concern when these quakes were first spotted on seismometers decades ago, seismologists have clearly defined the difference in volcanic activity and glacier activity and these relatively small quakes are too small to trigger volcanic events. As technologies and access to these areas improve, the data we have will follow suit, allowing seismologists and climatologists to have a better understanding of glacier movement and the overlap of climate patterns and ice flow.
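Moran's observation that the Easton Glacier event has an "identical waveform" from event to event is the kind of claim seismologists typically test with waveform cross-correlation: a template event is slid along the continuous record, and any window whose normalized correlation approaches 1 is flagged as a repeat of the same source. The sketch below illustrates that idea on synthetic data; the 0.9 threshold and the traces themselves are assumptions for illustration, not the observatory's actual detection pipeline.

```python
import numpy as np

def normalized_xcorr(template: np.ndarray, window: np.ndarray) -> float:
    """Normalized correlation of two equal-length traces (1.0 = identical shape)."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    w = (window - window.mean()) / (window.std() + 1e-12)
    return float(np.dot(t, w) / len(t))

def find_repeats(trace: np.ndarray, template: np.ndarray, threshold: float = 0.9):
    """Slide the template along a continuous trace; return start indices of matches."""
    n = len(template)
    hits = []
    for start in range(len(trace) - n + 1):
        if normalized_xcorr(template, trace[start:start + n]) >= threshold:
            hits.append(start)
    return hits

# Synthetic demo: bury two copies of the same wavelet in background noise.
rng = np.random.default_rng(0)
wavelet = np.sin(np.linspace(0, 6 * np.pi, 200)) * np.hanning(200)
trace = rng.normal(0, 0.1, 5_000)
for onset in (1_000, 3_500):
    trace[onset:onset + 200] += wavelet

print(find_repeats(trace, wavelet))  # indices clustered near 1000 and 3500
```

A repeating quake with a truly identical waveform keeps scoring near 1.0 against its own template year after year, which is what makes the Easton Glacier source stand out.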
Environmental Science
Cold War nuclear weapons tests scattered radioactive waste across the globe. That waste can still be found in trace amounts in the sea, the soil, and — according to a new study — the flesh of wild boars roaming the forests of southern Germany. Experts have long known that Europe’s wild boars have high levels of radioactivity, so high, in fact, that their meat is generally unsafe to eat. In some regions, hunters avoid the animals entirely, which has caused their numbers to surge. Scientists have believed this was the result of the 1986 meltdown of the Chernobyl nuclear plant, which sent radioactive material wafting over Europe. But the new research suggests that much of the radioactive material in wild boars predates the Chernobyl disaster. For the study, scientists examined 48 samples of wild boar meat from across southern Germany, finding high levels of radioactive cesium in most of the samples. Comparing the levels of short-lived cesium-137 with longer-lived cesium-135, they were able to infer that the boars had been contaminated with fallout from nuclear weapons tests carried out during the Cold War. Overall, 88 percent of the samples surpassed the safe limit for radioactive cesium in food. And nuclear weapons tests accounted for as much as 68 percent of the contamination in boars. The findings were published in the journal Environmental Science and Technology. Over decades, radioactive cesium from nuclear tests has slowly migrated underground. As a result, most animals have been exposed to less cesium over time, but not wild boars. Because boars root around in the ground for food, such as buried deer truffles, they are continually excavating radioactive fallout, which explains why levels of radioactive cesium remain persistently high in boars, even as they drop in other animals.
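The source attribution described above rests on a two-endmember mixing argument: weapons fallout and reactor fallout carry distinct cesium-135 to cesium-137 signatures, so a sample's measured ratio constrains how much of its cesium-137 came from each source. A minimal sketch of that logic follows, assuming a simple linear mix; the endmember ratios and sample value are placeholders for illustration, not the values reported in the paper.

```python
# Two-endmember mixing: what fraction of a sample's Cs-137 came from weapons fallout?
# If f is the weapons-derived fraction of Cs-137, the sample's Cs-135/Cs-137 ratio is
#   R_sample = f * R_weapons + (1 - f) * R_reactor
# so f = (R_sample - R_reactor) / (R_weapons - R_reactor).

def weapons_fraction(r_sample: float, r_weapons: float, r_reactor: float) -> float:
    f = (r_sample - r_reactor) / (r_weapons - r_reactor)
    return min(max(f, 0.0), 1.0)  # clamp to the physically meaningful range

# Placeholder endmember ratios (illustrative only; the paper reports its own values).
R_WEAPONS = 2.7   # hypothetical Cs-135/Cs-137 ratio for global weapons fallout
R_REACTOR = 0.5   # hypothetical ratio for Chernobyl-derived fallout

print(weapons_fraction(r_sample=1.6, r_weapons=R_WEAPONS, r_reactor=R_REACTOR))  # 0.5
```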
Environmental Science
Kate Gardner reviews Nomad Century: How to Survive the Climate Upheaval by Gaia Vince

[Photo: crisis times – the Bangladeshi capital Dhaka is confronted by thousands of people who arrive every day, displaced by monsoons and flooding caused by climate change. Courtesy: Shutterstock/SM AKBAR ALI PJ]

Dhaka, the capital city of Bangladesh, is home to 10 million climate refugees – people who have been internally displaced as monsoons and flooding destroy homes and farmland. With another 2000 arriving in the city every day, it’s proof that human migration caused by climate catastrophe is already a reality and accelerating fast.

In Nomad Century, environmental-science journalist Gaia Vince explains that climate migration is going to happen on a huge scale this century and that the world needs to work together to manage it financially, safely and humanely. As she leads us through the climate scenarios detailed by the Intergovernmental Panel on Climate Change (IPCC), it is clear that even if we limit global heating to 2 °C (an outcome she deems unlikely), vast swathes of the Earth will be uninhabitable by 2050, displacing hundreds of millions of people. With a 4 °C temperature rise, billions of people will be affected in this way.

Such a scenario sounds catastrophic, but Vince argues that, properly managed, it needn’t be. She explains how we can use the little time we have left to start planning – and begin moving people before disaster strikes them. Vince shares examples of where this is already happening on small scales and her text is packed with references to the studies behind her suggestions.

Concentrating a growing world population into a smaller liveable area will be a massive social, political and technological challenge. New mega cities will have to be built in places like Canada, Greenland and Scotland. Sudden growth provides the opportunity to scale up sustainable housing, infrastructure and farming. Vince’s solutions always put people first, acknowledging that the world’s poorest will be hardest hit and that they must be helped not only to find safe new homes, but also new livelihoods.

Managing large-scale migration is only part of the solution. We also need to decarbonize – remove some of the carbon dioxide already in our atmosphere – and fix damage done by climate change. This is the hard-science part of the book, with whirlwind tours of green energy generation, habitat restoration and geo-engineering. Every option is considered, and Vince suggests that ultimately most will be needed – there is no single solution to this crisis.

Nomad Century deftly led me from the terror of what humans have done to sincere belief that we can and must create a better world that is liveable for every person.

2022 Penguin Books 288pp £20.00hb £9.99ebook
Environmental Science
Time and again, studies have shown that people of color in the US are exposed to much higher levels of fine-particulate air pollution than their white counterparts are. It's an injustice documented by "a mountain of evidence," as a 2021 New York Times report says. And on Tuesday, a study published in the journal Nature Communications presented information that's even more troubling.

"Populations living in racially segregated communities not only breathe more fine-particle air pollution, they breathe a form of pollution that is much more concentrated in toxic, cancer-causing compounds," John Volckens, a Colorado State University engineering professor and co-author of the study, said in a statement. Those compounds comprise elements like lead, cadmium and nickel and often stem from the same human activities contributing to global warming -- industrial work in factories, for instance. They're also the harmful substances connected to serious health risks other than cancer, like neurological and respiratory damage.

Previously, scientists had focused on figuring out where fine-particulate matter -- also called PM2.5 because each particle is less than 2.5 microns in diameter -- resides. Such studies have clearly shown PM2.5 permeating places where communities of color are located. And these conclusions illustrated how people of color are breathing in a greater number of particles known to add to the global burden of disease.

Clearly, the situation was already dire. But Volckens and fellow researchers wanted to dig deeper. They examined the toxic metal components in PM2.5 to understand where the most hazardous type of fine-particulate matter floats in the air -- and they specifically included an indicator of racial residential segregation in their analysis. In other words, the team looked to see whether those extra-hazardous PM2.5 clouds were concentrated in areas called home primarily by people of color. The results were concerning.

"While concentrations of total fine-particulate matter are two times higher in racially segregated communities, concentrations of metals from anthropogenic sources are nearly 10 times higher," the study states. The phrase "anthropogenic sources" simply refers to pollution sources originating in human activity. (The sketch at the end of this article illustrates the population-weighted arithmetic behind ratios like these.)

Connecting the dots, the study also highlights the possibility that known and recorded health disparities in communities of color may be blamed on these populations being exposed to higher amounts of such carcinogenic metals in the air. "Across the country, people in some racial and ethnic minority groups experience higher rates of poor health and disease for a range of conditions ... when compared to their white counterparts," the US Centers for Disease Control and Prevention says. That includes diabetes, hypertension, obesity, asthma, heart disease, preterm birth, and notably, in this case, cancer. "These disparities sometimes persist even when accounting for other demographic and socioeconomic factors, such as age or income," the CDC says.

Defining 'redlining'

Of the issue as a whole, Volckens said, "this is the unfortunate result of systemic racial and ethnic injustices, such as redlining, that have plagued our nation's history." Redlining nowadays is used to describe many forms of systemic, race-based discrimination in real estate. But in a nutshell, the term is rooted in government homeownership programs created as part of the New Deal in the 1930s.
Basically, these programs offered government-insured mortgages for homeowners as an attempt to alleviate some of the economic downfall following the Depression. But the programs relied on maps that ranked neighborhoods from most desirable to most hazardous -- "most hazardous" neighborhoods didn't get the benefits. And most of those no-benefit neighborhoods were largely populated by Black and immigrant residents.

Fast-forward through the years, and we find that businesses had worked with local zoning officials to use those unfairly drawn, simply segregated maps to decide where to put pollutive operations. Things like industrial plants, major roadways and, in coastal regions, shipping ports. Things directly associated with air pollution, including PM2.5. And as you might expect, officials weren't clamoring to put those operations in "most desirable" neighborhoods.

[Image: the New Deal's redline map of Brooklyn, New York, with various sections highlighted in different colors. National Archives and Records Administration, Mapping Inequality]

In fact, a study from March, published in the journal Environmental Science and Technology Letters, bluntly demonstrated the link between PM2.5 and redlined areas. "Findings here highlight that present-day disparities in US urban pollution levels reflect a legacy of structural racism in federal policy-making -- and resulting investment flows and land use decisions -- apparent in maps drawn more than 80 years ago," the March study says.

The only silver lining the new study's researchers were able to find was that some recent regulations on marine fuel oil seem to have reduced the concentration of a toxic metal called vanadium in coastal cities. Those policies also "sharply lessened differences in vanadium exposure by segregation," the authors write. "Sweeping environmental cleanups, like the establishment of national clean-fuel standards, not only reduce air pollution nationwide, but also serve to reduce the pollution exposure disparities we see in many segregated communities," Jack Kodros, a Colorado State University researcher and lead author of the study, said in a statement.

A timely realization, with COP27 just around the corner.
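The headline comparison in the Nature Communications study ("two times higher" total PM2.5, "nearly 10 times higher" anthropogenic metals) is a ratio of exposures between the most and least segregated communities. With a table of community-level concentrations and populations, that kind of figure reduces to population-weighted group means; the sketch below shows the arithmetic on made-up numbers, not the study's data or its exact method.

```python
# Illustrative sketch: population-weighted mean exposure by segregation group,
# and the ratio between groups. Every number below is made up for illustration.

communities = [
    # (segregation_group, population, anthropogenic_metal_conc_ng_m3)
    ("most_segregated",  120_000, 9.5),
    ("most_segregated",   80_000, 11.0),
    ("least_segregated", 150_000, 1.1),
    ("least_segregated",  90_000, 0.9),
]

def weighted_mean(group: str) -> float:
    rows = [(pop, conc) for g, pop, conc in communities if g == group]
    return sum(pop * conc for pop, conc in rows) / sum(pop for pop, _ in rows)

most = weighted_mean("most_segregated")
least = weighted_mean("least_segregated")
print(f"exposure ratio: {most / least:.1f}x")
# ~9.9x with these made-up inputs, echoing the order-of-magnitude gap described above
```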
Environmental Science
Wastewater sector emits nearly twice as much methane as previously thought

Municipal wastewater treatment plants emit nearly twice as much methane into the atmosphere as scientists previously believed, according to new research from Princeton University. And since methane warms the planet over 80 times more powerfully than carbon dioxide over 20 years, that could be a big problem.

"The waste sector is one of the largest anthropogenic sources of methane in the world," said Mark Zondlo, professor of civil and environmental engineering and associated faculty at the Andlinger Center for Energy and the Environment. "As cities continue to urbanize and develop net-zero plans, they can't ignore the liquid wastewater treatment sector."

Zondlo led one of two new studies on the subject, both reported in papers published in Environmental Science & Technology. One study performed on-the-ground methane emissions measurements at 63 wastewater treatment plants in the United States; the other used machine learning methods to analyze published literature data from methane monitoring studies of various wastewater collection and treatment processes around the globe.

"Not many people have studied the methane emissions associated with wastewater infrastructure, even though we know that it's a hotspot for methane production," said Z. Jason Ren, who led the second study. Ren is a professor of civil and environmental engineering and the Andlinger Center for Energy and the Environment.

The Intergovernmental Panel on Climate Change (IPCC) has established guidelines that allow researchers and institutions like the U.S. Environmental Protection Agency (EPA) to estimate methane emissions from wastewater treatment plants based on their specific treatment processes. However, those guidelines were developed from limited measurements at a relatively small number of wastewater treatment plants. When the researchers used the Princeton Atmospheric Chemistry Experiment (PACE) Mobile Laboratory to quantify plant-wide emissions by measuring the plumes of 63 treatment plants on the east coast and in California, they found that the IPCC guidelines consistently underestimated emissions from treatment plants of all sizes and treatment processes.

If the results from those 63 plants are representative, actual methane emissions from wastewater treatment facilities across the U.S. would be about 1.9 times greater than emissions estimates that use existing IPCC and EPA guidelines, meaning that those guidelines underestimate methane emissions by an amount equivalent to 5.3 million metric tons of carbon dioxide. (The conversion behind that carbon dioxide figure is sketched at the end of this article.) Interestingly, the research team that performed the second independent study to analyze literature data on methane emissions came to a similar conclusion: estimated methane emissions from municipal wastewater treatment in the U.S. were around double what existing guidelines would predict.

"We were able to show, using two different approaches, that methane emissions are a much bigger issue for the wastewater sector than previously thought," Ren said.

The usual suspects in wastewater methane emissions

The researchers believe that since the IPCC guidelines were developed from limited measurements at a small number of wastewater treatment plants, they might not accurately represent the variation in emissions that exists between facilities.
"The guidelines assume a certain level of efficiency in these wastewater treatment systems that may not exist on a plant-to-plant basis," said Daniel Moore, first author of the direct measurement study and a graduate student in civil and environmental engineering. He pointed to leaks and inefficient equipment that may go undetected at wastewater treatment plants but could lead to significant greenhouse gas emissions. Cuihong Song, first author of the critical review and a postdoctoral researcher in civil and environmental engineering at Princeton, said that treatment plants equipped with anaerobic digesters were among the biggest methane leakers. Anaerobic digesters are airtight vessels containing anaerobic microbes that work without oxygen to break down wastewater sludge or solid waste and produce methane-rich biogas in the process. That methane can be used to generate heat or electricity to power other aspects of the treatment process. But when anaerobic digesters operate inefficiently, leaks and pressure buildups can allow methane to escape as fugitive emissions. "If the digester is not gas-tight, you can end up with high methane emissions," Song said. The researchers found that plants with anaerobic digesters emitted more than three times the methane than plants without digesters. Higher emissions from anaerobic digesters could be a serious problem: While wastewater treatment plants equipped with anaerobic digesters account for less than 10% of all treatment plants in the U.S., most of those plants are large facilities that, combined, treat around 55% of the wastewater in the country. "A lot of money is going into decreasing emissions by implementing these digesters, because, in theory, they're closed systems. When they're working correctly, you can centralize the methane into one location," Moore added. "It's the inefficiencies and leakages that cause many of the problems." Along with anaerobic digesters, the critical review found that methane emissions from sewer systems contribute significantly to nationwide methane emissions. However, current guidelines largely do not account for fugitive methane emissions from sewers, which the researchers said are important to account for in future greenhouse gas inventories. "We have more than a million miles of sewers in the U.S., filled with rich organic matter that may be causing methane emissions, but we have very little understanding of their scope," Ren said. Better monitoring, better guidelines The researchers are now working with partners to build an inventory and methodology that would allow managers to easily monitor their methane emissions. By identifying the sources in the wastewater treatment process that release the most methane emissions, their work can also inform efforts to mitigate fugitive emissions. "Methane has a short lifetime in the atmosphere, so if we're able to cut off the spout of emissions across the country, methane's contribution to warming will quickly diminish," said Moore. "Ten years from now, we wouldn't have to worry so much about methane." Ren added that the methane produced from processes like anaerobic digestion also serves as a valuable energy source. "By identifying and mitigating fugitive methane emissions, we would see double benefits," he said. "We would reduce greenhouse gas emissions in the near term, and we would maximize the amount of methane we can recover from the wastewater treatment process." 
Still, more work is needed to monitor methane emissions at various timescales from treatment plants and sewer networks of different sizes and treatment processes. For example, few studies have performed long-term, continuous monitoring of methane emissions from wastewater treatment plants, even though the emissions rate can vary daily or even seasonally, being generally higher in the spring and summer than in the winter. "Ultimately, we need to have a full accounting of the emissions from plants across many timescales," Zondlo said. He added that preliminary analyses of subsequent measurements from additional plants at various times of the year have highlighted the importance of understanding seasonal variation in emissions. At the same time, researchers will need to develop better sampling methods to understand emissions from hard-to-reach areas like sewers, since the diffuse nature of sewer networks along with their high humidity levels make it difficult to capture an accurate picture of emissions with existing methodologies. By overcoming those hurdles and continuing their monitoring efforts, the researchers could contribute to a wider effort to create updated guidelines that better estimate methane emissions from the wastewater sector. "Many agencies are recognizing that methane emissions from wastewater sector are important to study," Ren said. "This research is not just reporting our own findings. We're echoing what the broader research community has observed and identified as a significant gap of knowledge." More information: Daniel P. Moore et al, Underestimation of Sector-Wide Methane Emissions from United States Wastewater Treatment, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.2c05373 Cuihong Song et al, Methane Emissions from Municipal Wastewater Collection and Treatment Systems, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.2c04388 Journal information: Environmental Science & Technology Provided by Princeton University
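The step from "about 1.9 times greater" to an underestimate "equivalent to 5.3 million metric tons of carbon dioxide" is a global-warming-potential (GWP) conversion: a mass of methane multiplied by a CO2-equivalence factor. The sketch below shows that arithmetic with commonly used IPCC GWP values; back-calculating the implied methane tonnage under a 100-year GWP is an illustrative assumption, since the papers' exact factors are not given here.

```python
# CO2-equivalence arithmetic for methane, as used when expressing an emissions gap
# in "tons of CO2". The GWP values below are typical IPCC figures; the underlying
# papers may use slightly different ones.

GWP_20  = 84   # roughly the "over 80 times" 20-year factor noted above
GWP_100 = 28   # 100-year horizon commonly used in national inventories

def ch4_to_co2e(tonnes_ch4: float, gwp: float) -> float:
    return tonnes_ch4 * gwp

# Illustration: back-calculate the methane mass implied by a 5.3 Mt CO2e gap,
# assuming a 100-year GWP was used (an assumption, not a figure from the papers).
gap_co2e_t = 5.3e6
implied_ch4_t = gap_co2e_t / GWP_100
print(f"implied gap: ~{implied_ch4_t / 1e3:.0f} kt of methane")                       # ~189 kt CH4
print(f"same mass on a 20-year basis: ~{ch4_to_co2e(implied_ch4_t, GWP_20) / 1e6:.0f} Mt CO2e")  # ~16 Mt
```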
Environmental Science
[Photo: embracing plant-based foods is a way to start healing the planet. Getty Images]

In Unearthed, Yahoo Life discusses some of the most pressing issues facing our planet — and reveals what you can do to help make a real difference.

First, the good news: Veganuary, the annual worldwide go-vegan campaign happening now in a bid to aid the ailing planet — held every January since 2014 — has made an impact. By going vegan for just a month, according to the campaign, the cumulative million-plus participants have saved an estimated 1.6 million gallons of water (equal to flushing a toilet about a half million times) and 103,840 tons of carbon emissions (equal to driving around the world about 15,000 times) — not to mention 3.4 million animals. Plus, the newfound eating habits tend to stick, according to this year’s Veganuary strategy report, which notes: “Participants report that they continue to reduce their consumption of animal products even after January, and food companies keep a larger and better variety of plant-based options on the shelves after successful product and menu launches during Veganuary.”

In 2020, the EAT-Lancet report indeed found that by “transforming eating habits, improving food production and reducing food waste” — admittedly a tall order — it is entirely possible to feed a future population of 10 billion people “a healthy diet within planetary boundaries.”

So, what’s the bad news? Brace yourself: There’s a lot. That’s because animal agriculture is one of the largest sources of global greenhouse gas emissions, with direct connections to mass deforestation, pollution, hunger, antibiotic resistance, species extinction and more.

Some sobering facts: Meat and dairy provide only 18% of the calories humans consume but use 83% of global farmland and are responsible for 60% of all agriculture’s greenhouse gas emissions, according to a 2018 watershed study from University of Oxford food sustainability researcher Joseph Poore. Further, his research found, farmed animals occupy 30% of the planet’s ice-free land and consume roughly the same percentage of all fresh water.

Livestock farming is responsible for a large share of greenhouse gas emissions. Although the specific percentages vary greatly depending on the source — from 14.5%, according to the Food and Agriculture Organization of the United Nations (FAO), to 26% as found by Poore, and as high as 51%, according to a recalculation of the FAO’s findings from the now-defunct environmental World Watch Magazine, which called out the FAO for its private sector partnerships with the meat, poultry, egg and dairy industries — we’re talking about more annual greenhouse gas emissions than all global transportation combined.

“Very simply, even if we were to stop fossil fuel emissions immediately, today, emissions from our feeding systems alone would take us past the 1.5 degrees Celsius [2.7 degrees Fahrenheit],” says Nicola Harris, spokesperson for the global grassroots Plant Based Treaty, referring to the Paris Agreement on climate change goal — to limit global warming to well below 2, preferably to 1.5 degrees Celsius, compared to pre-industrial levels.
“If we’re serious about the Paris Agreement and combating the climate emergency,” Harris tells Yahoo Life, “we have to take action on fossil fuels and the feed system in equal measure.”

If this is all news to you, it might be because, according to a 2021 New York University study, the world’s biggest meat and dairy companies have spent a large amount of time, money and effort “into downplaying the link between animal agriculture and climate change, and into fighting climate policy more generally.” The report claims they’ve done it by lobbying Congress and the Environmental Protection Agency to block legislation that might limit production and by funding research that minimizes the link between climate change and animal agriculture, something explored further in the 2021 documentary Eating Our Way to Extinction.

This has led Poore to an insight that's become a touchstone for activists: “A vegan diet is probably the single biggest way to reduce your impact on planet Earth, not just greenhouse gases, but global acidification, eutrophication [excessive plant and algal growth that lowers oxygen levels in water], land use and water use,” he said. “It is far bigger than cutting down on your flights or buying an electric car.”

Here’s a breakdown of how animal agriculture is damaging the planet, and how to make positive change.

Deforestation

In what were once pristine stretches of Amazon rainforest are now millions of farms related to animal agriculture — land cleared of trees either for grazing cattle or to grow grain that is then shipped around the world to feed livestock.

[Photo: an area of the Amazon rainforest is cleared for agriculture. Jose Caldas/Brazil Photos/LightRocket via Getty Images]

Poore, who was unavailable to speak with Yahoo Life but pointed to a recent YouTube lecture he presented on his findings, explains in that video, “Most of this deforestation has been for beef for the local South American market and for soy to feed pigs, especially in Asia.” To put it another way, he said, “The deforestation of the Amazon rainforest has largely taken place because many humans prefer the taste of animal proteins to vegetable proteins.”

Using satellite data to track annual tropical forest lost to agriculture since 2000, Poore and his researchers found the planet has lost an area equal to the size of the “U.K., Belgium, Germany, the Netherlands and Poland combined” — all of which has released massive amounts of carbon dioxide into the atmosphere, threatening 13,000 known species with extinction.

Deforestation is also meant to be permanent. “When the animals finish grazing, they light pasture maintenance fires to prevent the forests from coming back,” Dr. Sailesh Rao, founder and executive director of Climate Healers and producer of climate-focused documentaries including Cowspiracy, tells Yahoo Life.

Eutrophication

Once forests are cleared and crops to feed cows and pigs are planted, the fields are typically sprayed with nitrogen fertilizer, which sends toxic run-off into water sources and, eventually, the ocean, causing algae blooms that choke marine life and create so-called dead zones, where nothing can survive. Agriculture is a leading cause of oceanic dead zones, according to the EPA.

[Photo: a green-algae bloom overtakes a salt marsh in Spain. Mikel Bilbao/VW Pics/Universal Images Group via Getty Images]
(Mikel Bilbao /VW Pics/Universal Images Group via Getty Images)“Since the demand for meat has grown, these low-oxygen dead zones have been growing and growing,” Sylvia Earle, former chief scientist of the National Oceanic and Atmospheric Administration, said in the documentary Eating Our Way to Extinction. “‘OK,’ people say, ‘that’s too bad for the fish,’ ... but we need to understand that what we do to the ocean, we’re doing to ourselves.”Poore found that food production causes about 80% of eutrophication — including in a dead zone in the Gulf of Mexico that is “almost the size of Belgium, that doesn’t have enough oxygen for fish.”Since the 1950s, the documentary states, referencing a Nature journal study, the planet has lost almost 90% of all species of large fish in the ocean — the leading cause of which is overfishing, with a third of all edible fish caught in the ocean now being fed to livestock and farmed fish (which can create more methane than cows, notes Poore). “While some believe switching from a meat- to fish-based diet will help the planet,” notes the film’s narrator, Kate Winslet, “this simply could not be further from the truth.”Toxic emissionsAccording to a recent study published in the PLOS Climate journal, ending meat and dairy production would “pause” the growth of greenhouse gas emissions for 30 years, effectively canceling out emissions from all other economic sectors. That’s because animal agriculture is responsible for 65% of the world’s nitrous oxide emissions, and 33% of methane emissions, as was calculated by the U.N. in 2021. “Animal farming is the largest cause,” says Harris. “And what we know is we need to cut it by about 45% a decade to even have a chance of the 1.5 [Paris Agreement] target.”Carbon dioxide is more complicated, Harris says, because different studies show different numbers. “But what we do know is our feeding system, overall, is responsible for a third of human-caused greenhouse gas emissions, and the largest contribution is from animal agriculture," Harris says. "It’s also the largest driver of deforestation. So we’re not only looking at direct emissions from animals, but carbon going back into atmosphere [from deforestation] ... with 83% of farmland used for raising animals but offering just 18 percent of our calories.”Human falloutWhile people go vegan for many reasons, from animal welfare to health to wanting to help the planet, it’s common to keep such reasons siloed, points out Isaias Hernandez, founder of Queer Brown Vegan, a platform focused on educating how “social, racial and environmental issues are deeply interconnected.”Hernandez, who recently appeared with Billie Eilish, who is vegan, in a Vogue video about climate change, grew up food insecure in California’s San Fernando Valley, going to food banks and driving half an hour to get to the nearest grocery store offering fresh produce, eventually learning the term “food desert,” he explains. 
“I saw that racism was deeply interconnected into food systems,” Hernandez tells Yahoo Life.As an environmental science major at the University of California, Berkeley, Hernandez and his fellow students “talked about the fact that so many migrant farm workers choose to pick produce, as it’s less traumatic than slicing animals, which is a truly traumatic process,” Hernandez says, “and how it leads to mental health issues for people of color, or Latinos, like myself, including high rates of PTSD.”Furthermore, he says, much of the environmental destruction wrought by animal agriculture disproportionately affects indigenous lands and communities of color — including through manure pollution and runoff into Black and brown communities, “which then becomes a children’s issue, with high rates of asthma, like near North Carolina hog farms.” The bottom line, Hernandez stresses, is that “veganism was a way for me to divest away from this extractive system,” and to highlight that “these systems are products of colonialism and white supremacy.”What we can doHarris believes that if more people truly knew the facts, they would make changes. “I’m not sure enough people understand the gravity of the situation and how dangerous the emergency is, and we need more politicians to communicate it,” she says. Because awareness of other causes of the climate crisis has gotten through, she said, with people learning to turn off lights, walk instead of drive and recycle, she’s hopeful.“The biggest individual action you can take is to adopt a vegan diet,” Harris says. “There isn’t time for baby steps. This is, like, a do-or-die decade. What we do in the next couple of years will determine future of planet ... so we need to do everything and we need to do it now.”To get there, activists stress the importance of learning about all the reasons for going vegan — including intense and wide-scale animal suffering. “I think everyone has their own individual motivating factors to go vegan,” says Harris, who was first moved to change her diet after seeing horrific undercover footage from slaughterhouses — similar to the 49% of Veganuary participants who told the campaign that animal welfare was their driving factor. “But I took it upon myself to learn about all the different reasons — the climate crisis, food deserts — and you realize everything is interconnected,” she says.To get started, Rao stresses, “Find help. … You must know someone in your circle who is vegan. Have them as a buddy” for shopping, cooking and eating tips. Support can also be found through vegan “buddy” programs, social media and published starter guides including Main Street Vegan and the Veg News Guide to Being a Fabulous Vegan. There’s also a wealth of information on the Veganuary website.Beyond changing personal diets, activists stress the importance of supporting policymakers who are looking to alter the systemic problems of animal agriculture. “Trying to bring out city-level changes, and in schools, prisons and universities, is key,” says Harris.Poore sees room for that happening through government incentives for farms to be more responsible, as well as through digital tools that would allow farmers to measure their impacts and alert consumers to the footprint of their food choices as they shop.Rao agrees that making systemic change is key, and says he’s been “thrilled” to see plant-based policies created by vegan New York City Mayor Eric Adams, for example, who brought default-vegan menus to city hospitals and “vegan Fridays” to public school cafeterias. 
“To me, it’s a breakthrough of an institutional leader saying these things,” he says. “That gives me a lot of hope and faith in the future.”
Environmental Science
New nanotech identifies chemical composition and structure of impurities in air, liquid and living tissue Using conventional testing techniques, it can be challenging—sometimes impossible—to detect harmful contaminants such as nano-plastics, air pollutants and microbes in living organisms and natural materials. These contaminants are sometimes found in such tiny quantities that tests are unable to reliably pick them up. This may soon change, however. Emerging nanotechnology (based on a "twisted" state of light) promises to make it easier to identify the chemical composition of impurities and their geometrical shape in samples of air, liquid and live tissue. An international team of scientists led by physicists at the University of Bath is contributing toward this technology, which may pave the way to new environmental monitoring methods and advanced medicines. Their work is published in the journal Advanced Materials. The emerging chemical-detection technique is based on a light-matter interaction known as the Raman effect. The Raman effect occurs when a material illuminated with light of a certain color scatters that light into a multitude of slightly different colors. It essentially produces a mini-rainbow that is dependent on how atoms within materials vibrate. Measuring the colors of the Raman rainbow reveals individual atomic bonds because molecular bonds have distinct vibrational patterns. Each bond within a material produces its own unique color change from that of the illumination. Altogether, the colors in the Raman rainbow serve to detect, analyze and monitor the chemical composition (chemical bonds) of complex molecules, such as those found within mixtures of environmental pollutants. "The Raman effect serves to detect pesticides, pharmaceuticals, antibiotics, heavy metals, pathogens and bacteria. It's also used for analyzing individual atmospheric aerosols that impact human health and the climate," said Dr. Robin Jones from the Department of Physics at Bath, who is the first author of the study. Harmful pollutants Expanding on this, co-author Professor Liwu Zhang from the Department of Environmental Science at Fudan University in China said, "Aquatic pollutants, even in trace amounts, can accumulate in living organisms through the biological chain. This poses a threat to human health, animal welfare and wildlife. Generally, it is really hard to know exactly what the chemical composition of complex mixtures are." Professor Ventsislav Valev from Bath, who led the study, added, "Understanding complex, potentially harmful pollutants in the environment is necessary, so that we can learn how to break them down into harmless components. But it is not all about what atoms they are made of. The way the atoms are arranged matters a lot—it can be decisive for how molecules act, especially within living organisms. "Our work aims to develop new ways in which the Raman effect can tell us about the way atoms are arranged in space and now we have taken an important technological step using tiny helix shaped antennas made of gold." The Raman effect is very weak—only one out of 1,000,000 photons (light particles) undergoes the color change. In order to enhance it, scientists use miniature antennas fabricated at the nanoscale that channel the incident light into the molecules. Often these antennas are made of precious metals and their design is limited by nanofabrication capabilities. 
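To put a number on that "mini-rainbow," spectroscopists report the color change as a Raman shift in wavenumbers, calculated from the excitation and scattered wavelengths. The short sketch below only illustrates that conversion; the 532 nm laser line and the 1,620 cm-1 vibrational band are assumed example values, not measurements from the Bath study.

```python
# Minimal sketch of the wavelength-to-Raman-shift conversion described above.
# The excitation wavelength (532 nm) and vibrational band (1620 cm^-1) are
# illustrative assumptions, not values taken from the Advanced Materials paper.

def raman_shift_cm1(excitation_nm: float, scattered_nm: float) -> float:
    """Raman shift in wavenumbers (cm^-1) from two wavelengths given in nm."""
    return (1.0 / excitation_nm - 1.0 / scattered_nm) * 1e7

def stokes_wavelength_nm(excitation_nm: float, shift_cm1: float) -> float:
    """Wavelength of the Stokes-scattered light for a given vibrational band."""
    return 1.0 / (1.0 / excitation_nm - shift_cm1 * 1e-7)

if __name__ == "__main__":
    lam_exc = 532.0   # a common green Raman excitation laser, in nm
    band = 1620.0     # an aromatic ring-stretching vibration, in cm^-1
    lam_scat = stokes_wavelength_nm(lam_exc, band)
    print(f"Scattered wavelength: {lam_scat:.1f} nm")   # roughly 582 nm
    print(f"Recovered shift: {raman_shift_cm1(lam_exc, lam_scat):.0f} cm^-1")  # 1620
```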
The team at Bath used the smallest helical antennas ever employed: their length is 700 times smaller than the thickness of a human hair and the width of the antennas is 2,800 times smaller. These antennas were made from gold by scientists in the team of Professor Peer Fischer at the University of Stuttgart in Germany. "Our measurements show these helical antennas help to get a lot of Raman rainbow photons out of molecules," said Dr. Jones. "But more importantly, the helical shape enhances the difference between two types of light that are often used to probe the geometry of molecules. These are known as circularly polarized light. "Circularly polarized light can be left-handed or right-handed and our helices can, basically, handshake with light. And because we can make the helices twist to the left or to the right, the handshake with light that we devised can be both with left or right hands." "While such handshakes have been observed before, the key advance here is that we demonstrate for the first time that it is felt by molecules, as it affects their Raman rainbow. This is an important step that will allow us to distinguish efficiently and reliably between left- and right-handed molecules, first in the lab and then in the environment." Crystal violet In order to demonstrate that the new handshaking between light and antennas could be transmitted to molecules, the researchers made use of molecules—crystal violet—that are unable to 'handshake' with light by themselves. Yet these molecules behaved as if they could perform this function, expressing the 'handshaking' ability of gold nanohelices to which they were attached. "Another important aspect of our work here is that we worked with two industrial partners," said Professor Valev. "VSParticle produce standard nanomaterials for measuring Raman light. Having common standards is really important for researchers around the world to be able to compare results." He added, "Our industrial partner Renishaw PLC is a world-leading manufacturer of Raman spectroscopy and microscopy equipment. Such partnerships are essential, so that new technology can move out of the labs and into the real-world, where the environmental challenges are." Building on this work, the team is now working on developing more advanced forms of Raman technologies. More information: Robin R. Jones et al, Dense Arrays of Nanohelices: Raman Scattering from Achiral Molecules Reveals the Near‐Field Enhancements at Chiral Metasurfaces, Advanced Materials (2023). DOI: 10.1002/adma.202209282 Journal information: Advanced Materials Provided by University of Bath
Environmental Science
Grasshoppers threaten to devour Alberta crops following extreme heat Pest species taking wing early this year following hot, dry spring weather Driven by drought, heat-loving grasshoppers are thriving in Alberta, threatening to devour crops in central and southern parts of the province. Pest species of grasshoppers are taking wing early this year — and swarming in greater numbers. Robert Badry, who operates his family's cereal farm near Heisler, 160 kilometres southeast of Edmonton, said half an acre of his wheat crop was devoured in a matter of days. "The ground was literally moving with them," he said. "It was bare, you could see the soil. They just eat it to nothing." Grasshoppers have long been a menace to agricultural producers. Like locusts, the insects are incredibly destructive. Even a moderate infestation — 10 grasshoppers per square metre — can consume up to 60 per cent of available vegetation. This year, the hot and dry weather that fuelled historic wildfires in Alberta has contributed to a scourge of grasshoppers now threatening to strip already stunted crops. Experts warn the infestations increase the risk of future outbreaks and serve as a reminder of the need to better monitor the pests. Recent rains may have slowed down the grasshoppers — but only temporarily, as populations have been booming for years. On his farm, Badry is concerned about the prospect of persistent outbreaks. He expects grasshoppers will move in again when arid weather returns. His crops are already suffering from lack of moisture, leaving them susceptible to poor yields and pests. "Once the drought hits and then you see the grasshoppers, you can't be surprised, they go hand in hand," he said. "They flourish in the drought and the dry. "If it's not one thing it's another this year." The Alberta government monitors grasshopper outbreaks and tracks populations through an annual count conducted each August. Last year's survey warned of the ongoing risk of outbreaks in southern Alberta and along the border with Saskatchewan. Infestations were found in isolated pockets in the Peace River region. In the south and east, numbers have been expanding since 2021, the survey found. A plague of pests Meghan Vankosky, a research scientist with Agriculture and Agri-Food Canada and co-chair of the Prairie Pest Monitoring Network, said 2023 is the worst year for grasshoppers she has seen in more than two decades. After the wildfires and record rains in Alberta this year, it's an anxious time for producers, Vankosky said. "With the fire and the flood, we're talking about the Four Horsemen of the Apocalypse, right?" she quipped. Vankosky urges farmers to keep close watch over their crops. As other food sources such as wild grasses, flowers and leafy plants dry out, grasshoppers will move onto crops for a meal. "It's not just that we have a lot of grasshoppers this year, it's that the crops aren't necessarily growing very well and so that damage is just that much more noticeable," Vankosky said. "They do tend to prefer cereals, oats, wheat, rye, barley … But in a drought year, they're going to eat really anything they can find." Although the life cycle varies between species, grasshoppers invariably thrive in heat. Females lay eggs in soil late in the summer. After a winter just below the surface, nymphs hatch from their egg pods in spring. They feed on vegetation as they grow into adulthood, moult and grow wings. Warm weather increases survival rates and accelerates each phase. 
The insect's body temperature shoots up, allowing for faster growth and for females to produce more eggs, leading to increasingly bigger hatches each spring. Vankosky said higher temperatures this year have allowed the insects to take wing early, flying to new food sources. "When it's hot, they develop faster than normal, which is really what we've seen this year." Alberta is no stranger to outbreaks. In 2021, grasshoppers swarmed in Lethbridge, destroying fields and backyard gardens. Before its extinction in the late 19th century, a grasshopper known as the Rocky Mountain locust swarmed across the West. Blizzards of the bugs would obscure the sun. Grasshopper expert Dan Johnson, a professor of environmental science at the University of Lethbridge, recalls outbreaks in the 1980s when cars and trucks were forced off the road by clouds of grasshoppers hanging thick over highways. Johnson describes this year's infestations in Alberta as "patchy," with isolated hot spots where populations are dense. "Where they are, they're very heavy," he said. "And it's a double whammy because they come when the drought is already causing trouble." Friend or foe Johnson said pest populations have been creeping up since 2018. Recent infestations should serve as a reminder that not all grasshoppers pose a threat to producers, he said, adding that the pests can be selectively managed with pesticides. Some species are harmless to crops and play an important part in the grasslands ecosystem, serving as meals for birds and other predators. "There are about 50 fairly common species and only about five of them are the bad guys," Johnson said. Pest species include the clear-winged grasshopper, Packard's grasshopper, the migratory grasshopper and the two-striped grasshopper, which is considered the largest threat. The Bruner's spur-throated grasshopper is a relatively new villain on the list of species prevalent in Alberta, Johnson said. Populations of Bruner's grasshoppers have shot up in recent years with numbers reported from Edmonton to the Peace River region. Johnson expects their range and numbers will continue to grow. Governments and producers need to be better prepared for possible invasions, Johnson said. The current surge in numbers has been made possible by sequential years of drought: warm, dry springs and hot early summers. If the trend continues, serious infestations will continue to escalate, he said. Grasshoppers and locusts are a global threat to food security and an international effort is required to knock back problem species, he said. Predicting population booms — and the extent of the damage that will be caused — can be incredibly difficult, but the risks must be managed. "We can never really wipe them out," he said. "But if we can make sure they don't get completely out of hand, that's the way to do it."
Environmental Science
Study clarifies that marine protected areas are managed with climate change in mind Scientific findings don't always translate neatly into actions, especially in conservation and resource management. The disconnect can leave academics and practitioners disheartened and a bit frustrated. "We want conservation science to be informing real-world needs," said Darcy Bradley, a senior ocean scientist at The Nature Conservancy and a former director of University of California Santa Barbara's Environmental Markets Lab. "Most managers and practitioners also want to incorporate science into their work," added Cori Lopazanski, a doctoral student at UCSB's Bren School of Environmental Science & Management. Lopazanski and Bradley were particularly curious how much science was finding its way into the management plans of marine protected areas, or MPAs. These are areas of the ocean set aside for conservation of biodiversity, cultural heritage and natural resources. The pair led a study investigating the management plans for 555 marine protected areas to clarify how the documents incorporated recommendations for climate resilience. The team found that many plans contain forward-looking strategies, even when they didn't explicitly reference "climate change" or related terms. The results appear in the journal Conservation Letters. This is the first study to examine this question in detail on an international scale. The authors considered marine protected areas of various sizes, locations and layouts across 52 countries, with plans written in nine languages. Their list included practically any marine reserve that barred extractive activities at least somewhere within its borders, including the Channel Islands National Marine Sanctuary, just off the coast of Santa Barbara. Previous studies mostly focused on the explicit language of management plans. This literal approach gave the appearance that marine protected areas weren't being managed effectively for climate change. In contrast, Lopazanski, Bradley and their co-authors searched the plans for strategies that promote resilience. The results appear worrying at first. Just over half of the plans in the study did not explicitly include strategies to tackle climate change impacts. In fact, about 22% didn't mention climate change at all. "You could mistakenly draw the conclusion that we have a long way to go to really prepare the world's MPAs for climate change," Bradley stated. However, a more holistic review revealed a different picture. Management plans overwhelmingly contained key principles for building resilience, even when they didn't explicitly mention climate change. Roughly speaking, 94% outlined long-term objectives, 99% included threat-reduction strategies, 98% had monitoring programs, and 93% incorporated adaptive management. Adaptive management evolves to keep up with changing circumstances. It's a continual process of evaluating what is and is not working, and correcting course to keep on target. It begins with setting objectives for the area: conservation goals, species and communities of interest, etc. Managers then assess what's happening in the area to develop strategies to meet these goals. The objectives and assessment then inform the MPA's design, including its size, shape and location. Once it's established, monitoring can begin to track indicators for the objectives. 
With a clear goal and active observation, managers can implement strategies and interventions, such as addressing pollution, removing invasive species, and restoring habitat. Adaptive management offers dynamic protection. "We don't have a ton of evidence about which types of climate strategies are going to be most effective well into the future because climate change impacts are a moving target," Bradley said. So she was thrilled to see how many management plans incorporated principles of adaptive management. Managing with the future in mind is particularly important in our changing world. In a recent study, Lopazanski and her colleagues found that marine heat waves impact ecological communities regardless of whether they are protected inside an MPA. The results raise the question of whether marine protected areas will remain effective conservation tools. Lopazanski believes this critique misses the point. Marine protected areas will experience losses under climate change just like protected areas on land. That doesn't mean these parks, reserves and sanctuaries aren't worthwhile. "There are some things that marine protected areas do really well," she said. They're particularly effective at mitigating the impact of fishing and other extractive activities. That's why MPAs have to be one part of a more comprehensive conservation and management plan for our ocean biodiversity and marine resources. What's more, large marine heat waves are a relatively new phenomenon, and dealing with that uncertainty is part of designing an effective MPA. "It's easy to criticize MPAs as a static strategy, ill-suited to deal with the dynamic nature of climate change," Bradley said, "But a deeper look at the plans reveals that they are more dynamic than they appear." The authors compiled many different management strategies in the paper, highlighting some they think are underutilized. They also peppered the study with examples and lessons from different MPAs. They were particularly impressed by the management plan of the Greater Farallones National Marine Sanctuary, off the coast of San Francisco. Its comprehensive plan included diverse strategies that targeted different climate change impacts and challenges facing that specific region. "This study can be a resource for managers who are looking to make their MPAs more resilient," Lopazanski said. In fact, utility was one of the study's key aims. This research was a collaboration between academic scientists and conservation practitioners. It was intentionally designed to gather information that would be immediately actionable and useful for real-world MPA management. A document to bring academics and managers just a bit closer together. More information: Cori Lopazanski et al, Principles for climate resilience are prevalent in marine protected area management plans, Conservation Letters (2023). DOI: 10.1111/conl.12972 Journal information: Conservation Letters Provided by University of California - Santa Barbara
Environmental Science
The West’s weather whiplash should not influence long-term water management There’s a water contradiction in the West: serious long-term water scarcity, with low reservoirs and depleting groundwater tables, while California is in the middle of an extremely wet and snowy winter. As water levels in dangerously depleted Lake Mead and Lake Powell drop to record lows, California has experienced one of the wettest winters on record, with a statewide snowpack that’s almost 200 percent of normal for this time of year and flooding risks from rain and snow events in March. In this way California is unlike the other six states currently negotiating a set of reductions to address drought conditions and dropping water levels in the Colorado River Basin. The means for managing water scarcity in the Basin does not always apply to California, where there can be an abundance of water and flooding followed by extreme droughts. What is a “normal” year, when California has experienced a whiplash of wet and dry over the last six years? We had wet years in 2022-23, 2018-19 and 2016-17, the second wettest year on record. We experienced dry years in 2021-22 (the third driest on record), in 2020-21 and in 2019-20. This extreme weather variability is something we must consider in our long-term water management plans, especially knowing that climate models project even greater extremes of wet and dry periods. Our research projects that future drought and wet periods in the Colorado River basin could be two times larger than those experienced in the historical record. California is also prone to high variability in wet and dry periods due to aridification along with atmospheric rivers in the winter. This all reinforces the need to be prepared with management options that are adaptable to an ever-changing climate. Despite the weather whiplash, the amount of available water is only going in one direction — down. Studies have shown that available water has been reduced by as much as 20 percent over the last century. In California, renewed emphasis has been placed on how to manage too much water and how to store excess flows during wet years through groundwater recharge and increasing reservoir storage. Managing water through climate change takes smart people working together to develop innovative solutions. For instance, the California Department of Water Resources, in conjunction with federal agencies, is now implementing Forecast-Informed Reservoir Operations (FIRO) to use improved weather forecasts to operate reservoirs and potentially store more water. The Orange County Water District operates a state-of-the-art groundwater replenishment system that is the world’s largest water purification system for indirect potable reuse. Las Vegas and the Southern Nevada Water Authority have led the way in water conservation, reducing water use per capita by 48 percent over the past 20 years. Effective water management solutions will involve an “all of the above” approach like that proposed in the August 2022 California Water Supply Strategy plan, which includes developing new water supplies, expanding storage capacity, reducing demand and improving water management strategies. The water challenges in the West are real and this wild year highlights how we don’t want to overreact due to the conditions of any one wet or dry year. Our government leaders must work together, stay committed to the investments required to enhance infrastructure, and enable innovative solutions to our water future. 
Thomas Piechota is professor of engineering and environmental science and policy at Chapman University in Orange, Calif., where he studies the climate impacts on regional water resources and society.
Environmental Science
During the summer of 2018, the Mendocino Complex Fire ripped through UC's Hopland Research and Extension Center (HREC), transforming the Northern California property's grassy, oak-dotted hillsides into a smoldering, ash-covered wasteland. "It felt like something out of the Lord of the Rings -- like Mordor. It was hard to imagine much surviving," said Justin Brashares, a professor of environmental science, policy and management at the University of California, Berkeley. But mere months after the fire, animals like coyotes, gray foxes and black-tailed jackrabbits were seen returning to the area, spotted by a grid of motion-sensor camera traps that Brashares' lab has operated since 2016 at the HREC, a multidisciplinary research and education facility located on the banks of the Russian River, about 13 miles south of Ukiah. "We were surprised that many species seem to be resistant [to the impacts of the fire]," said Kendall Calhoun, a graduate student at UC Berkeley and a member of Brashares' lab. Calhoun is the lead author of a new study that analyzed more than 500,000 camera grid images taken at the HREC in the years before and after the Mendocino Complex Fire to understand how the blaze impacted small- and medium-sized mammals on the property. The study, which appeared Monday in the journal Ecosphere, is one of the first studies to compare continuous wildlife observations made before and after a megafire. It is also one of a limited number of studies to focus on the impacts of megafires on California's oak woodlands. Oak woodland ecosystems comprise a large portion of the state, and yet are underrepresented in wildfire research compared to the conifer forests of the Sierra Nevada. "For the great majority of Californians, these oak woodlands and grassland savannahs are what we think of as the characteristic biome or ecosystem type for our state," Brashares said. "It's the primary ecosystem type for livestock grazing, and it's also the primary habitat type that's used to grow grapes for wine. It's a critical ecosystem type, and it's worth managing well." Of the eight animal species included in the study, six were found to be "resistant" to the impacts of the fire, using the area in the same ways and with approximately the same frequency as they did before the fire. These species included coyote, black-tailed jackrabbit, gray fox, raccoon, striped skunk and bobcat. Western gray squirrel and black-tailed deer, however, appeared to be more vulnerable to the impacts of the fire. Brashares and Calhoun believe many of the species were able to remain in the area thanks to small patches of tree cover spared by the fire. Photos from the camera traps reveal many animals taking refuge in these patches, using them to obtain food and resources while more heavily burned areas recovered. Some animals were even observed using these locations more often after the fire than before. These findings highlight the importance of using techniques like grazing and prescribed burning to reduce the intensity of wildfires when they happen. These lower severity fires are more likely to leave the tree canopy intact and create the types of forest heterogeneity that can benefit fire-adapted ecosystems. "Even this incredibly hot and devastating fire still managed to leave behind these little patches of unburnt areas, and we were surprised at how quickly many species were able to move into those habitat patches and then spread back out into the burned areas as they recovered," Brashares said. 
"This finding is very valuable for forest management because we can do things to the landscape that will increase the chance that when fire does come through, it will leave behind some of these fragments." An approaching inferno Calhoun was halfway around the world visiting New Zealand when he received a text message from study co-author Kaitlyn Gaynor informing him that the HREC was on fire. "I think my immediate text back was, 'Is everyone okay?'" Calhoun said. For two years, Calhoun had been helping to maintain the 36 camera traps spread across the property that had been set up in collaboration with the California Department of Fish and Wildlife to test a new way to monitor wildlife populations across the state. Calhoun had originally joined Brashares' lab hoping to study impacts of megafire on wildlife diversity, but the unpredictability of wildfire had made it difficult to find a study site. The Mendocino Complex Fire -- while terrifying and destructive -- provided him with a rare opportunity. "From what I heard, it was really scary as the fire was coming up to the property because people live on site, so there was a big rush to evacuate. The fire ended up burning more than half of the area," Calhoun said. "I was a continent away when I found out, but I was interested in hitting the ground running and making sure we got all the data we needed when I got back." Calhoun and the team first returned to the site about two months after the fire, when trees were still smoldering and the HREC resembled a "moonscape." The team's first task was to check on the cameras, 13 of which had been partially melted by the fire. In addition to replacing broken camera parts, they also checked to make sure the camera traps were set up at the same position and with the same orientation as they were before the fire, to ensure that their data remained as consistent as possible. Every three months, the team visits all 36 cameras on the site, downloading the photos, ensuring that everything is working correctly and removing any grass or debris blocking the view. They then spend countless hours reviewing each shot to sort out which photos contain animals, then identify the animals and log the data. "A lot of the data that we collect is just grass blowing in the wind," Calhoun said. In addition to small- and medium-sized mammals, the cameras also capture photos of larger animals, like black bears and mountain lions. Because these apex predators have huge home ranges -- often many times larger than the HREC's 5,300 acres -- it is impossible to get accurate information about their distributions from the study area. Calhoun said that, anecdotally, these animals were spotted much less frequently after the fire, suggesting that they were slower to return to the area after the blaze. After completing his Ph.D. this summer, Calhoun plans to continue his work as a 2023 Smith Fellow, studying how broad changes in fire regimes are affecting wildlife species across California. As part of the work, he hopes to obtain broader scale data on apex predators to better understand what happens to these animals when large fires destroy their home ranges. "For my next project, I'm really interested in looking at the broadscale effects of fire on those really wide-ranging species, like mountain lions and bears, and then also how wildfires might be impacting their relationship with people," Calhoun said. 
"The conflict between bears and humans, especially in Lake Tahoe, is really big in the in the news right now, and I think that either climate change or fire might be driving some of those interactions." Additional study co-authors include Benjamin R. Goldstein, Kaitlyn Gaynor, Alex McInturff and Leonel Solorio of UC Berkeley. This work was supported in part by the California Department of Fish and Wildlife (CDFW Grant # P1680002) and by the NSF Graduate Research Fellowship program. Story Source: Journal Reference: Cite This Page:
Environmental Science
Some spiders can transfer mercury contamination to land animals, study shows Sitting calmly in their webs, many spiders wait for prey to come to them. Arachnids along lakes and rivers eat aquatic insects, such as dragonflies. But, when these insects live in mercury-contaminated waterways, they can pass the metal along to the spiders that feed on them. Now, researchers reporting in the journal Environmental Science & Technology Letters have demonstrated how some shoreline spiders can move mercury contamination from riverbeds up the food chain to land animals. Most mercury that enters waterways originates from industrial pollution and other human activities, but it can also come from natural sources. Once in the water, microbes transform the element into methylmercury, a more toxic form, which biomagnifies and increases in organisms up the food chain. Scientists increasingly recognize spiders living on lakeshores and riverbanks as a potential link between contamination in waterways and animals that mostly live on land, such as birds, bats and amphibians, which eat the insects. So, Sarah Janssen and colleagues wanted to assess if shoreline spiders' tissues contain mercury from nearby riverbeds and establish how these animals could connect mercury pollution in water and land animals. The researchers collected long-jawed spiders along two tributaries to Lake Superior, and they sampled sediments, dragonfly larvae and yellow perch fish from these waterways. Next, the team measured and identified the mercury sources, including direct industrial contamination, precipitation and runoff from soil. The team observed that the origin of mercury in the sediments was the same up the aquatic food chain in wetlands, reservoir shorelines and urban shorelines. For instance, when sediment contained a higher proportion of industrial mercury, so did the dragonfly larvae, spider and yellow perch tissues that were collected. Based on the data, the researchers say that long-jawed spiders could indicate how mercury pollution moves from aquatic environments to terrestrial wildlife. The implication of these findings is that spiders living next to the water provide clues to the sources of mercury contamination in the environment, informing management decisions and providing a new tool for monitoring of remediation activities, explain the researchers. The team also collected and analyzed tissues from two other types of arachnids from some sites: fishing spiders and orb-weaver spiders. A comparison of the data showed that the mercury sources varied among the three taxa. The team attributes this result to differences in feeding strategies. Fishing spiders hunt near water but primarily on land; orb-weavers eat both aquatic and terrestrial insects; but it's the long-jawed species that feed most heavily on adult aquatic insects. These results suggest that although long-jawed spiders can help monitor aquatic contaminants, not every species living near the shore is an accurate sentinel, the researchers say. More information: Sarah E. Janssen et al, Mercury Isotope Values in Shoreline Spiders Reveal the Transfer of Aquatic Mercury Sources to Terrestrial Food Webs, Environmental Science & Technology Letters (2023). DOI: 10.1021/acs.estlett.3c00450 Journal information: Environmental Science & Technology Letters Provided by American Chemical Society
Environmental Science
Wildlife biologist explains bat myths It's officially spooky season: Nights are creeping in earlier. A fall chill has descended. Skeletons and witches and jack-o-lanterns dot every street. So you can expect to see swarms of bats swooping overhead as you greet trick-or-treaters, right? "Unfortunately, the month when we finally stop and think about bats, they're not as available to see in this area," said University of Maryland wildlife biologist Shannon Browne, Ph.D. '21, a bat expert and lecturer in the Department of Environmental Science and Technology. Instead, they're most active in the summer, when their preferred prey, insects, are plentiful. Browne remembers watching bats dive around her aunt's swimming pool in Bowie, Maryland, for sips of water. For her doctoral dissertation, she examined bat populations in Maryland, Washington, D.C., Virginia and Delaware to determine the habitats where they're most active. She discovered that suburban neighborhoods are the favored spots for some of Maryland's vulnerable bat species, especially during fall mating season. Now, she's using these findings as she serves as an expert witness on how highway projects or building construction, such as widening the American Legion Bridge and adding toll lanes to I-270, could destroy bat habitats. Browne explains why they're (mostly) not little Draculas, where they actually sleep, and why you do want them flittering around your backyard during the swampiest months. Myth: They're after your blood Out of the 1,400 bat species in the world, just three are vampire bats—and they don't live in Maryland. Instead, they prefer the warm, humid climates of Central and South America, and their preferred meal sources are large, domestic animals like cattle, horses and pigs. "They are super cool and very specialized," Browne said. With special grooves in their teeth, they create a small nick in the skin. Their saliva has an enzyme that allows blood to continue flowing instead of clotting. Research on these palm-sized bats could help lead to treatments for strokes and other human diseases, Browne said. Myth: They always live in caves During the warmer months, bats can roost in many areas: tucked into the loose bark of a tree, behind a leaf, hanging from the eaves of a church or curled up in the rafters of an abandoned barn. The 10 species that live in Maryland are tiny, weighing as little as a nickel and as much as a quarter, and are about the size of a hand with their wings extended. During the winter, half these species migrate south, and the others head into caves to hibernate. "Caves offer a great winter hibernating habitat," said Browne. With constant humidity levels and temperatures a degree or two above freezing, they offer a safe environment where bats can huddle together or on their own, out of the elements and protected from predators. Myth: They can't see While bats are famous for using echolocation, they do have some vision. Echolocation requires a lot of energy, so they use it judiciously. "They're basically screaming at an extremely high frequency," she said. They have to think, "Is it worth my effort, the calories, to explore this thing that may be food, or to decide if this is an obstacle or predator I need to avoid, or a potential mate I should explore?" They can use their limited vision to supplement echolocation as they travel familiar paths, she said, much like a human can commute home without stopping to carefully read every road sign. 
Myth: They're just scary sky rats Bats get a bad rep for carrying rabies (in fact, less than 1% do), but they play an important role in the ecosystem. In Maryland, they're one of the primary predators of pests like mosquitoes, stink bugs and crop-destroying beetles. Just one little brown bat, a species common in the state, can eat 1,200 insects in an hour. In other parts of the world, they pollinate plants like bees and feed on fruit and disperse seeds like birds to help rejuvenate entire forests. Bats are under threat themselves, not only from human-induced habitat loss, but from White-nose Syndrome, a fungal disease that has killed nearly 7 million bats across the country since 2006. Wind turbines also cause mortalities in migrating bats and birds. "We don't need to fear them. They are not trying to harm us," Browne said. "They're just going about their lives and raising families, just like we are." Provided by University of Maryland
Environmental Science
A not so silver lining: Microplastics found in clouds could affect the weather From the depths of the seas to snow on mountains and even the air above cities, microplastics are turning up increasingly often. Now, in Environmental Science & Technology Letters, researchers have analyzed microplastics in clouds above mountains. They suggest that these tiny particles could play a role in cloud formation and, in turn, affect weather. Microplastics—plastic fragments smaller than five millimeters—originate from a myriad of items used daily, such as clothing, packaging and car tires. As research in the field evolves, scientists are not only detecting microplastics in the atmosphere but also investigating how they may play a role in cloud formation. For example, a group of researchers recently detected plastic granules, which had water-attracting surfaces, in Japanese mountaintop clouds. So, to learn more, Yan Wang and colleagues set out to look for microplastics in mountain clouds, used computer models to figure out how they could have gotten there, and tested how the particles could have impacted—and been impacted by—the clouds. Wang and the team first collected 28 samples of liquid from clouds at the top of Mount Tai in eastern China. Then they analyzed the samples and found: - Low-altitude and denser clouds contained greater amounts of microplastics. - Particles were made of common polymers, including polyethylene terephthalate, polypropylene, polyethylene, polystyrene and polyamide. - The microplastics tended to be smaller than 100 micrometers in length, although some were as long as 1,500 micrometers. - Older, rougher particles had more lead, mercury and oxygen attached to their surfaces, which the researchers suggest could facilitate cloud development. To investigate where the plastic particles in the clouds originated, Wang and the team developed computer models that approximated how the particles traveled to Mount Tai. These models suggested that airflow from highly populated inland areas, rather than from over the ocean or other nearby mountains, served as the major source of the fragments. In laboratory experiments, the researchers demonstrated that microplastics exposed to cloud-like conditions—ultraviolet light and filtered cloud-sourced water—had smaller sizes and rougher surfaces than those exposed to pure water or air. Additionally, particles impacted by the cloud-like conditions had more lead, mercury and oxygen-containing groups. These results suggest that clouds modify microplastics in ways that could enable the particles to affect cloud formation and the fate of airborne metals. The researchers conclude that more work is needed to fully understand how microplastics affect clouds and the weather. More information: Characterization of Microplastics in Clouds over Eastern China, Environmental Science & Technology Letters (2023). DOI: 10.1021/acs.estlett.3c00729. pubs.acs.org/doi/abs/10.1021/acs.estlett.3c00729 Provided by American Chemical Society
Environmental Science
How climate change could wreak havoc on the extraordinary lifespans of bats The extraordinary lifespans of bats could be under threat from rising global temperatures, according to new research published in Proceedings of the Royal Society B: Biological Sciences. The study by researchers from University College Dublin and the University of Bristol found that fluctuations in the weather disrupted the hibernation cycle of a group of wild greater horseshoe bats, affecting the molecular mechanism thought to give bat species their long lives. Telomeres are pieces of DNA that act as a protective structure at the end of chromosomes. Each time a cell divides, they shorten. And it is this shortening that is associated with aging and aging-related diseases. Data from the new study showed that bats that aroused from hibernation more frequently due to warmer conditions during the 2019/2020 hibernation period had significantly shorter telomeres compared to those recorded in previous, colder, winters. "We were surprised and then worried at this finding, given that the predicted rise in global temperatures could limit the beneficial effects of hibernation in our wild bats," said UCD Professor Emma Teeling. The lead author of the study, Dr. Megan Power, from the UCD School of Biology and Environmental Science, worked with a population of wild greater horseshoe bats (Rhinolophus ferrumequinum) in the U.K., which have been monitored since 1959 by Dr. Roger Ransome, who now holds the record for the longest mammal field study by an individual. Carrying out the very first longitudinal study of telomeres in hibernating bats, she tracked more than 200 individuals across three winters to determine the beneficial effects of hibernation on telomeres. Her work showed hibernation acts like a form of rejuvenation, where the telomeres extend rather than shortening during the hibernation season. This is most likely due to the expression of the enzyme telomerase which allows telomeric DNA to replicate itself in bats without causing harm. In other mammals, including humans, the enzyme usually drives cancer when switched on in non-egg and sperm cells. "It is fascinating that telomeres can extend in length, and it will be interesting to further investigate the potential role of telomerase in this process," said Professor Gareth Jones, University of Bristol. Hibernation is different from sleeping, as the latter does not involve the same large drop in body temperature and metabolism. Dr. Power said the study highlights the serious potential consequences that changing climatic conditions could have for long-lived temperate bats. "We found that climate plays a huge role, showing how susceptible our native mammals can be to fluctuations in weather, with worrying implications given our forecasted climate changes. Species with long-life spans and a slow reproductive rate, like bats, are particularly vulnerable to environmental change. Therefore, it is important for us to understand how bats are affected by and cope with rapid climate change." More information: Megan L. Power et al, Hibernation telomere dynamics in a shifting climate: insights from wild greater horseshoe bats, Proceedings of the Royal Society B: Biological Sciences (2023). DOI: 10.1098/rspb.2023.1589 Journal information: Proceedings of the Royal Society B Provided by University College Dublin
Environmental Science
Eight years ago, the data was sound but only suggestive, the evidence strong but circumstantial. Now, the University of Nebraska–Lincoln’s Karrie Weber and colleagues have experimentally confirmed that nitrate, a compound common in fertilizers and animal waste, can help transport naturally occurring uranium from the underground to groundwater. Their new research backs a 2015 Weber-led study showing that aquifers contaminated with high levels of nitrate — including the High Plains Aquifer residing beneath Nebraska — also contain uranium concentrations far exceeding a threshold set by the Environmental Protection Agency. Uranium concentrations above that EPA threshold have been shown to cause kidney damage in humans, especially when regularly consumed via drinking water. “Most Nebraskans do rely on groundwater as drinking water,” said Weber, associate professor in the School of Biological Sciences and Department of Earth and Atmospheric Sciences. “In Lincoln, we rely on it. A lot of rural communities, they’re relying on groundwater. “So when you have high concentrations (of uranium), that becomes a potential concern.” Research had already established that dissolved inorganic carbon could chemically detach traces of natural, non-radioactive uranium from underground sediment, ultimately priming it for transport into groundwater. But the 2015 study, which found that certain areas of the High Plains Aquifer contained uranium levels up to 89 times the EPA threshold, had convinced Weber that nitrate was contributing, too. So, with the help of 12 colleagues, Weber set out to test the hypothesis. To do it, the team extracted two cylindrical cores of sediment — each roughly 2 inches wide and running 60 feet deep — from an aquifer site near Alda, Nebraska. That site not only contains natural traces of uranium, the researchers knew, but also allows groundwater to flow east into the adjacent Platte River. Their goal? Recreate that flow in the samples of sediment, then determine whether adding some nitrate to the water would increase the amount of uranium that got carried away with it. “One of the things we wanted to make sure of was that we did not alter the state of the uranium or the sediments or the (microbial) community when we collected the samples,” Weber said. “We did everything we could to preserve natural conditions.” “Everything” meant immediately capping and wax-sealing the extracted cores, sliding them into airtight tubes, flushing those tubes with argon gas to dispel any oxygen, and putting them on ice. Back at the lab, Weber and her colleagues would eventually remove 15-inch segments from each of the two cores. Those segments consisted of sand and also silt that contained relatively high levels of uranium. Later, the team would fill multiple columns with that silt before pumping simulated groundwater through them at roughly the same rate it would have traveled underground. In some cases, that water contained nothing extra. In others, the researchers added nitrate. And in still other cases, they added both nitrate and an inhibitor designed to halt the biochemical activity of microorganisms living in the sediment. The water containing nitrate, but lacking the microbial inhibitor, managed to carry away roughly 85% of the uranium — compared with just 55% when the water lacked nitrate and 60% when it contained nitrate but also the inhibitor. Those results implicated both the nitrate and the microbes in further mobilizing the uranium. 
They also supported the hypothesis that a series of biochemical events, kicked off by the microbes, was transforming the otherwise-solid uranium into a form that could be easily dissolved in water. First, bacteria living in the sediment donate electrons to the nitrate, catalyzing its transformation into a compound called nitrite. That nitrite then oxidizes — steals electrons from — the neighboring uranium, ultimately turning it from a solid mineral into an aqueous one ready to surf the trickle of water seeping through the silt. After analyzing DNA sequences present in its sediment samples, the team identified multiple microbial species capable of metabolizing nitrate to nitrite. Though that uranium-mobilizing biochemistry had been known to unfold in highly contaminated areas — uranium mines, sites where nuclear waste is processed — Weber said the new study is the first to establish that the same mobilization process also takes place in natural sediment. “When we first got this project funded, and we were thinking about this, it was as a primary contaminant leading to secondary contamination,” she said of the nitrate and uranium. “This research supports that, yes, that can happen.” Still, as Weber said, “Nitrate isn’t always a bad thing.” Both her previous research and some forthcoming studies suggest that nitrate mobilizes uranium only when the compound approaches its own EPA threshold of 10 parts per million. “If we reflect upon what we published prior, that data suggests there’s a tipping point. The important thing,” she said, “is not to have too much.” The team reported its findings in the journal Environmental Science & Technology. Weber authored the study with Nebraska’s Jeff Westrop, Pooja Yadav, Alicia Chan, Anthony Kohtz, Olivia Healy, Daniel Snow, P.J. Nolan and Donald Pan; Kate Campbell of the U.S. Geological Survey; Rajesh Singh, from India’s National Institute of Hydrology; along with Sharon Bone and John Bargar of the SLAC National Accelerator Laboratory. Journal Environmental Science & Technology
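To spell out the redox bookkeeping behind that two-step mechanism, the half-reactions can be sketched as below. This is a textbook-style simplification offered for illustration, with uraninite (UO2) standing in for the solid uranium and the uranyl ion for its dissolved form; the balanced overall reaction and the ultimate fate of the nitrite in the Alda sediments are not detailed in the study as reported here.

```latex
% Schematic half-reactions for the nitrate-driven uranium mobilization
% described above. Illustrative simplification, not taken from the paper.

% Step 1: sediment bacteria donate electrons, reducing nitrate to nitrite.
\[ \mathrm{NO_3^-} + 2\,\mathrm{H^+} + 2\,e^- \longrightarrow \mathrm{NO_2^-} + \mathrm{H_2O} \]

% Step 2: nitrite (and its further reduction products) then accepts electrons
% from solid U(IV), oxidizing it to soluble U(VI) uranyl that can travel with
% the groundwater.
\[ \mathrm{UO_2(s)} \longrightarrow \mathrm{UO_2^{2+}(aq)} + 2\,e^- \]
```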
Environmental Science
New water treatment approach helps to avoid harmful chemicals The water coming out of your faucet is safe to drink, but that doesn't mean it's completely clean. Chlorine has long been the standard for water treatment, but chlorinated water often contains trace levels of disinfection byproducts and unknown contaminants. Georgia Institute of Technology researchers developed the minus approach to handle these harmful byproducts. Instead of relying on traditional chemical addition (known as the plus approach), the minus approach avoids disinfectants, chemical coagulants, and advanced oxidation processes typical of water treatment processes. It uses a unique mix of filtration methods to remove byproducts and pathogens, enabling water treatment centers to use ultraviolet light and much smaller doses of chemical disinfectants to minimize future bacterial growth down the distribution system. "The minus approach is a groundbreaking philosophical concept in water treatment," said Yongsheng Chen, the Bonnie W. and Charles W. Moorman IV Professor in the School of Civil and Environmental Engineering. "Its primary objective is to achieve these outcomes while minimizing the reliance on chemical treatments, which can give rise to various issues in the main water treatment stream." Chen and his student Elliot Reid, the primary author, presented the minus approach in the paper, "The Minus Approach Can Redefine the Standard of Practice of Drinking Water Treatment," in the Environmental Science & Technology journal. The minus approach physically separates emerging contaminants and disinfection byproducts from the main water treatment process using these already proven processes: - Bank filtration withdraws water from naturally occurring or constructed banks like rivers or lakes. As the water travels through the layers of soil and gravel, it naturally filters out impurities, suspended particles, and certain microorganisms. - Biofiltration uses biological processes to treat water by passing it through filter beds made of sand, gravel, or activated carbon that can support the growth of beneficial microorganisms, which in turn can remove contaminants. - Adsorption occurs when an adsorbent material like activated carbon is used to trap contaminants. - Membrane filtration uses a semi-permeable membrane to separate particles and impurities from the main treatment process. The minus approach is intended to engage the water community in designing safer, more sustainable, and more intelligent systems. Because its technologies are already available and proven, the minus approach can be implemented immediately. It can also integrate with artificial intelligence (AI) to improve filtration's effectiveness. AI can aid process optimization, predictive maintenance, fault detection and diagnosis, energy optimization, and decision-support systems. AI models have also been able to reliably predict the origin of different types of pollution in source water, and models have also successfully detected pipeline damage and microbial contamination, allowing for quick and efficient maintenance. "This innovative philosophy seeks to revolutionize traditional water treatment practices by providing a more sustainable and environmentally friendly solution," Chen said. "By reducing the reliance on chemical treatments, the minus approach mitigates the potential risks associated with the use of such chemicals, promoting a safer water supply for both human consumption and environmental protection." 
More information: Elliot Reid et al, The Minus Approach Can Redefine the Standard of Practice of Drinking Water Treatment, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.2c09389 Journal information: Environmental Science & Technology Provided by Georgia Institute of Technology
Environmental Science
Scientists fear methane erupting from the burst Nord Stream pipelines into the Baltic Sea could be one of the worst natural gas leaks ever and pose significant climate risks. Neither of the two breached Nord Stream pipelines, which run between Russia and Germany, was operational, but both contained natural gas. This mostly consists of methane – a greenhouse gas that is the biggest cause of climate heating after carbon dioxide. The extent of the leaks is still unclear but rough estimates by scientists, based on the volume of gas reportedly in one of the pipelines, vary between 100,000 and 350,000 tonnes of methane. Jasmin Cooper, a research associate at Imperial College London's department of chemical engineering, said a “lot of uncertainty” surrounded the leak. “We know there are three explosions but we don't know if there are three holes in the sides of the pipe or how big the breaks are,” said Cooper. “It's difficult to know how much is reaching the surface. But it is potentially hundreds of thousands of tonnes of methane: quite a big volume being pumped into the atmosphere.” Nord Stream 2, which was intended to increase the flow of gas from Russia to Germany, reportedly contained 300m cubic metres of gas when Berlin halted the certification process shortly before Russia invaded Ukraine. That volume alone would translate to 200,000 tonnes of methane, Cooper said. If it all escaped, it would exceed the 100,000 tonnes of methane vented by the Aliso Canyon blowout, the biggest gas leak in US history, which happened in California in 2015. The Aliso Canyon leak had the warming equivalent of half a million cars. “It has the potential to be one of the biggest gas leaks,” said Cooper. “The climate risks from the methane leak are quite large. Methane is a potent greenhouse gas, 30 times stronger than CO2 over 100 years and more than 80 times stronger over 20 years.” Prof Grant Allen, an expert in Earth and environmental science at Manchester University, said it was unlikely that natural processes, which convert small amounts of methane into carbon dioxide, would be able to absorb much of the leak. Allen said: “This is a colossal amount of gas, in really large bubbles. If you have small sources of gas, nature will help out by digesting the gas. In the Deepwater Horizon spill, there was a lot of attenuation of methane by bacteria.” “My scientific experience is telling me that – with a big blow-up like this – methane will not have time to be attenuated by nature. So a significant proportion will be vented as methane gas.” Unlike an oil spill, gas will not have as polluting an effect on the marine environment, Allen said. “But in terms of greenhouse gases, it's a reckless and unnecessary emission to the atmosphere.” Germany's environment agency said there were no containment mechanisms on the pipeline, so the entire contents were likely to escape. The Danish Energy Agency said on Wednesday that the pipelines contained 778m cubic metres of natural gas in total – the equivalent of 32% of Danish annual CO2 emissions. This is almost twice the volume initially estimated by scientists. This would significantly bump up estimates of methane leaked to the atmosphere, from 200,000 to more than 400,000 tonnes. More than half the gas had left the pipes and the remainder is expected to be gone by Sunday, the agency said. Jean-Francois Gauthier, vice-president of measurements at the commercial methane-measuring satellite firm GHGSat, said evaluating the total gas volume emitted was “challenging”. 
“There is little information on the size of the breach and whether it is still going on,” Gauthier said. “If it's a significant enough breach, it would empty itself.” “It's safe to say that we're talking about hundreds of thousands of tonnes of methane. In terms of leaks, it's certainly a very serious one. The catastrophic instantaneous nature of this one – I've certainly never seen anything like that before.” In terms of the climate impact, 250,000 tonnes of methane was equivalent to the impact of 1.3m cars driven on the road for a year, Gauthier said.
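As a rough, back-of-the-envelope check of the figures quoted above, the sketch below converts a reported pipeline volume into tonnes of methane and a CO2-equivalent. The density and warming-potential values are assumptions chosen to be consistent with the article (methane density of roughly 0.68 kg per cubic metre at surface conditions; 20- and 100-year warming potentials of roughly 80 and 30), not numbers taken from the researchers themselves.

```python
# Back-of-envelope check of the volumes and equivalences quoted in the article.
# All constants below are assumptions for illustration, not the researchers' values.

METHANE_DENSITY_KG_PER_M3 = 0.68   # assumed density at surface conditions; varies with temperature and pressure
GWP_100 = 30                        # approximate 100-year global warming potential relative to CO2
GWP_20 = 80                         # approximate 20-year global warming potential relative to CO2


def methane_tonnes(volume_m3: float, methane_fraction: float = 1.0) -> float:
    """Convert a gas volume (cubic metres) to tonnes of methane, assuming it is mostly methane."""
    return volume_m3 * methane_fraction * METHANE_DENSITY_KG_PER_M3 / 1000.0


def co2_equivalent_tonnes(ch4_tonnes: float, gwp: float) -> float:
    """Convert tonnes of methane to tonnes of CO2-equivalent for a given warming potential."""
    return ch4_tonnes * gwp


if __name__ == "__main__":
    nord_stream_2_volume_m3 = 300e6  # cubic metres reportedly in Nord Stream 2 at shutdown
    ch4 = methane_tonnes(nord_stream_2_volume_m3)
    print(f"~{ch4:,.0f} t CH4")  # roughly 204,000 t, close to the 200,000 t cited
    print(f"~{co2_equivalent_tonnes(ch4, GWP_20):,.0f} t CO2e over 20 years")
```

With those assumptions, 300m cubic metres works out to roughly 200,000 tonnes of methane, which matches Cooper's estimate; the larger Danish Energy Agency volume scales the result up accordingly.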
Environmental Science
Study shows social media content opens new frontiers for sustainability science researchers With more than half of the world's population active on social media networks, user-generated data has proved to be fertile ground for social scientists who study attitudes about the environment and sustainability. But several challenges threaten the success of what's known as social media data science. The primary concern, according to a new study from an international research team, is limited access to data resulting from restrictive terms of service, shutdown of platforms, data manipulation, censorship and regulations. The study, published in the journal One Earth, is the first known to evaluate the scope of environmental social media research and its potential to transform sustainability science. The 17-member research team analyzed 415 studies, published between 2011 and 2021, that examined social media content related to the environment. "Ideas about climate change and our environment are increasingly coming from social media," said Derek Van Berkel, assistant professor at the University of Michigan's School for Environment and Sustainability and one of the study's three lead authors. "Online communities like Reddit, or simply news stories shared by your friends on Facebook, have become digital landscapes where many ideas are shaped and formed." Understanding how those ideas are shaped aids science communicators in honing environmental messaging and prompts them to fill gaps where information is lacking or misrepresented. Despite the potential public benefits of social media data science, the authors argue, current business models of social media platforms have generated a vicious cycle in which user data is treated as a private asset that can be purchased or sold for profit. This has raised public concern and mistrust of social media companies, leading to a greater demand for more regulation. The study supports the idea of replacing this vicious cycle with a "virtuous cycle." "A virtuous cycle requires the collaboration of SM companies, researchers, and the public," said co-lead study author Johannes Langemeyer from the Institute of Environmental Science and Technology at the Autonomous University of Barcelona. "For their part, sustainability researchers can foster more trust and cooperation by embracing high ethical standards. Inclusivity, transparency, privacy protection, and responsible use of the data are key requirements—and will lead to an improved standardization of research practices moving forward," Langemeyer said. A promising example of cooperation from a social media platform was initiated in January 2021 when Twitter set a new standard for broader access to researchers by introducing a new academic research product track, which for the first time allowed free full-archive searches for approved researchers. Such an approach could have served as a model for wider open access across social media platforms. But confirming the fears of researchers, Twitter recently announced that as of Feb. 9, 2023, the company will no longer support free access. "SM data has the potential to usher in a revolution in the current practices of sustainability research, especially in the social sciences, with an impact on par with that of Earth observation in the environmental sciences," said co-lead study author Andrea Ghermandi from the Department of Natural Resources and Environmental Management at the University of Haifa in Israel. 
The study concludes that social media data assessments can support the 2015 U.N. Sustainable Development Goals that serve as a universal call to action to end poverty, protect the planet, and ensure that by 2030 all people enjoy peace and prosperity. "Achieving the U.N. Sustainable Development Goals will require large-scale, multi-country efforts as well as granular data for tailoring sustainability efforts," the study authors wrote. "The shared values and goals of working for a sustainable future may provide common ground for the cooperation needed to fully realize the contribution that SM data offers." More information: Andrea Ghermandi et al, Social media data for environmental sustainability: A critical review of opportunities, threats, and ethical use, One Earth (2023). DOI: 10.1016/j.oneear.2023.02.008 Journal information: One Earth Provided by University of Michigan
Environmental Science
Tens of thousands of sites across the U.S. may be polluted with toxic so-called “forever chemicals,” a team of scientists argued in a study released on Wednesday. The researchers said that in the absence of information proving otherwise, contamination from per- and polyfluoroalkyl substances (PFAS) should be presumed at 57,412 locations spread across all 50 states and the District of Columbia. The areas in question include sites that discharge firefighting foam used on jet fuel fires, certain industrial facilities and places where waste contains these cancer-linked chemicals, according to the study, published in Environmental Science & Technology Letters. “PFAS contamination at these locations is very likely,” senior author Alissa Cordner, co-director of the PFAS Project Lab at Northeastern University, said in a statement. Known for their propensity to linger in the human body and in the environment, PFAS are linked to many illnesses, including testicular cancer, thyroid disease and kidney cancer. These so-called forever chemicals are notorious for their presence in aqueous film forming foam (AFFF) — the product used to fight jet fuel fires at military bases and airports. However, they are also found in industrial discharge and in a variety of household products. Because testing for these substances is very sporadic, researchers have become aware of “many data gaps in identifying known sites of PFAS contamination,” according to Cordner, who is also an associate professor of sociology at Whitman College. Cordner and her colleagues — who come from a variety of academic and government institutions — developed a “presumptive contamination” model that aggregates high-quality, publicly available data into a single, accessible map. Their model, the authors explained, could be a critical tool for governments, industries and communities looking to identify potential exposure sources. Among the 57,412 presumptive PFAS contamination sites, the authors identified 49,145 industrial facilities, 4,255 wastewater treatment plants, 3,493 current or former military sites and 519 major airports. “While it sounds scary that there are over 57,000 presumptive contamination sites, this is almost certainly a large underestimation,” co-author Phil Brown, director of Northeastern University's Social Science Environmental Health Research Institute, said in a statement. “The scope of PFAS contamination is immense, and communities impacted by this contamination deserve swift regulatory action that stops ongoing and future uses of PFAS while cleaning up already existing contamination,” Brown added. The authors said they validated 503 such sites using the PFAS Project Lab's Contamination Site Tracker to evaluate how their model stands up to known contamination areas. Of these sites, the authors found 72 percent either appeared directly on the presumptive contamination map or would have appeared there had sufficient nationwide data been available. The 28 percent of sites not captured by the model included facilities that don't tend to be associated with such contamination — such as restaurants and senior centers — as well as locations with relatively low levels of PFAS. Other places within that 28 percent were businesses like car washes, sewage companies and textile cleaners, which are not identified by the North American Industry Classification System, according to the study. “PFAS testing is expensive and resource intensive,” co-author Kimberly Garrett, a post-doctoral researcher at Northeastern University, said in a statement. 
“We have developed a standardized methodology that can help identify and prioritize locations for monitoring, regulation, and remediation,” Garrett added. The presumptive contamination sites can be viewed on an interactive map published by the PFAS Project Lab. “Not only do we all have PFAS in our bodies, but we also know that PFAS affects almost every organ system,” co-author Linda Birnbaum, the former director of the National Institute of Environmental Health Sciences and the National Toxicology Program, said in a statement. “It is essential that we understand where PFAS are in our communities so that we can prevent exposures,” Birnbaum added.
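The “presumptive contamination” logic lends itself to a simple sketch. The snippet below is a minimal illustration, not the authors' code, and the category names and field names are hypothetical: it flags facilities belonging to categories presumed to involve PFAS, then measures what fraction of independently documented contamination sites the flags capture, mirroring the 72 percent validation figure described above.

```python
# Minimal sketch of the presumptive-contamination idea; categories and field
# names are illustrative assumptions, not the study's actual data schema.

PRESUMPTIVE_CATEGORIES = {
    "industrial_facility",
    "wastewater_treatment_plant",
    "military_site",
    "major_airport",
}


def flag_presumptive(facilities):
    """Keep facilities whose category is presumed to involve PFAS use or release."""
    return [f for f in facilities if f["category"] in PRESUMPTIVE_CATEGORIES]


def capture_rate(flagged, known_sites):
    """Fraction of independently documented sites that coincide with a flagged facility."""
    flagged_ids = {f["id"] for f in flagged}
    if not known_sites:
        return 0.0
    hits = sum(1 for site in known_sites if site["facility_id"] in flagged_ids)
    return hits / len(known_sites)


if __name__ == "__main__":
    facilities = [
        {"id": 1, "category": "major_airport"},
        {"id": 2, "category": "restaurant"},  # not a presumptive category
    ]
    known = [{"facility_id": 1}, {"facility_id": 2}]
    flagged = flag_presumptive(facilities)
    print(f"capture rate: {capture_rate(flagged, known):.0%}")  # 50% in this toy example
```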
Environmental Science
New research this week is the latest to show that microplastics have polluted just about everywhere on Earth. Scientists discovered plastic particles in cloud samples collected from atop a mountain in Eastern China. The team also found evidence from lab experiments that these microplastics could potentially affect cloud formation and the weather, though more data will be needed to understand exactly how. The study was led by scientists from Shandong University. Among other things, they were inspired by a recent study published in September—one where scientists found microplastics in samples of mist collected at the peaks of Mount Fuji and Mount Oyama, both in Japan. The team decided to look for and analyze microplastics in the clouds surrounding the top of Mount Tai, a well-visited and culturally important mountain that's close to densely populated areas of Eastern China. They studied 28 liquid samples collected during summer 2021. The team found microplastics in all but four of the samples. These samples contained common plastics such as polyethylene terephthalate, polypropylene, and polyethylene. Samples collected from low-altitude and denser clouds also tended to have greater amounts of microplastics. The concentration of plastics found in the samples overall was substantially lower than concentrations measured in the atmosphere over urban areas, the researchers noted, but much higher than those found in nearby rainfall, remote polar regions, and the cloud water previously collected from Mount Fuji and Mount Oyama in Japan. “This finding provides significant evidence of the presence of abundant [microplastics] in clouds,” the researchers wrote in their paper, published Wednesday in Environmental Science & Technology Letters. The team additionally conducted a deeper analysis of the microplastics they found, as well as modeling and lab experiments. The older plastic particles tended to be rougher and smaller, for instance, and contained more lead, mercury and oxygen on average than fresher plastics. In the lab, they found that exposing plastics to cloud-like conditions—namely, ultraviolet radiation and filtered water—could cause these same sorts of changes. In other words, they found evidence that clouds can change the makeup of microplastics once they get there, possibly in ways that could then affect cloud formation and subsequently the weather. There's still a lot that we don't know about the specific effects of microplastics, both on the environment and our health, but what we have learned so far hasn't been comforting. Studies have identified over a hundred chemicals in plastic that could potentially harm us or other animals, including those that disrupt the regulation of important hormones. Chemicals from plastic pollution can also leak into soil and freshwater, causing long-term negative consequences for the surrounding ecosystem. The study authors say that more research will be needed to figure out how microplastics interact with clouds and the potential impacts of these interactions on cloud formation and the presence of toxic metals in the atmosphere. Many scientists and environmental and public health organizations have already begun to call for widespread reductions in plastic pollution based on the possible dangers we know about.
Environmental Science
For much of the last century, many cities across the United States and Canada burned their trash and waste in municipal incinerators. Most of these facilities were closed by the early 1970s due to concerns about the pollution they added to the air, but a new Duke University study finds that their legacy of contamination could live on in urban soils. "We found that city parks and playgrounds built on the site of a former waste incinerator can still have greatly elevated levels of lead in their surface soils many decades after the incinerator was closed," said Daniel D. Richter, professor of soils at Duke's Nicholas School of the Environment, who co-led the research. Exposure to lead in soil has been linked to potential long-term health problems, particularly in children. These include possible damage to the brain and nervous system, slowed growth and development, and learning and behavioral problems. To conduct their study, Richter and his students collected and analyzed surface soil samples from three city parks in Durham, N.C. that are located on former incinerator sites closed in the early 1940s. Samples collected from a two-acre section of East Durham Park contained lead levels over 2000 parts per million, more than five times higher than the current U.S. Environmental Protection Agency (EPA) standard for safe soils in children's play areas. Samples collected from Walltown Park mostly contained low lead levels, "but about 10% were concerning and a few were very high," Richter noted. Samples collected from East End Park all contained levels of soil lead below the current EPA threshold for children's safety "and presented no cause for concern," he said. The sharp differences in lead levels between the three parks underscores the need for increased monitoring, he stressed. "Determining where contamination risks persist, and why contamination is decreasing at different rates in different locations, is essential for identifying hotspots and mitigating risks," Richter said. "Many cities should mobilize resources to do widespread sampling and monitoring, and create soil maps and, more specifically, soil lead maps." "That's where we really need to go," Richter said. "Not just in Durham but in hundreds of other cities where parks, as well as churches, schools and homes, may have been built on former waste incinerator and ash disposal sites." By analyzing historic surveys of municipal waste management, the Duke team found that about half of all cities surveyed in the U.S. and Canada incinerated solid waste between the 1930s and 1950s. "These incinerators burned all kinds of garbage and trash, including paint, piping, food cans and other products that contained lead back then," Richter said. The leftover ash, in which lead and other contaminants were concentrated, was sometimes covered with a too-thin layer of topsoil or even spread around parks, new construction sites or other urban spaces as a soil amendment. "Historical surveys indicate a lack of appreciation for the health and environmental hazards of city-waste incinerator ash. Back then, they didn't know what we do now," he said. New technology could help make sampling and monitoring more feasible at the thousands of sites nationwide that may be contaminated, he added. Using a portable x-ray fluorescence instrument, his lab is now able to do a preliminary analysis on a soil sample for multiple metals, including lead, in just 20 seconds. 
Making use of historical records about waste incineration and ash disposal could also speed efforts to identify hotspots. In their paper, Richter and his students provide histories gleaned from archived public works records, old street maps and newspaper clippings showing where ash was burned and disposed of in six sample cities: Los Angeles; New York City; Baltimore; Spokane, Wash.; Jacksonville, Fla.; and Charleston, S.C. "This is something you could do for many cities to guide monitoring efforts," Richter said. "There's been a lot of interest in mitigating lead exposure in cities, but most until now has been focused on reducing risks within the home. Our study reminds us that risks exist in the outdoor environment, too," he said. Richter and his students published their peer-reviewed findings Sept. 11 in Environmental Science & Technology Letters. His co-authors on the new paper were Enikoe Bihari, a 2023 Master of Environmental Management graduate of the Nicholas School who conducted much of the research as part of her Master's Project, and Garrett Grewal, a senior at Duke majoring in Earth and Climate Sciences. Funding came from Duke University and the National Institute of Environmental Health Sciences (P42ES010356).
Environmental Science
How a drought affects trees depends on what's been holding them back Droughts can be good for trees; certain trees, that is. Contrary to expectation, sometimes a record-breaking drought can increase tree growth. Why and where this happens is the subject of a new paper in Global Change Biology. A team of scientists led by Joan Dudney at UC Santa Barbara examined the drought response of endangered whitebark pine over the past century. They found that in cold, harsh environments—often at high altitudes and latitudes—drought can actually benefit the trees by extending the growing season. This research provides insights into where the threats from extreme drought will be greatest, and how different species and ecosystems will respond to climate change. Many factors can constrain tree growth, including temperature, sunlight and the availability of water and nutrients. The threshold between energy-limited and water-limited systems turns out to be particularly significant. Trees that try to grow in excessively cold temperatures—often energy-limited systems—can freeze to death. On the other hand, too little water can also kill a tree, particularly in water-limited systems. Over time, many tree species have adapted to these extreme conditions, and their responses are broadly similar. They often reduce growth-related activities, including photosynthesis and nutrient uptake, to protect themselves until the weather improves. "Interestingly, the transition from energy- to water-limited growth can produce highly unexpected responses," explained Dudney, an assistant professor in the Bren School of Environmental Science & Management and the Environmental Studies Program. "In cold, energy-limited environments, extreme drought can actually increase growth and productivity, even in California." Dudney and her colleagues extracted 800 tree cores from whitebark pine across the Sierra Nevada, comparing the tree rings to historical records of climate conditions. This climate data spanned 1900 to 2018, and included three extreme droughts: 1959–61, 1976–77, and 2012–15. They recorded where tree growth and temperature showed a positive relationship, and where the relationship was negative. The authors found a pronounced shift in growth during times of drought when the average maximum temperature was roughly 8.4° Celsius (47.1° Fahrenheit) between October and May. Above this threshold, extreme drought reduced growth and photosynthesis. Below this temperature, trees grew more in response to drought. "It's basically 'How long is the growing season?'" Dudney said. Colder winters and higher snowpack often lead to shorter growing seasons that constrain tree growth. Even during an extreme drought, many of the trees growing in these extreme environments did not experience high water stress. This surprised the team of scientists, many of whom had observed and measured the unprecedented tree mortality that occurred at slightly lower elevations in the Sierra Nevada. Dudney was curious about whether drought impacts growth in just the main trunk, or the whole tree. Without more data, the trends they saw could be a result of disparate processes all responding to the drought differently, she explained. Fortunately, whitebark pine retains its needles for roughly eight years. This provided additional data that could address this question. The researchers shifted their attention from dendrology to chemistry. Atoms of the same element can have different weights, or isotopes, thanks to the number of neutrons they contain. 
Several aspects of a plant's metabolism can influence the relative abundance of (heavy) carbon-13 and (light) carbon-12 in tissues such as their leaves and needles. These changes provide a rough guide to the amount of water stress a tree might have experienced during drought. This was a boon for the researchers, because isotopic data from the pine needles spanned drought and non-drought years. Analyzing needle growth, carbon and nitrogen isotopes revealed that the whole tree was affected by the threshold between water-limited and energy-limited systems. Trunk growth, needle growth, photosynthesis and nutrient cycling responded in opposite directions to drought above and below the threshold between energy- and water-limited systems. The future of whitebark pine is highly uncertain. The species—recently listed as threatened under the Endangered Species Act—faces many threats, including disease, pine beetle infestation and impacts from altered fire regimes. It's clear from this research that drought and warming will likely exacerbate these threats in water-limited regions, but warming may be beneficial for growth in energy-limited environments. "This research can help develop more targeted conservation strategies," said Dudney, "to help restore this historically widespread tree species." Indeed, the pine's range encompasses a diverse region, stretching from California to British Columbia, and east to Wyoming. The findings also have implications more broadly. Approximately 21% of forests are considered energy-limited, and an even higher percentage can be classified as water-limited. So transitions between these two climatic regimes likely occur around the globe. What's more, the transition seems to have an effect on nitrogen cycling. Trees in water-limited environments appeared to rely less on symbiotic fungi for nitrogen, which is critical for tree growth in harsh, energy-limited environments. "Droughts are leading to widespread tree mortality across the globe," Dudney said, "which can accelerate global warming." Deciphering the many ways trees respond to drought will help us better predict where ecosystems are vulnerable to climate change and how to develop more targeted strategies to protect our forests. More information: Joan Dudney et al, The energy–water limitation threshold explains divergent drought responses in tree growth, needle length, and stable isotope ratios, Global Change Biology (2023). DOI: 10.1111/gcb.16740 Journal information: Global Change Biology Provided by University of California - Santa Barbara
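The threshold behaviour described above can be summarised in a small sketch. This is an illustration only, using the roughly 8.4 C October-May mean maximum temperature reported for whitebark pine; the function name and the binary classification are a simplification of the study's continuous growth responses, not the researchers' own code.

```python
# Illustrative classifier based on the ~8.4 C (47.1 F) threshold reported for
# whitebark pine; a simplification of the study's continuous growth responses.

THRESHOLD_C = 8.4  # mean maximum temperature, October through May


def drought_growth_response(oct_may_mean_max_c: float) -> str:
    """Direction of the reported growth response to extreme drought."""
    if oct_may_mean_max_c < THRESHOLD_C:
        return "energy-limited site: drought tended to increase growth"
    return "water-limited site: drought tended to reduce growth and photosynthesis"


if __name__ == "__main__":
    print(drought_growth_response(6.0))   # cold, high-elevation site
    print(drought_growth_response(12.0))  # warmer, lower-elevation site
```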
Environmental Science
Bottom sediments from five North Carolina lakes near coal-fired power plants show coal ash contamination that likely entered the lakes by three different routes. (Avner Vengosh) DURHAM, N.C. -- An analysis of sediments from five North Carolina lakes near coal-burning power plants has found that coal ash pollution of surface waters has been more persistent and widespread than was previously known. The findings, from scientists at Duke University and Appalachian State University, show that large quantities of coal ash have been transferred and deposited in lake sediments since the beginning of coal operations in North Carolina. “The bottom sediments of a lake represent a complete history of what has fallen into the lake water and settled to the bottom,” said Avner Vengosh, a Duke University Distinguished Professor of Environmental Quality at the Nicholas School of the Environment. “Using our age-dating methods, we were able to go back in time, in some cases even before the coal plant was built, and reconstruct the history of the lakes.” Coal ash is the residual material from burning coal to generate electricity, and is known to contain hazardous metals including lead, chromium, cadmium, mercury, arsenic, selenium and molybdenum, many of which have been tied to human cancers and other health effects. The contaminants are not locked into the lake sediments, Vengosh said. A chemical analysis of the pore water within the lake sediments indicated the metals leached out from the buried coal ash and could enter the aquatic food chain. The study appears Oct. 3 in the journal Environmental Science & Technology. “These are recreational lakes,” said Zhen Wang, a PhD student at Duke's Nicholas School of the Environment and the lead author of the study. “Some of them, like Hyco Lake, were originally built for the coal plant, but over the years, it has become very desirable real estate where people build their dream homes. It looks very pristine and beautiful, but if you dig in, you find piles of toxic coal ash.” The five lakes in the study were created for nearby coal plants: Hyco Lake and Mayo Lake, north of Durham in Person County; Belews Lake, northwest of Greensboro in Rockingham, Forsyth and Stokes counties; Mountain Island Lake, northwest of Charlotte in Mecklenburg County; and Lake Sutton, northwest of Wilmington in Brunswick County. For comparison, the researchers also sampled Lake Waccamaw in Columbus County, west of Wilmington, a natural lake that was dammed in 1926 so it wouldn't dry out during droughts. “By looking at the microscope we were able to identify the different types of coal ash that were deposited over time in the lakes,” said Ellen Cowan, a professor of geology at Appalachian State University who was a co-author of the study. “At several of the sites, it appears that coal ash was initially just dumped into the nearby lake,” Cowan said. “Over time, when the Clean Air Act was enforced and scrubbers were added to the coal plant smokestacks to catch fine particulates, we see changes in the coal ash with higher proportions of small particles.” Yet, the tiny particles of coal ash contain the highest concentrations of toxic elements, which made contamination worse for the lakes, Vengosh said. 
“The toxicity of the coal ash actually becomes worse because those small particles contain higher concentrations of the trace elements.” The study authors suggest the coal ash could reach lakes by three possible routes:  Atmospheric emissions of coal ash, particularly before the installation of the scrubbers, settled in nearby lands and was washed back into the lake by its watershed; climate events like tropical storms and hurricanes flooded and flushed the nearby coal ash impoundments to overflow into the nearby lakes; and ordinary flows of effluents from the coal ash ponds reached the lake as part of their routine operation. “While previously we thought that lakes and groundwater are being contaminated by leaking or effluents discharge from coal ash ponds, the new findings indicate that we have underestimated the environmental impact of coal ash,” Vengosh said. “We thought that the majority of the coal ash is restricted to coal ash ponds and landfills. Now we see it’s already in the open environment.” The study authors warn that this is a much larger problem and given climate change it will only grow worse. “We did a very detailed examination of five lakes, but there are numerous lakes or open water reservoirs next to coal plants not only North Carolina, but all over the country,” Vengosh said. “The phenomenon that we discovered probably applies to many other sites across the US and all of them are going to be vulnerable to more extreme weather events and flooding that we know is coming from global warming.” This research was supported by the National Science Foundation (EAR-1932649, EAR-1932087). CITATION: “Legacy of Coal Combustion: Widespread Contamination of Lake Sediments and Implications for Chronic Risks to Aquatic Ecosystems,” Zhen Wang, Ellen Cowan, Keith Seramur, Gary Dwyer, Jessie Wilson, Randall Karcher, Stefanie Brachfeld, Avner Vengosh. Environmental Science & Technology, Oct. 3, 2022. DOI: 10.1021/acs.est.2c04717
Environmental Science
Plastics entering the world's oceans have surged by an unprecedented amount since 2005 and could nearly triple by 2040 if no further action is taken, according to research published on Wednesday. An estimated 171 trillion plastic particles were afloat in the oceans by 2019, according to peer-reviewed research led by the 5 Gyres Institute, a U.S. organization that campaigns to reduce plastic pollution. Marine plastic pollution could rise 2.6-fold by 2040 if legally binding global policies are not introduced, it predicted. The study looked at surface-level plastic pollution data from 11,777 ocean stations in six major marine regions covering the period from 1979 to 2019. "We've found an alarming trend of exponential growth in microplastics in the global ocean since the millennium," Marcus Eriksen, co-founder of the 5 Gyres Institute, said in a statement. "We need a strong legally binding U.N. global treaty on plastic pollution that stops the problem at the source," he added. Microplastics are particularly hazardous to the oceans, not only contaminating water but also damaging the internal organs of marine animals, which mistake plastic for food. Experts said the study showed that the level of marine plastic pollution in the oceans has been underestimated. "The numbers in this new research are staggeringly phenomenal and almost beyond comprehension," said Paul Harvey, a scientist and plastics expert with Environmental Science Solutions, an Australian consultancy focused on pollution reduction. The United Nations kicked off negotiations on an agreement to tackle plastic pollution in Uruguay in November, with the aim of drawing up a legally binding treaty by the end of next year. Environmental group Greenpeace said that without a strong global treaty, plastic production could double within the next 10 to 15 years, and triple by 2050. A separate international treaty was agreed on Sunday to help protect biodiversity in the world's high seas.
Environmental Science
FOR IMMEDIATE RELEASE September 29, 2022 Contact: Jillian McKoy, jpmckoy@bu.edu Michael Saunders, msaunder@bu.edu During the first year of the COVID-19 pandemic, global road travel and commercial flight activity decreased by 50 percent and 60 percent, respectively, compared to pre-pandemic levels. During the lockdowns that cities imposed in the initial months of COVID, flight activity in particular was reduced to a near standstill, decreasing by 96 percent—nearly triple the percentage of flight reductions that followed the 9/11 attacks. This unexpected and widespread halt in travel provided a rare opportunity for researchers to explore the impact of these mobility changes on air pollution, specifically ultrafine particles. Now, a new study by Boston University School of Public Health (BUSPH) has found that ultrafine particle concentration dropped by nearly 50 percent due to reduced aviation and road activity during the first few months of the pandemic. Published in the journal Environmental Science & Technology Letters, the study analyzed measurements of ultrafine particles, referred to as particle number concentration (PNC), that were collected before and during the first year of COVID at a rooftop site near Boston's Logan International Airport. The findings revealed that during the state-of-emergency period from April-June 2020, average PNC was 48 percent lower than pre-pandemic levels, corresponding with flight activity that was 74 percent lower, highway traffic volume that was 51 percent lower, and local traffic volume that was 39 percent lower than pre-pandemic levels. In total, air quality measurements were collected from April 2020 through June 2021, and the researchers compared them with pre-pandemic measurements from 2017 and 2018. By June 2021, traffic volume returned to pre-COVID levels, while flight activity remained 44 percent lower than normal. Similar to traffic volume, average PNC levels also returned to normal by summer 2021—except when the site was downwind from Logan Airport. The findings build upon previous studies of PNC, which focused primarily on road traffic emissions over much shorter time periods. The new study is the first to distinguish between aviation and automobile-related contributions to PNC over several months, providing a clearer understanding of the unique emissions produced by each transportation source. Identifying and quantifying the emissions sources that contribute most to air pollution levels in a given area or region is crucial for air quality management, the researchers say. “Urban air pollution is a serious public health threat, and residing in neighborhoods near sources of ultrafine particles, such as major roadways, trains and airports, has been shown to have elevated adverse health impacts,” says study lead author Sean Mueller, a PhD student in the Department of Environmental Health at BUSPH. “Our work shows that while airplanes can contribute to some of the highest community-level exposures to ultrafine particles, these exposures occur predominantly during specific meteorological conditions. Following the differences in road and flight activity patterns before and during the pandemic allowed us to understand that PNC in the community typically follows road traffic patterns—i.e. 
high during typical commuting rush hour, and lower after midnight—but that the highest air pollution levels occur when the site is downwind of Logan Airport.” Ultrafine particles, which are roughly 800 times smaller than the width of a human hair, are particularly toxic pollutants that can cause inflammation in the lungs, brain, and other organs. They are also not regulated by the US Environmental Protection Agency. Approximately 40 million people in the US, including many in lower-income neighborhoods, live near major airports and bear the brunt of the health impacts that follow exposure to these pollutants. In the absence of federal oversight, there are still policy changes that can help reduce exposures, including increasing the adoption of sustainable aviation fuel technology, such as low-sulfur fuel and electric engines, says study senior author Dr. Kevin Lane, assistant professor of environmental health at BUSPH. “The EPA currently considers there to be insufficient health evidence at this time to promulgate an ultrafine particle air quality standard, so more research is needed to support regulation development,” Lane says. “While waiting for federal action and the development and integration of new technology to reduce exposure to air pollution, action can be taken at the local level by continuing to bring near-airport communities, researchers and airport administrators together to explore mechanisms to reduce community exposure, including integration of in-home air filtration such as HEPA filters.” About Boston University School of Public Health Founded in 1976, Boston University School of Public Health is one of the top five ranked private schools of public health in the world. It offers master's- and doctoral-level education in public health. The faculty in six departments conduct policy-changing public health research around the world, with the mission of improving the health of populations—especially the disadvantaged, underserved, and vulnerable—locally and globally. Journal: Environmental Science & Technology. Method of Research: Observational study. Article Title: Changes in Ultrafine Particle Concentrations near a Major Airport Following Reduced Transportation Activity during the COVID-19 Pandemic. Article Publication Date: 15-Aug-2022.
Environmental Science
It is “virtually certain” that future extreme events in Antarctica will be worse than the extraordinary changes already observed, according to a new scientific warning that stresses the case for immediate and drastic action to limit global heating. A new review draws together evidence on the vulnerability of Antarctic systems, highlighting recent extremes such as record low sea ice levels, the collapse of ice shelves, and surface temperatures up to 38.5C above average over East Antarctica in 2022 – the world's largest ever recorded heatwave. Records for Antarctic sea ice, which varies every year between a February minimum and a September maximum, “have been tumbling in recent years”, said study co-author Dr Caroline Holmes, a polar climate scientist at the British Antarctic Survey. “One clear metric of how things are changing is that the summer minimum has broken a new record three times in the past seven years,” she said at a press briefing. Sea ice extent in July 2022 hit a record low for that time of year, but was surpassed by a new record this July – one that was “three times further away from the average than what we've seen previously”, Holmes said. Antarctic land ice – which contributes to sea level rise when it melts – has also declined since the 1990s, said Assoc Prof Anna Hogg of the University of Leeds, a study co-author. Between 1992 and 2020, the Antarctic and Greenland ice sheets contributed a 2.1cm rise to the global mean sea level. The rate of ice sheet loss from Antarctica “matches the IPCC worst case” for predicted ice loss under high greenhouse gas emissions scenarios, Hogg said. “The observations show we're tracking [along] the most extreme prediction of what might happen.” This is despite global emissions currently tracking closer to an intermediate emissions pathway. Ice shelves, which fringe three-quarters of the Antarctic coastline, have also retreated in recent decades. Large sections of the Larsen-A, Larsen-B, and Wilkins ice shelves “collapsed catastrophically” in 1995, 2002 and 2008 respectively, the study noted. Ten Antarctic ice shelves have also experienced major ice calving events since 2009. “We should be deeply concerned about the environment of Antarctica in the years that are coming under continued fossil fuel burning,” said the study's lead author, Prof Martin Siegert of the University of Exeter. “This is the most extreme natural laboratory on the planet. Our ability to measure and observe is very difficult … but we really must try harder to understand the processes that are causing these extreme events and their interconnectivity.” The study noted that, given that additional global heating of at least 0.4C is now unavoidable even if heating is limited to 1.5C under the Paris agreement, “it is virtually certain that future Antarctic extreme events will be more pronounced than those observed to date”. Prof Tim Naish, director of the Antarctic Research Centre at the Victoria University of Wellington, who was not involved in the research, said the increasing occurrence of extreme Antarctic events showed that “the policy response so far has been inadequate to address the climate crisis”. “Antarctica is experiencing more and more extreme events,” he said in a statement. “In some cases we are getting dangerously close to tipping points, which once crossed will lead to irreversible change with unstoppable consequences for future generations.” The research was published in the journal Frontiers in Environmental Science.
Environmental Science
A flood-damaged supermarket in Lexington, Kentucky, August 2, 2022. (Associated Press) This story was originally published by Grist and is reproduced here as part of the Climate Desk collaboration. Appalachian states like Kentucky have a long, turbulent history with coal and mountaintop removal—an extractive mining process that uses explosives to clear forests and scrape soil in order to access underlying coal seams. For years, researchers have warned that land warped by mountaintop removal may be more prone to flooding due to the resulting lack of vegetation to prevent increased runoff. Without trees to buffer the rain and soil to soak it up, water pools together and heads for the least resistant path—downhill. In 2019, a pair of Duke University scientists conducted an analysis of flood-prone communities throughout the region for Inside Climate News that identified the most “mining damaged areas.” These included many of the same Eastern Kentucky communities that saw river levels rise by 25 feet in just 24 hours this past week. “The findings suggest that long after the coal mining stops, its legacy of mining could continue to exact a price on residents who live downstream from the hundreds of mountains that have been leveled in Appalachia to produce electricity,” wrote Inside Climate News' James Bruggers. Now those findings feel tragically prescient. From July 25 to 30, Eastern Kentucky saw a mixture of flash floods and thunderstorms bringing upwards of four inches of rain per hour, swelling local rivers to historic levels. To date, the flooding has claimed at least 37 lives. Nicolas Zégre, director of West Virginia University's Mountain Hydrology Laboratory, studies the hydrological impacts of mountaintop removal mining and how water moves through the environment. While it's too early to know how much the area's history of mining contributed to this year's flooding, he said he thinks of Appalachia as “climate zero,” a region built on the coal industry, which contributed to rising global temperatures and increased carbon in the atmosphere. “Whether it was the 2016 flood in West Virginia or the recent floods in Kentucky, there's more intense rainfall due to warmer temperatures,” Zégre said, “and then that rainfall was falling on landscapes that have had their forests removed.” To some regional scientists, strip mining isn't the end-all-be-all link to increased flooding. A 2017 Environmental Science and Technology study examined whether mountaintop removal mining might actually help store precipitation. When a mountaintop is rocked by explosions, leftover material is packed into areas known as valley fills. According to the authors, “mined watersheds with valley fills appear to store precipitation for considerable periods of time.” The study did note that material found inside valley fills often contains toxic chemicals and heavy metals created by the mining process. These compounds are subsequently washed into streams during heavy rain, a process known as alkaline mine drainage. According to a 2012 study, also from Environmental Science and Technology, alkaline mine drainage has polluted as much as 22 percent of all streams in central Appalachia. Despite the fact that Kentucky and greater Appalachia have fueled much of the world's energy supply for decades, many communities in the region struggle with poverty and aging infrastructure. 
Those conditions are likely to make it harder for many towns to recover from severe flooding—a particular concern given that climate change is expected to cause a mix of droughts and wetter summers throughout the Ohio River Basin. Still, Kentucky Democratic Governor Andy Beshear said he was unsure why the region continues to be flooded. “I wish I could tell you why we keep getting hit here in Kentucky,” Beshear said in a statement announcing a flood relief program this past week. “I wish I could tell you why areas—where people may not have that much—continue to get hit and lose everything.” The link between flood risk and mining damage means coal country flooding is more than an Appalachian issue. But Zégre told Grist that acknowledging the extraction process, and properly funding research to study the impacts, often gets pushed to the wayside, just like the region. “Because [mountaintop removal mining] happens in backwoods Appalachia, nobody really thinks about it happening,” Zégre said. “They're just people in DC who are just grateful to be able to turn on their light and have inexpensive electricity to charge their cars.”
Environmental Science
Researchers discover that the ice cap is teeming with microorganisms There are no plants, and only very few animals: people rarely come here. The large glaciers in Greenland have long been perceived as ice deserts. Gigantic ice sheets where conditions for life are extremely harsh. But now, it seems, we have been wrong. There is much more life on the glaciers than we thought. Headed by Professor Alexandre Anesio, a group of researchers from the Department of Environmental Science at Aarhus University have discovered that the glaciers are teeming with life. Microbes that have adapted to life on the ice. And not just one or two species. Several thousand different species. "A small puddle of melt-water on a glacier can easily have 4,000 different species living in it. They are bacteria, algae, viruses and microscopic fungi. It's a whole ecosystem that we never knew existed until recently," says Alexandre Anesio. What do the microbes live on? Over the past 50 years, researchers have repeatedly been surprised by the hardiness of life. Life has been found several kilometers underground—where there is neither sun nor oxygen. Billions of microorganisms "eat" minerals in the bedrock and so can survive. Researchers have shown that life can even survive in space. In 2007, European researchers placed a colony of more than 3,000 microscopic water bears (tardigrades) outside a satellite and sent them into orbit around the Earth. The orbit lasted 10 days, after which the satellite returned to Earth. No less than 68% of them survived the vacuum of space and the lethal radiation. Therefore, it might not come as a surprise that life also thrives on the glaciers. After all, there is sun, oxygen and water. Nevertheless, until recently, researchers believed that the ice had too little nourishment to sustain life. But they were wrong. There is nourishment. Just in incredibly small quantities, explains Alexandre Anesio. Black algae One of the microorganisms on the ice that the researchers spent the most time investigating is a type of small black algae. The algae grow on top of the ice and tinge it black. There is a reason why the black algae are so interesting to the researchers. "When the ice darkens, it becomes more difficult to reflect sunlight. Instead, heat from the sun's rays is absorbed by the ice, which starts to melt. The more the ice melts, the warmer the temperature on Earth. The algae therefore play an important role in global warming," says Alexandre Anesio. In recent years, larger and larger areas of the ice have become stained by the algae, making the ice melt even faster. Alexandre Anesio has calculated that the algae are increasing the ice melt by about 20%. The algae on the ice also existed before people kicked off global warming through industrialization. However, climate change means spring arrives ever earlier to the Arctic and as a result the algae have a longer season to grow and spread. "The algae spread a little more every year. When I travel to Greenland, I now see vast areas where the ice is completely dark because of the algae," he says. Looking for an algaecide Alexandre Anesio and his colleagues are spending a lot of time on the black algae because they are trying to find out whether the algae growth can be slowed down in some way or another. There is a balance in most ecosystems—a kind of equilibrium—because the various organisms keep each other in check. So Alexandre Anesio wants to learn more about the relationship between the different microbes. 
"The various microorganisms on the ice affect each other. Some leave nutrition that others live off. Small viral particles attack and consume bacteria. We believe that some of the fungal spores could eat the black algae. This is what we're looking for," he says. However, he stresses that, even if they do find a way to curb algae growth, this will not solve climate change. Although it could slow it down. Algae growth is a consequence of our releasing too many greenhouse gases into the atmosphere. And this is where the problem must be solved. We need to focus on slowing down our emissions. The same pigment as in black tea Algae is found virtually everywhere. In the sea, in lakes, on trees and rocks, and even as small spores in the air. Most algae are greenish. Like plants and trees, they are green because of chlorophyll. A molecule that enables them to photosynthesise. But it's different for the black algae. "Because the algae live on the ice, they're bombarded with sunlight and radiation. To protect themselves, they produce a lot of black pigment. It's actually the same pigment as in black tea. The pigment forms a protective layer outside the algae and protects the chlorophyll molecules against the dangerous radiation," says Alexandre Anesio.When the pigment absorbs the sun's rays, it generates heat. This heat makes the ice around the algae melt. And this actually benefits the algae. They need both water and micronutrients from the ice to live. And they can only use the water when it is liquid. NASA also has an eye on his research Alexandre Anesio's research into life on the ice is important for a better understanding of climate change. However, NASA is also following his research results closely. The results may be crucial in the hunt for life in space. "NASA has approached us several times because we're working with life that lives in one of the most inhospitable places on Earth. If life thrives on and under the ice, there's a probability that we'll also find life in the ice on Mars or Jupiter's and Saturn's ice moons, for example," he says. Before NASA sent their Perseverance rover to Mars, they even invited Alexandre Anesio to a meeting. "They were afraid that the rover would take with it microbes from Earth. Microbes that may be able to survive on Mars and pollute the samples they were going to take from Mars. So, they wanted to know what conditions life can survive in. What are the boundaries for life?" NASA is so interested in the research of life in the ice because we haven't found liquid water on any other planets in the solar system. Not yet, anyway. But we've found plenty of ice. However, there is evidence to suggest that there are liquid oceans beneath the frozen surface of Saturn's moon, Enceladus and Jupiter's moon, Europa—and one of the necessities of life, as we know it, is liquid water. Therefore, NASA and other space agencies are very interested in learning more about the type of life that can live on and under the ice. Because organisms that resemble those in Greenland are probably those they'll be looking for on the ice moons. "Like us, they're very interested in how the microorganisms on the ice function. How much nutrition do they need? What type of nutrition? And how does the ecosystem they are part of work? These are questions that we hope to be able to answer in the future," says Alexandre Anesio. Related research is published in the journal Geobiology. More information: James A. 
Bradley et al, Active and dormant microorganisms on glacier surfaces, Geobiology (2022). DOI: 10.1111/gbi.12535 Provided by Aarhus University
Environmental Science
Methane leaks alone from Turkmenistan’s two main fossil fuel fields caused more global heating in 2022 than the entire carbon emissions of the UK, satellite data has revealed. Emissions of the potent greenhouse gas from the oil- and gas-rich country are “mind-boggling”, and an “infuriating” problem that should be easy to fix, experts have told the Guardian. The data produced by Kayrros for the Guardian found that the western fossil fuel field in Turkmenistan, on the Caspian coast, leaked 2.6m tonnes of methane in 2022. The eastern field emitted 1.8m tonnes. Together, the two fields released emissions equivalent to 366m tonnes of CO2, more than the UK’s annual emissions, which are the 17th-biggest in the world. Methane emissions have surged alarmingly since 2007 and this acceleration may be the biggest threat to keeping below 1.5C of global heating, according to scientists. It also seriously risks triggering catastrophic climate tipping points, researchers say. The Guardian recently revealed that Turkmenistan was the worst in the world for methane “super emitting” leaks. Separate research suggests a switch from the flaring of methane to venting may be behind some of these vast outpourings. Flaring is used to burn unwanted gas, putting CO2 into the atmosphere, but is easy to detect and has been increasingly frowned upon in recent years. Venting simply releases the invisible methane into the air unburned, which, until recent developments in satellite technology, had been hard to detect. Methane traps 80 times more heat than CO2 over 20 years, making venting far worse for the climate. Experts told the Guardian that the Cop28 UN climate summit being hosted in the United Arab Emirates in December was an opportunity to drive methane-cutting action in Turkmenistan. The two petrostates have close ties and there is pressure on the UAE to dispel doubts that a big oil and gas producer can deliver strong outcomes from the summit. Tackling leaks from fossil fuel sites is the fastest and cheapest way to slash methane emissions, and therefore global heating. Action to stem leaks often pays for itself, as the gas captured can be sold. But the maintenance of infrastructure in Turkmenistan is very poor, according to experts. ‘Out of control’ “Methane is responsible for almost half of short-term [climate] warming and has absolutely not been managed up to now – it was completely out of control,” said Antoine Rostand, the president of Kayrros. “We know where the super emitters are and who is doing it,” he said. “We just need the policymakers and investors to do their job, which is to crack down on methane emissions. There is no comparable action in terms of [reducing] short-term climate impacts.” Super-emissions from oil and gas installations were readily ended, Rostand said, by fixing valves or pipes or, at the very least, relighting flares: “It’s very simple to do, it has no cost for the citizen, and for the producers, the cost is completely marginal.” The satellite data used by Kayrros to detect methane has been collected since the start of 2019 and Turkmenistan’s overall emissions show a level trend since then. Satellites have also detected 840 super-emitting events, ie leaks from single wells, tanks or pipes at a rate of a few tonnes an hour or more, the most from any nation. Most of the facilities leaking the methane were owned by Turkmenoil, the national oil company, Kayrros said. 
Further undetected methane emissions will be coming from Turkmenistan's offshore oil and gas installations in the Caspian Sea, but the ability of satellites to measure methane leaks over water is still being developed. Kayrros also did some high-resolution monitoring of the North Bugdayly field in western Turkmenistan. The number of super-emitter events there doubled to almost 60 between 2021 and 2022, with one recent super-emitter pouring out methane for almost six weeks. Turkmenistan is China's biggest supplier of gas and is planning to double its exports to the country. Until 2018, Turkmen citizens had received free gas and electricity. However, the country is also very vulnerable to the impacts of the climate crisis, with the likelihood of severe drought projected to increase "very significantly" over the 21st century and yields of major crops expected to fall. 'Huge opportunity' Speaking freely about the repressive and authoritarian state is difficult but sources told the Guardian it was a "very depressing" situation, with Turkmenistan probably the worst country in the world in dealing with methane leaks. They said preventing or fixing the leaks represented a "huge opportunity" but that the lack of action was "infuriating". Turkmenistan could stop the leaks from ageing Soviet-era equipment and practices, they said, and the country could be the "world's biggest methane reducer". But the huge gas resources on tap meant "they never cared if it leaked". It was also not a priority for the president, Serdar Berdimuhamedov, they said, without whose approval little happens. This is despite Berdimuhamedov, then deputy chair of the cabinet of ministers, telling the UN climate summit Cop26 in Glasgow in 2021 that Turkmenistan was reducing greenhouse gas emissions "by introducing modern technologies in all spheres of the state's economy", with "special attention" to the reduction of methane emissions. Berdimuhamedov also welcomed the Global Methane Pledge (GMP) to cut emissions, but Turkmenistan has failed to join the 150 nations that have now signed up. Neither are Turkmenoil and Turkmengaz, the state companies, members of a voluntary UN initiative to cut leaks, the Oil and Gas Methane Partnership 2.0 (OGMP2), which covers about 40% of global oil and gas production. "The president hasn't followed up," said a source. Largest hotspot Recent scientific research, published in the journal Environmental Science and Technology, found that the west coast of Turkmenistan was "one of the largest methane hotspots in the world". Detailed analysis of satellite data revealed 29 different super-emitter events between 2017 and 2020, although older satellite data showed that "this type of emission has been occurring for decades". The researchers said 24 of the 29 super-emitter events came from flare stacks that had been extinguished and were then venting methane directly into the air, and that all were managed by state companies. The other five were linked to pipeline leaks. The scientists said that "the more frequent emitters would conflict with Turkmen law, which bans continuous gas flaring and venting". "Flaring is very easy to identify from the flame itself," said Itziar Irakulis-Loitxate, of the Universitat Politècnica de València in Spain, who led the study. "But venting was something that you could not identify easily until two years ago." The switch to venting, a far worse environmental practice, was "mind-boggling", according to another expert. 
The scientists said the prevalence of venting “points to the risks of penalising flaring without effective measures to control venting”. The World Bank founded a global initiative to end flaring in 2015. ‘Forcing mechanism’ The UN climate summit in December represented an opportunity for change, sources said, as it is being hosted by the UAE, which has strong links with Turkmenistan and expertise in oil and gas production. The most recent visit by Sheikh Mansour bin Zayed, the UAE’s deputy prime minister, to Turkmenistan was in February. He met Berdimuhamedov and discussed with him bilateral cooperation “in vital sectors such as oil and gas”. The UAE is a member of the Global Methane Pledge and the state oil company, Adnoc, is a member of the OGMP2. Adnoc recently announced a partnership to develop a “supergiant gas field” called Galkynysh and other energy projects in Turkmenistan. However, Adnoc did not respond to a request for information on how the company would help limit methane emissions in the country. The Guardian understands diplomatic efforts are being made to urge Turkmenistan to cut its methane emissions. “We are really hoping Cop28 is a forcing mechanism,” a source said. The Guardian contacted Turkmenoil, Turkmengaz, the Turkmenistan ministry of foreign affairs and the Turkmen embassy in the UK for comment, but none responded.
Environmental Science
Warming ends when carbon pollution stops - Department of Earth and Environmental Science, University of Pennsylvania, Philadelphia, PA, United States An Editorial on the Frontiers in Science Lead Article The Zero Emissions Commitment and climate stabilization Key points - Improved Earth system models now consider the complex interplay between the oceans and terrestrial biosphere and their key role of actively drawing down carbon from the atmosphere, showing the direct and immediate impact of our efforts to reduce carbon emissions. - Long-term warming will largely be determined by carbon dioxide (CO2) emissions but aerosols and short-lived, human-generated greenhouse gases such as methane must also be considered. - Caveats to the Zero Emissions Commitment (ZEC) and climate stabilization involve the precise magnitude of the carbon budgets that remain for avoiding critical warming thresholds, as well as processes such as the warming of the deep ocean and its consequences, including rising sea levels. - At this point, the obstacles to climate action are neither physical nor technological but political, which means they can be overcome with the right sense of urgency. What is most urgently required is a rapid phaseout of human-induced activities that produce carbon pollution. A recent article in The Hill insisted that “scientists failed for decades to communicate” (1) the threat of climate change. The scientific paper on which the article was reporting did not really say that—it was instead providing a more nuanced discussion of sea level rise “tail risk”. But the irony here is that the opposite of what was asserted by the news article is arguably true. If anything, we scientists have failed to communicate the prospects for averting catastrophic warming. In the days when I was working on my PhD, in the early 1990s, we were taught that the warming of the planet would persist for decades even if we suddenly stopped burning fossil fuels and emitting carbon into the atmosphere. This is due to what is known as “thermal inertia”—the slow, sluggish response of the oceans. Climate models showed that surface warming would continue for 30 years or more, as the oceans slowly continue to warm, even after carbon pollution ceases. This so-called “committed warming” would seem to render our efforts to avert disaster somewhat futile. Even if we turned off the metaphorical carbon faucet, the water level of warming would continue to rise. Extending the metaphor, that water would soon spill from our kitchen sink onto the kitchen floor. With apologies to Greta Thunberg, rather than burning, our house would instead be flooding. But that picture is fundamentally incomplete—there is a “drain” too in the form of the ocean carbon cycle. That drain causes the water level, i.e., the planetary temperature, to stabilize. Through a somewhat fortuitous coincidence of nature, there are offsetting tendencies in ocean physics and ocean chemistry. The positive “thermal inertia” (the physics) is almost perfectly offset by a negative “carbon cycle inertia” (the chemistry). To be more specific, the rate at which the ocean surface tends to continue to warm up due to the carbon already emitted is nearly identical to the rate at which the oceans absorb and bury atmospheric carbon dioxide (CO2), lowering the atmospheric greenhouse effect and cooling the lower atmosphere and surface. The two effects essentially cancel each other out. 
And so, instead, we get an essentially flat temperature curve—the stable metaphorical water level—when human carbon emissions approach zero. In the old days, climate modelers would simply set the atmospheric CO2 constant to represent a scenario in which human carbon emissions cease. However, this amounted to an erroneous implicit assumption of zero carbon cycle inertia. In the newer, more realistic modeling framework, based on more comprehensive Earth system models, the oceans as well as the terrestrial biosphere are allowed to play an interactive role in the behavior of the system, which includes the key role of actively drawing down carbon from the atmosphere. While this new more realistic framework emerged more than a decade ago (2), only far more recently has it truly penetrated public climate discourse. Arguably, both scientists and journalists are at fault (3) for the continued notion that we are due decades of additional warming even after we cease fossil fuel burning and other activities generating carbon pollution. We now know this is not true. This is hardly a minor technical matter. It fundamentally changes our sense of agency in averting disaster. It means our efforts to reduce carbon emissions have a direct and immediate impact. It is the reason we can meaningfully define a “carbon budget”—there is a fixed amount of fossil fuel we can afford to burn and stay below critical planetary temperature levels such as 1.5°C or 2.0°C. We can estimate that budget and work toward policies that can keep us collectively within it, at least in principle. Old habits die hard, and some scientists have remained skeptical of this revised understanding. Indeed, I myself took several years to accept the paradigm-shifting implications of this finding. But this finding appears quite robust, having been affirmed by a solid body of work over the past decade. The current state of the science is adroitly summarized in a comprehensive new review “The Zero Emissions Commitment and climate stabilization” by a team of nearly two dozen experts on this research topic in the current Frontiers in Science Lead Article by Palazzo Corner et al. (4). Palazzo Corner et al. focus on the “Zero Emissions Commitment” or “ZEC”, which is defined as how much warming (if any) we can expect upon reaching zero carbon emissions. Technically, that is zero “net emissions”, as the possibility exists—at least theoretically—for artificial drawdown of atmospheric carbon, i.e., so-called “negative emissions”. Yet there is no evidence at present that such technology could be deployed at scale let alone in the rapid timeframe now necessary. To the extent that the ostensible promise of negative emissions technology may be little more than a techno-optimistic mirage, my preference is to omit the “net” prefix. What is truly required is a rapid phaseout of anthropogenic activities that produce carbon pollution. Palazzo Corner et al. show that, on average, the various models used in the recent “ZECMIP” intercomparison study indicate a ZEC close to zero, at least on a 50-year timescale (greater uncertainties apply for longer timescales due to uncertainties in the longer-term behavior of, e.g., the global carbon cycle and for larger amounts of cumulative carbon emissions). 
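A deliberately crude sketch can make the offsetting-inertias argument concrete. The toy model below is emphatically not one of the Earth system models discussed here: the climate sensitivity, response timescales and CO2 levels are illustrative assumptions, chosen only to contrast the old constant-CO2 experiment with one in which the ocean keeps drawing carbon down after emissions stop.

```python
import math

# Toy "kitchen sink" model, NOT an Earth system model. Climate sensitivity,
# timescales and CO2 levels below are illustrative assumptions only; the point
# is the qualitative contrast between the old constant-CO2 experiment and one
# in which the ocean keeps drawing carbon down after emissions reach zero.
S = 3.0             # assumed equilibrium warming per CO2 doubling (deg C)
TAU_THERMAL = 50.0  # assumed surface response timescale, the "thermal inertia" (years)
TAU_CARBON = 20.0   # assumed e-folding time for ocean CO2 drawdown, the "drain" (years)
CO2_PRE, CO2_NOW, CO2_FLOOR = 280.0, 420.0, 370.0  # ppm, illustrative

def warming_after(years, interactive_carbon):
    temp = 1.2  # assumed warming already realized when emissions stop (deg C)
    for yr in range(years):
        if interactive_carbon:   # oceans absorb CO2, lowering the greenhouse forcing
            co2 = CO2_FLOOR + (CO2_NOW - CO2_FLOOR) * math.exp(-yr / TAU_CARBON)
        else:                    # old-style assumption: CO2 simply held constant
            co2 = CO2_NOW
        t_eq = S * math.log(co2 / CO2_PRE) / math.log(2.0)  # equilibrium warming
        temp += (t_eq - temp) / TAU_THERMAL                 # sluggish ocean response
    return temp

print(f"CO2 held constant, year 50:        {warming_after(50, False):.2f} C")
print(f"Interactive carbon cycle, year 50: {warming_after(50, True):.2f} C")
# The first run keeps warming well past today's level; the second stays
# roughly flat, which is the Zero Emissions Commitment result in miniature.
```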
For at least 1000 gigatons (trillion tons) of emitted carbon (“GtC”) (thus far, for comparison, we have burned about 680 GtC, so it is plausible we can remain below that limit), the average ZEC across models is not only close to zero but very slightly negative (just under −0.1°C). There is nonetheless a substantial range in estimated ZEC among the various individual models: anywhere from a cooling of −0.3°C to a warming of +0.3°C. Palazzo Corner et al. note, accordingly, that there is a roughly 66% likelihood that additional warming will be less than 0.3°C once zero emissions are reached. Conversely, that implies a 33% chance the warming could be more than this. Given we are currently at roughly 1.2°C warming relative to the pre-industrial era, that could mean reaching the oft-cited threshold for catastrophic warming of 1.5°C even if we were somehow to bring carbon emissions to zero today. While there is a good chance we would avert 1.5°C warming in that scenario, to paraphrase Clint Eastwood, we must ask ourselves one question: do we feel lucky? Now, there are certainly some additional caveats to this story, and they are explored in some detail by Palazzo Corner et al. First of all, ZECMIP features only a subset of available climate system models, and the inferences drawn are quite limited beyond the 100-year time horizon, where the behavior of long-response components of the climate system—the ice sheets, the deep ocean, etc.—remains a bit vague. Unsurprisingly, there is a large degree of variation in model predictions beyond that time horizon, with some models indicating that additional warming could eventually kick in. Given the complexities of the ocean carbon cycle in particular and ocean mixing processes more generally (which are relevant for both carbon and heat burial), the seemingly coincidental balance between positive climate inertia and negative carbon cycle inertia is tenuous and subject to possible revision as scientific understanding continues to advance. Another wild card is that other factors besides CO2 come into play when we examine future global temperature scenarios. Among these are sulfate aerosols (which exert a cooling effect) and black carbonaceous aerosols (which exert a warming effect) from coal burning and other short-lived greenhouse gases generated from human activity, such as methane, nitrous oxide, and lower atmospheric ozone pollution. In most scenarios, the effects of these competing short-term radiative constituents cancel each other out, constituting a near zero sum game as we phase out fossil fuel burning and the other underlying anthropogenic activities that generate them. But this depends on the details of policies impacting those activities, including, in the case of methane, agriculture, hydroelectric dam construction, and natural gas extraction. The cancellation is also dependent on us getting the radiative physics right. In the case of sulfate aerosols—particularly so-called “indirect effects” such as cloud nucleation—there is still substantial uncertainty. So, while long-term warming will largely be determined by what happens to CO2, these other contributors will matter too if the activities producing them continue. Yet another caveat involves the precise magnitude of the carbon budgets that remain for avoiding critical warming thresholds. The amount of CO2 that can still be emitted for a 50% chance of staying below 1.5°C of warming is conventionally estimated to be roughly 100 GtC. 
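The headroom arithmetic implied by these numbers is simple enough to spell out. In the sketch below, the current warming level, the ZEC range and the remaining budget come from the text; the present-day emission rate of roughly 10 GtC per year is an outside assumption used only to illustrate the timescale involved.

```python
# Simple headroom arithmetic using the figures quoted above. The remaining
# budget (~100 GtC for a 50% chance of staying under 1.5 C) is from the text;
# the present-day emission rate of ~10 GtC per year is an assumed round number.
warming_now = 1.2   # deg C above pre-industrial
zec_upper = 0.3     # deg C, upper end of the quoted ZEC model range

print(f"Warming if ZEC lands at the top of the range: {warming_now + zec_upper:.1f} C")

remaining_budget_gtc = 100.0   # GtC for a coin-flip chance of holding 1.5 C
assumed_emission_rate = 10.0   # GtC per year (assumption, roughly today's rate)
print(f"Years to exhaust that budget: about {remaining_budget_gtc / assumed_emission_rate:.0f}")
```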
At the current rate of emissions, we would run through that budget in less than a decade. Yet that estimate may be overly liberal as it is dependent on other assumptions. Among these is how we define the pre-industrial baseline relative to which net warming is measured. It has typically been taken to be the late 19th century (the mid-point of the first 50 years of available widespread surface temperature measurements). There is evidence, however, that human-caused warming began before that, perhaps as early as the mid-18th century. Taking into account the earlier human-caused warming potentially reduces the budget for averting 1.5°C warming by as much as 40% (5). But the most significant caveat of all is that surface temperatures are not the only things that matter during the climate crisis. While some impacts, like extreme weather events, appear to be tied to surface warming, others, like rising sea levels and ice sheet destabilization, depend on the warming of the deep ocean. That would continue for decades and centuries to come (see Figure 1 from Palazzo Corner et al., 2023). Our earlier sink analogy, while useful for understanding the competing impacts on the warming level, is an imperfect one: even if surface temperature levels are constant, actual (rather than metaphorical) water levels will continue to rise for some time. [Figure 1, reproduced from Figure 1 in Palazzo Corner S, et al. (2023), licensed under CC BY-SA 4.0: a stylized schematic of how atmospheric carbon dioxide (CO2) concentrations, ocean heat uptake, and global surface temperature can evolve under net zero CO2 emissions; process timescales are illustrative. Comment (MM): the schematic shows the average response of climate models to a scenario of sudden, instantaneous cessation of global carbon emissions (black curve), with trajectories for CO2 concentrations (light purple), ocean heat uptake (cyan), surface temperature (orange) and sea level rise (dark blue).] Moreover, as the ocean continues to absorb atmospheric carbon, ocean acidification—which impacts coral reefs and other calcareous ocean biota, such as mollusks and crustaceans—would continue to worsen, threatening food webs in the ocean. So, the penalty of procrastination remains, underscoring the importance of decarbonizing our societal machinery as rapidly as possible if we are to remain within our adaptive capacity as a civilization. Nevertheless, this new study offers hope. At a time when climate advocates have become disillusioned by the lack of progress—and this is understandable given the "commitment gap" that still remains between what has been promised and what is required—this latest study reminds us that the obstacles to climate action are neither physical nor technological. At this point, they remain political. History teaches us that political obstacles can be overcome, so there remains both urgency and agency when it comes to the ongoing climate battle. Statements Author contributions MM: Visualization, Writing – original draft, Writing – review & editing. Conflict of interest The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Publisher's note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. 
Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. References 1. Elbein S. Catch-22: Scientific communication failures linked to faster-rising seas (2023). The Hill. Available at: https://thehill.com/policy/energy-environment/4057045-catch-22-scientific-communication-failures-linked-to-faster-rising-seas/ (Accessed July 10, 2023). 2. Meinshausen M, Meinshausen N, Hare W, Raper SC, Frieler K, Knutti R, et al. Greenhouse-gas emission targets for limiting global warming to 2°C. Nature (2009) 458(7242):1158–62. doi: 10.1038/nature08017 3. Hertsgaard M, Huq S and Mann ME. How a little-discussed revision of climate science could help avert doom (2022). Washington Post. Available at: https://www.washingtonpost.com/outlook/2022/02/23/warming-timeline-carbon-budget-climate-science/ (Accessed July 10, 2023). Keywords: carbon budget, thermal inertia, climate change, CO2, net zero, Zero Emissions Commitment, committed warming Citation: Mann ME. Warming ends when carbon pollution stops. Front Sci (2023) 1:1256273. doi: 10.3389/fsci.2023.1256273 Received: 10 July 2023; Accepted: 03 October 2023; Published: 14 November 2023. Edited by:Frontiers in Science, Frontiers Media SA, Switzerland Reviewed by:Shawn Marshall, University of Calgary, Canada Copyright © 2023 Mann. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. *Correspondence: Michael E. Mann, mmann00@sas.upenn.edu
Environmental Science
From tiny plankton to massive whales, microplastics have been found throughout the ocean food chain. One major source of this pollution is fibers shed while laundering synthetic fabrics. Although many studies show microfibers are released during machine washing, it's been less clear how hand washing contributes. Now, researchers reporting in ACS Environmental Science & Technology Water show that hand washing can drastically cut the amount of fibers shed compared with using a machine. When clothing made from plastic fibers, such as polyester and nylon, is laundered, the fabric sheds microscopic fibers that eventually end up in wastewater and the environment. Though researchers have investigated the amount and types of microplastic fibers shed while laundering clothing, most studies have focused on washing machines. In many countries, however, it is still common to manually launder clothing. A team had previously reported on the effects of washing fabric by hand, but that study was not comprehensive. So, Wang, Zhao, Xing and colleagues wanted to systematically investigate microplastic fiber release from synthetic textiles under different methods of hand washing, in contrast to machine washing. The team cleaned two types of fabric swatches, made from 100% polyester and a 95% polyester-5% spandex blend, with hand washing methods and a washing machine. The researchers found that manual methods released far fewer fibers. For example, the 100% polyester fabric shed an average of 1,853 microplastic pieces during hand washing compared with an average of 23,723 pieces from the same fabric that was machine laundered. By weight, machine laundering released over five times more microplastics than the traditional method. The fibers released from hand washing tended to be longer. Adding detergent, pre-soaking the fabrics and using a washboard increased the number of released fibers with manual methods, but still not to the same extent as using a machine. In contrast, they found that temperature, detergent type, wash time and the amount of water used had no meaningful effects on the amount of microplastics shed while hand washing. The researchers say that these results will help clarify the sources of microplastic pollution in the environment and can provide guidance for "greener" laundering methods. The authors acknowledge funding from the Zhejiang Provincial Natural Science Foundation of China, the National Natural Science Foundation of China and the Scientific Research Foundation of Hangzhou Dianzi University.
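A quick calculation, sketched below, shows that the count-based and weight-based comparisons reported by the team are mutually consistent; the per-fiber mass inference at the end is illustrative rather than a figure from the study.

```python
# Consistency check on the reported numbers (counts from the study; the
# per-fiber mass inference below is illustrative, not a study result).
hand_wash_fibers = 1_853      # avg microplastic pieces, 100% polyester, hand washed
machine_wash_fibers = 23_723  # avg pieces, same fabric, machine laundered

count_ratio = machine_wash_fibers / hand_wash_fibers
print(f"Machine washing shed about {count_ratio:.1f}x more fibers by count")  # ~12.8x

# The study reports only ~5x more microplastic by weight for machine washing,
# which fits the observation that hand-washed fabrics shed longer fibers:
# fewer pieces, but each one heavier on average.
reported_mass_ratio = 5
print(f"Implied average mass per fiber, hand vs machine: ~{count_ratio / reported_mass_ratio:.1f}x")
```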
Environmental Science
The Brazilian Legal Amazon (BLA) has the largest tropical rainforest area, the highest biodiversity and the largest amount of aboveground biomass in the world1,2,3. Since 2000, the Indigenous territories (ITs) and the protected areas (PAs) in the BLA have increased substantially and by 2013 they accounted for 43% of the total land area and covered about half of the total forest area in the region1. ITs and PAs have important roles in forest and biodiversity conservation and climate mitigation in the BLA4,5. Many studies have investigated the effects of the ITs/PAs in reducing deforestation and strengthening forest conservation in the BLA1,6,7,8,9,10, but their results differ substantially as different approaches and datasets were used. A comparison of deforestation rates between the PAs and the non-protected areas (non-PAs) showed that the deforestation rates in the PAs were 1.6–2.2 times6 or ten times1,7,8 lower than those in the non-PAs, respectively. PAs are usually located in remote areas with small deforestation pressure (Extended Data Fig. 1), whereas the non-PAs often have high deforestation pressure. The strict-protected PAs, located in more remote areas, have more forest cover and less deforestation pressure in comparison to sustainable-use PAs and ITs (Extended Data Fig. 2). The comparison of deforestation rate between PAs and non-PAs may not fully show the effects of the PAs in reducing deforestation. Spatial matching methods were applied to assess the effects of PAs on deforestation in the BLA9,10; however, due to extensive deforestation and forest degradation11, it is challenging to find locations in the non-PAs with conditions similar to those in each PA to serve as the controls for the spatial matching methods. Various studies have assessed the effects of ITs/PAs using the official Brazilian deforestation dataset (PRODES)6,7,8, which reports deforestation of primary forests based on the reference forest map in the 1980s and does not include secondary forests that are important for the carbon cycle and biodiversity conservation12. The conflicts between forest conservation and socio-economic development in the BLA persist but vary over the years13,14,15,16. Forest conservation in ITs/PAs has encountered increasing threats from loosened environmental laws and regulations, changing governmental policies and massive economic development17,18, especially after 2012 (ref. 19). About 100 × 10⁶ ha, including at least 20% of the area in strict protection PAs and ITs, have applications pending for mineral prospecting or mining operations20,21. The satellite data show that illegal mining in the BLA substantially increased from 2000 to 2012 (an average area of 2,500 ha yr⁻¹) and hit a record high area in 2020 (10,000 ha yr⁻¹) amid widespread protests from Indigenous people22. Primary forest loss reached a decade high in 2020 (ref. 23). Between March and September 2020, Brazil passed 27 legislative acts that weakened environmental protection24. Fines for violation of environmental and conservation laws dropped by 72% from March to August 2020, despite an increase in deforestation24. Besides, COVID-19 started to spread widely in the Amazon in early 2020, causing high mortality rates among Indigenous groups25, creating favourable conditions for encroachment of Indigenous lands by illegal loggers and miners. PAs in the BLA have two types of governance and institution (national versus state PAs) and two types of management objective (strict protection versus sustainable use; Extended Data Figs. 1 and 2). 
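The spatial matching idea referred to above can be illustrated with a generic sketch: pair each protected location with the most similar unprotected location on observable covariates, then compare deforestation across the pairs. The code below is a toy illustration with entirely synthetic data and made-up covariates, not the implementation used in the cited studies.

```python
import numpy as np

# Toy illustration of covariate matching for a protected-area effect on
# deforestation. All data are synthetic and the covariates (distance to road,
# slope) are hypothetical stand-ins; the cited studies use far richer data.
rng = np.random.default_rng(0)
n_pa, n_ctrl = 200, 2000

pa_x = rng.normal([60.0, 8.0], [20.0, 3.0], size=(n_pa, 2))      # PAs: remote, rugged
ctrl_x = rng.normal([30.0, 5.0], [20.0, 3.0], size=(n_ctrl, 2))  # non-PAs: accessible

# Synthetic deforestation rates (% per year): pressure falls with distance to
# road, and protection itself is given a built-in effect of -0.5 points.
pa_defor = np.clip(2.0 - 0.02 * pa_x[:, 0] - 0.5 + rng.normal(0, 0.1, n_pa), 0, None)
ctrl_defor = np.clip(2.0 - 0.02 * ctrl_x[:, 0] + rng.normal(0, 0.1, n_ctrl), 0, None)

# Naive comparison (all PAs vs all non-PAs) mixes protection with remoteness.
print("naive difference:  ", round(pa_defor.mean() - ctrl_defor.mean(), 2))

# Matched comparison: nearest non-PA location in standardized covariate space.
scale = ctrl_x.std(axis=0)
dists = np.linalg.norm((pa_x[:, None, :] - ctrl_x[None, :, :]) / scale, axis=2)
nearest = dists.argmin(axis=1)
print("matched difference:", round((pa_defor - ctrl_defor[nearest]).mean(), 2))
# The naive gap overstates the protection effect; matching recovers something
# closer to the built-in -0.5, which is why the cited studies prefer it.
```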
To what degree were deforestation dynamics in the BLA and ITs/PAs under different governance and management affected by the COVID-19 pandemic and changing policies in recent years? Here, we investigate and report on (1) interannual changes in forest areas and deforestation in the BLA, ITs, PAs and non-PAs (that is, non-designated areas) and (2) the effects of ITs, national PAs (nPAs) and state PAs (sPAs) in reducing deforestation from 2000 to 2021 under the changing environmental laws and governance, natural disturbances and COVID-19. We generated annual evergreen forest maps by using daily images acquired by moderate resolution imaging spectroradiometer (MODIS) onboard the Terra satellite from 2000 to 2021 and the forest mapping algorithm1,2. Results. Increased areas of ITs/PAs for forest conservation. The number and area of ITs/PAs in the BLA increased from 1980 to 2018 with noticeable characteristics (Fig. 1a). The ITs area rose slowly from 0.2 × 10⁶ ha in 1980 to 4 × 10⁶ ha in 1988 but started to increase rapidly after 1988 and reached 74 × 10⁶ ha in 2000 and 115 × 10⁶ ha in 2016. This large and rapid increase in numbers and areas of ITs was driven by the 1988 Constitution's requirement that the government demarcate all Indigenous lands within 5 years. Although the Constitution's requirement has yet to be fulfilled, it stimulated substantial expansion of ITs, including 40 × 10⁶ ha demarcated with financial support from the G7 Pilot Program to Conserve the Brazilian Rainforest. In 2000, Brazil's National Protected Areas System was officially established to expand PAs and better manage the PAs for forest conservation. The nPAs area increased from 9 × 10⁶ ha in 1980 to 27 × 10⁶ ha in 2000, 62 × 10⁶ ha in 2008 and 66 × 10⁶ ha in 2018. The sPAs area increased from less than 1 × 10⁶ ha in 1980–1989 to 25 × 10⁶ ha in 2000, 56 × 10⁶ ha in 2007 and 60 × 10⁶ ha in 2018. When lumped together, the total area of ITs/PAs substantially expanded from 10 × 10⁶ ha in 1980 to 126 × 10⁶ ha in 2000 and 241 × 10⁶ ha in 2018. The numbers of ITs, nPAs and sPAs also had similar changes over time and substantially increased from 2, 11 and 2 in 1980 to 227, 70 and 95 in 2000 and 387, 146 and 191 in 2018, respectively. Fig. 1: Cumulative areas and numbers of ITs and PAs and cumulative forest areas in ITs/PAs in the BLA. a, Cumulative areas and numbers of ITs, nPAs and sPAs from 1980 to 2018. ITs data in 2017 and 2018 are not available. b, Cumulative forest areas in ITs, nPAs and sPAs from 2000 to 2018. As most ITs/PAs are covered by forests, the large increase of ITs/PAs strengthens forest conservation substantially in the BLA. Figure 1b shows the interannual change in forest area in ITs, nPAs and sPAs during 2000–2018. Forest area in ITs increased from 67 × 10⁶ ha in 2000 to 105 × 10⁶ ha in 2016, an increase of 55%. Forest area in nPAs increased from 25 × 10⁶ ha in 2000 to 62 × 10⁶ ha in 2018, an increase of 146%. Forest area in sPAs increased from 18 × 10⁶ ha in 2000 to 50 × 10⁶ ha in 2018, an increase of 169%. We overlaid the 2000 annual forest map (a total of 394 × 10⁶ ha of forest) with the 2018 boundary maps of ITs/PAs and these ITs/PAs in 2018 covered 206 × 10⁶ ha forest, accounting for 52% of the total forest area in 2000, which clearly shows the importance of the ITs/PAs for forest conservation. Interannual change of forest area during 2000–2021. We used our annual forest maps to quantify the interannual changes in forest area during 2000–2021 (Fig. 2a). 
The total forest area in the BLA decreased substantially from 394 × 10⁶ ha in 2000 to 366 × 10⁶ ha in 2021, a loss of 28 × 10⁶ ha (~7% of the forest area in 2000 or an area loss larger than Brazil's state of Rondônia). The 2018 ITs/PAs boundary maps were overlaid on the annual forest maps to calculate the interannual change of forest area in ITs/PAs during 2000–2021. Total forest area in ITs, nPAs and sPAs decreased from 105.7 × 10⁶ ha, 62.6 × 10⁶ ha and 50.5 × 10⁶ ha in 2000 to 104.9 × 10⁶ ha, 62.2 × 10⁶ ha and 50.0 × 10⁶ ha in 2021, respectively, with the average loss rates of 0.04% yr⁻¹, 0.03% yr⁻¹ and 0.10% yr⁻¹ over their forest areas in 2000 (Fig. 2b–d). As some of the ITs/PAs overlap each other, we combined the ITs/PAs together; the total forest area in the combined ITs/PAs decreased slightly from 206.4 × 10⁶ ha in 2000 to 204.9 × 10⁶ ha in 2021, an average annual loss rate of 0.08 × 10⁶ ha yr⁻¹ (0.04% yr⁻¹). To our surprise, ITs/PAs together had a slightly larger forest area in 2018–2021 (204.6 × 10⁶ ha) than in 2014–2017 (204.1 × 10⁶ ha), which may be related to the recovery of forests after severe damage in 2015/2016 and, to a much lesser extent, from tree-planting projects. In comparison, the forest area in the non-PAs (Fig. 2e) decreased from 187.8 × 10⁶ ha in 2000 to 161.3 × 10⁶ ha in 2021, with an average annual loss rate of 1.3 × 10⁶ ha yr⁻¹ (0.7% yr⁻¹), about 14 times more than the ITs/PAs. The loss of 27 × 10⁶ ha forest area during 2000–2021 in the non-PAs accounted for ~95% of the total loss of forest area in the BLA over the same period. Fig. 2: Interannual changes of forest areas in the BLA from 2000 to 2021. a–e, Forest areas in the BLA (a), ITs (b), nPAs (c), sPAs (d) and non-PAs (e). f, Forest area map in 2000. g–i, Forest area change rates in ITs (g), nPAs (h) and sPAs (i). Forest area change rates are calculated from annual forest areas between 2000 and 2021 on the basis of linear regression analysis at the 90% confidence level. Geographically, the interannual change of forest area in ITs, nPAs and sPAs from 2000 to 2021 had noticeable spatiotemporal patterns. The trend analysis of forest areas (Fig. 2f–i) showed divergent dynamics among ITs/PAs. ITs/PAs in the southern and eastern portions of the BLA (the 'arc of deforestation') had the largest losses of forest area. The ITs/PAs in the northern portion of the BLA either had no significant change in forest area or had increased forest area. In total, 39.8% of nPAs (59 out of 146 nPAs), 40.4% of ITs (154 out of 387 ITs) and 44.0% of sPAs (84 out of 191 sPAs) had significant forest area loss from 2000 to 2021 (P < 0.1). Spatiotemporal dynamics of primary forest area loss. We used the 2001 forest map as the reference map and identified the first year the forest pixels in 2001 were classified as non-forest pixels during 2002–2021 (Fig. 3a,b) and we counted the number of pixels with a change from forest to non-forest (forest loss) in a year as annual gross forest area loss over the BLA, ITs, PAs and non-PAs from 2002 to 2021 (Fig. 3c–g). The cumulative gross forest area losses during 2002–2021 were 49 × 10⁶ ha for the BLA, including 2.1 × 10⁶ ha for ITs, 1.2 × 10⁶ ha for nPAs, 2.8 × 10⁶ ha for sPAs and 43.1 × 10⁶ ha for non-PAs. The combined ITs/PAs had a 5.9 × 10⁶ ha gross forest area loss from 2002 to 2021, accounting for ~12% of total gross forest area loss in the BLA, which clearly indicates the critical role of the ITs/PAs in forest conservation.
Fig. 3: Annual gross forest area losses in the BLA from 2002 to 2021. a, Spatial distribution of the year a forest pixel in 2001 was classified from forest to non-forest (forest loss) in the ITs/PAs. b, Spatial distribution of the year a forest pixel in 2001 was classified from forest to non-forest (forest loss) outside ITs/PAs. c–g, Annual gross forest area loss in the BLA (c), ITs (d), nPAs (e), sPAs (f) and non-PAs (g). We used the 2001 forest map as the reference and identified the first year that forest pixels became non-forest pixels during 2002–2021. The dark grey pixels in a and b are forest in 2001. The interannual change of gross forest area loss in the BLA (Fig. 3c) during 2002–2021 reveals three interesting results. First, annual gross forest area losses in 2005, 2007, 2010 and 2015 were substantially larger than those in previous and subsequent years. Years 2005, 2007, 2010 and 2015 were characterized by strong El Niño events (Extended Data Fig. 3), high air temperature or severe drought26. Second, annual gross forest area loss in 2013 was the lowest during 2000–2021. Year 2013 was a year with high air temperature without an El Niño or Atlantic Multidecadal Oscillation event. Third, when those years (2005, 2007, 2010, 2013 and 2015) are taken out, the temporal dynamics of annual gross forest area loss show five noticeable phases and each of these phases lasted a few years: (1) increased forest area loss in 2002–2004, (2) reduced forest loss in 2006–2009, (3) increased forest area loss in 2011–2014, (4) decreased forest area loss in 2016–2018 and (5) increased forest area loss in 2019–2021. In 2018–2021, the annual gross forest area loss rates increased 3.6 times in ITs/PAs, larger than the increase in non-PAs (1.6 times), indicating increasing deforestation pressure and an alarming signal in ITs/PAs. Varying effects of ITs/PAs on annual forest area loss. To investigate the effect of ITs/PAs on reducing forest area loss after they were established, we selected those ITs/PAs that were established after 2002 and analysed the average annual gross forest area loss rates before and after their establishment years (Fig. 4). The results show that 72 ITs, 32 nPAs and 38 sPAs had substantially reduced annual forest area loss rates but 49 ITs, 21 nPAs and 27 sPAs still had small to moderate increases in annual forest area loss rates; the other 24 ITs, 2 nPAs and 7 sPAs had no change in forest area loss rates as they had little or no forest area loss (Fig. 4a–c). There was no clear geographical cluster among those ITs/PAs with reduced or increased forest loss rates (Fig. 4d–f). Most of the sPAs in the northern and western BLA had small increases in annual forest area loss rates. In terms of institution and governance, the average annual gross forest area loss rates were reduced substantially for the nPAs (36%, 13.7 × 10³ ha) and the ITs (30%, 10.7 × 10³ ha) but only slightly for the sPAs (5%, 2.7 × 10³ ha; Fig. 4g and Extended Data Fig. 4a). In terms of management objectives, the average annual gross forest area loss rates were reduced substantially for the strict-protection PAs (48%, 8.6 × 10³ ha) and the ITs (30%, 10.7 × 10³ ha) but only moderately for the sustainable-use PAs (11%, 7.7 × 10³ ha; Fig. 4h and Extended Data Fig. 4b). Fig. 4: The effects of ITs/PAs on annual forest area loss rates in the BLA (2001–2021). a–c, Comparison of average annual gross forest area loss rates before and after ITs/PAs establishment (after the year 2002) by ITs (a), nPAs (b) and sPAs (c). 
d–f, Spatial distribution of the changes in annual gross forest area loss rates before and after ITs/PAs establishment by ITs (d), nPAs (e) and sPAs (f). g,h, The changes in average annual gross forest loss rates before and after the ITs/PAs establishment by governance (nPAs, ITs and sPAs) (g) and management (strict protection and sustainable use) (h). All ITs/PAs established before and during 2002 are white polygons. Different deforestation dynamics inside different PA types. Forest area in nPAs with the strict-protection objective (Fig. 5a) varied slightly during 2000–2013 but decreased moderately in 2014–2016, with a net loss of 0.14 × 10⁶ ha (0.45%) from 2000 to 2021. Forest area in the nPAs with the sustainable-use objective (Fig. 5a) decreased continuously from 2000 (31.7 × 10⁶ ha) to 2021 (31.5 × 10⁶ ha), a net loss of 0.22 × 10⁶ ha (0.7%). Forest area in sPAs with the strict-protection objective (Fig. 5b) decreased from 5.91 × 10⁶ ha in 2000 to 5.88 × 10⁶ ha in 2021, a net loss of 0.03 × 10⁶ ha (0.51%). Forest area in sPAs with the sustainable-use objective (Fig. 5b) decreased from 44.65 × 10⁶ ha in 2000 to 43.53 × 10⁶ ha in 2016, a net loss of 1.12 × 10⁶ ha (2.5%) but it had a modest recovery by 2021 (44.19 × 10⁶ ha), thus a net loss of 0.46 × 10⁶ ha (1.0%) from 2000 to 2021. Fig. 5: Annual forest areas and gross forest area loss rates in PAs with different governance and management regimes from 2000 to 2021. a, Annual forest areas in nPAs (strict protection, left Y-axis, and sustainable use, right Y-axis). b, Annual forest areas in sPAs (strict protection, left Y-axis, and sustainable use, right Y-axis). c, Area proportion of annual gross forest area loss to forest area in 2001 in nPAs (strict protection and sustainable use). d, Area proportion of annual gross forest area loss to forest area in 2001 in sPAs (strict protection and sustainable use). Differences in annual gross forest area loss rates were small and not significant between the PAs with different governance and management (Fig. 5c,d). The sPAs with the sustainable-use objective had the highest average gross forest area loss rate (0.31 ± 0.14% yr⁻¹ of the forest area in 2001), followed by nPAs with the strict-protection objective (0.11 ± 0.05% yr⁻¹) and nPAs with the sustainable-use objective (0.09 ± 0.04% yr⁻¹). The sPAs with the strict-protection objective had the lowest average gross forest area loss rate (0.04 ± 0.04% yr⁻¹) due to these PAs being far away from the 'arc of deforestation' and therefore under little deforestation pressure (Fig. 4f). The gross forest area loss rates increased from 2018 to 2021; these losses were probably related to the loosened forest conservation policies during the Bolsonaro presidential administration that began in January 2019 (refs. 17,18,27). Gross forest area loss from 2018 to 2021 increased 1.5 times in nPAs with the strict-protection objective and 5 times in nPAs with the sustainable-use objective. Gross forest area loss from 2018 to 2021 increased 12.4 times in the sPAs with the strict-protection objective and 4.3 times in sPAs with the sustainable-use objective. Discussion. Interannual change of forest area and deforestation. Annual forest maps from our mapping tools, PRODES28 and Global Forest Watch (GFW)29 were evaluated for South America30 and the BLA1, showing higher accuracy in our annual forest maps. The interannual changes of forest area and deforestation during 2000–2017 from these datasets were reported1. 
Here, we extended the data record from 2017 to 2021 (Fig. 6). We compared our forest area with the newly developed MapBiomas31 forest area and they had similar interannual change trends in forest area in the BLA (Extended Data Figs. 5 and 6). During 2000–2020, the average annual forest area from our dataset is ~4% lower than that of MapBiomas and our forest area declined by 8% (32.7 × 10⁶ ha), close to the MapBiomas (7%, 27.9 × 10⁶ ha). As the MapBiomas dataset has not provided annual gross forest area loss data to the public yet, it was not used for detailed analysis in this study. Annual gross forest area loss from our dataset was 1.0 × 10⁶ ha in 2018 and increased to 2.9 × 10⁶ ha in 2021 (Fig. 6a). Annual gross forest area loss from the PRODES deforestation dataset was 0.8 × 10⁶ ha in 2018 and increased to 1.3 × 10⁶ ha in 2021 (Fig. 6b). Compared to the PRODES dataset, annual gross forest area loss from our dataset was higher but had a similar temporal trend. Our previous study explained the differences in forest area loss estimates between the PRODES dataset and our forest map, using annual forest maps in the BLA from 2002 to 2016 (ref. 1). Annual gross forest area losses from the GFW dataset in 2018–2020 were slightly higher than those from our dataset (Fig. 6c). Annual forest area loss from the GFW dataset has been modified in numerous ways and these forest area loss data may not be comparable in those years32. Despite their differences in forest definition, in the satellite image data used and in the forest mapping algorithms, all three datasets report that annual forest area loss in the BLA increased from 2018 to 2021. Together these results provide strong satellite-based evidence of increased forest area loss in 2020/2021. Fig. 6: Comparison of annual gross forest area loss rates from this study, the official Brazilian deforestation dataset PRODES and GFW in the BLA from 2002 to 2021. a, Annual gross forest area loss rates from this study. b, Annual gross forest area loss rates from PRODES. c, Annual gross forest area loss rates from GFW. Effects of ITs/PAs on forest conservation. Several studies analysed the PRODES deforestation datasets and assessed the effects of the ITs/PAs on forest conservation under changing laws, policies and climate change9,33,34. One study found that during 2000–2008 ITs/PAs in the states outside of the 'arc of deforestation' had little impact on deforestation within their boundaries but ITs and nPAs in the 'arc of deforestation' were more effective in reducing deforestation than were sPAs9. Another study reported that 91 sPAs established between 2005 and 2016 reduced deforestation both within their boundaries and in their adjacent surroundings during 2005–2017 (ref. 33). The third study also reported that ITs/PAs reduced deforestation during 2000–2010 in the BLA34. The PAs with the strict-protection objective reduced deforestation more than PAs with the sustainable-use objective and the ITs were particularly effective at reducing deforestation in locations with high deforestation pressure34. Our study uses a longer (2000–2021) and updated dataset to assess the effects of the ITs/PAs for forest conservation in the BLA. Our results over the 2000–2013 period agree with the findings from these previous publications and showed that ITs/PAs reduced deforestation within their boundaries. PAs with the strict-protection objective had small forest area losses in 2000–2013 but large forest area losses in 2013–2021 (Fig. 5). 
ITs/PAs in the states outside of the 'arc of deforestation' reduced deforestation within their borders (Fig. 4), which differs from that of ref. 9, possibly due to the different study periods or to limited data availability in PRODES caused by cloud cover1. Deforestation fronts also reached or encroached into some ITs/PAs (Fig. 3a and Extended Data Fig. 7). Those PAs with large deforestation areas were more likely to experience downgrading, downsizing and degazettement35. To avoid deforestation and reconcile the conflicts between forest conservation and socio-economic development, both intensive agriculture production in the deforested areas and forest restoration projects need to be fully explored. While these measures are important for environmental quality and for generating employment, they cannot be expected to have a 'land sparing' effect in reducing gross deforestation rates19. PAs with the sustainable-use objective, where small resident populations have low-impact uses of natural resources36, are designed to promote conservation. They have been demonstrated to offer a win–win solution for biodiversity conservation and socio-economic development37; however, substantial deviations from the intended low-impact uses have sometimes occurred38. Investigation of various driving factors and development of models that predict the effects of ITs/PAs on reducing deforestation are important for stakeholders and decision-makers. Several studies identified a few driving factors (for example, Indigenous land rights, institutional context and property rights of ITs) and different methods (spatial matching and regression discontinuity model)39,40,41,42. Our exploratory data analyses (Supplementary Information) reveal the importance and limitation of the governance (ITs, nPAs and sPAs), management (strict protection and sustainable use), sizes and locations of the ITs/PAs and forest areas within the ITs/PAs for reducing deforestation. Our preliminary study also highlights the need for causal analysis and more efforts by the research community, stakeholders and decision-makers, more data collection (for example, politics43, socio-economic conditions and management practices in individual ITs/PAs), integrated analytics and models across multiple spatial and temporal scales. Challenges for forest conservation in ITs/PAs. The changes in laws, policies, agriculture (soybean and beef cattle) and climate strongly affect forest conservation44,45 and recent changes in those factors suggest that forest conservation in the ITs/PAs and non-PAs could face increasing challenges in the coming years. As shown in Extended Data Fig. 8, the Action Plan for the Prevention and Control of Deforestation in the Legal Amazon (PPCDAm)23,46, which reduced deforestation, had been implemented since 2004 but was interrupted in 2019 by the Bolsonaro administration. The Forest Code and the credit restrictions to owners of deforested lands implemented by Brazil's Central Bank as of 2008 (ref. 19), which also reduced deforestation, had also been weakened since 2012 (ref. 47). The Bolsonaro administration is well known for its proximity to agribusiness18,48 and since 2019 has implemented various measures that weaken forest policies and laws and impede their enforcement. 
In 2021, there was an even stronger pushback against the Brazilian legal framework governing the PAs: five draft bills (PL 490/2007, PL 191/2020, PL 2633/2020, PL 2159/2021 and PLS 510/2021) would further loosen constraints to the economic activities in the ITs, reduce government authority over PAs and provide incentives for agricultural expansion in these areas. The Brazilian Supreme Court is currently trying a case (RE 1017365) that could restrict the constitutional provisions that favoured the demarcation of Indigenous lands and could result in only areas effectively possessed by Indigenous peoples in 1988 being granted demarcation as Indigenous lands. In addition, as the prices of soybeans and beef increased substantially in 2021 (Extended Data Fig. 9), more deforestation for agriculture expansion is likely to continue. Our study also reveals that annual gross forest area loss rates were high in the El Niño and tropical Atlantic dipole drought years, such as 2005, 2007, 2010 and 2015 (Fig. 3), which may be related to the extensive tree mortality caused by the drought–fire interactions45. Over the period 2002–2021, a total of 10.4 × 10⁶ ha forest was burned in ITs/PAs and 55.1 × 10⁶ ha forest in non-PAs and out of these burned forest areas deforestation occurred in 31% for ITs/PAs and 48% for non-PAs (Extended Data Fig. 10). As the Amazon is projected to have more frequent and severe droughts in the future49, how to manage and protect the remaining intact forests in the ITs/PAs continues to be a major concern. Summary. The results of this study confirm the critical role of ITs/PAs in forest conservation and raise serious concerns about increased deforestation in the BLA in 2018–2021, especially in the ITs/PAs. The large infrastructure projects50 and the severe COVID-19 pandemic could further increase deforestation and forest degradation in the ITs/PAs. Both deforestation and forest degradation had severe potential impacts on biodiversity and carbon stock in the BLA2,51,52. More attention is urgently needed to strengthen the environmental policies and laws, uphold the existing legal protections and resist the changes being staged in Brazil's National Congress. More investments from the Brazilian government, private companies and international organizations for the expansion and management of ITs/PAs are also critically needed53. Further reducing deforestation and forest degradation and supporting forest conservation and Indigenous people could prevent passing the tipping point for the Amazon forest ecosystems to flip into savanna ecosystems54. Methods. Annual forest cover data from MODIS forest mapping tool. We published annual forest maps in the BLA during 2000–2017, which were generated by using the time-series MOD09A1 data product (8 d temporal resolution, 500 m spatial resolution) and the forest mapping algorithm1,2,55. The MOD09A1 8 d composite selects the best-quality observation within each 8 d period. We used cloud-free MOD09A1 observations based on the quality layer. The forest mapping algorithm is mainly based on the unique features of evergreen forest in terms of vegetation greenness, land surface water content and phenology1,2,55, specifically, all good-quality observations for an evergreen forest pixel in a year have the enhanced vegetation index (EVI) ≥ 0.2 and the land surface water index (LSWI) ≥ 0. 
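The per-pixel decision rule just described, together with the refinement given in the validation and methods passage that follows, is simple enough to sketch. The snippet below is an illustrative reimplementation of the stated thresholds, not the authors' code, and the example arrays are hypothetical stand-ins for MOD09A1 observations.

```python
import numpy as np

# Illustrative reimplementation of the stated rule (not the authors' code):
# a pixel counts as evergreen forest in a year when nearly all of its
# good-quality observations have EVI >= 0.2 and LSWI >= 0. The 90% fraction
# and the annual-minimum-LSWI check follow the refinement described in the
# Methods; the example arrays are hypothetical stand-ins for MOD09A1 data.
def is_evergreen_forest(evi, lswi, good_quality, min_fraction=0.90):
    good = np.asarray(good_quality, dtype=bool)
    evi, lswi = np.asarray(evi)[good], np.asarray(lswi)[good]
    if evi.size == 0:
        return False  # no usable observations this year
    meets_rule = (evi >= 0.2) & (lswi >= 0.0)
    return meets_rule.mean() >= min_fraction and lswi.min() >= 0.0

# Hypothetical pixel: 46 eight-day composites, two flagged as cloud contaminated.
evi = np.full(46, 0.45)
lswi = np.full(46, 0.15)
quality = np.ones(46, dtype=bool)
evi[[5, 20]], quality[[5, 20]] = 0.08, False  # contaminated composites are excluded

print(is_evergreen_forest(evi, lswi, quality))  # True for this evergreen-like pixel
```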
Our annual MODIS forest maps in the BLA had high overall accuracy (>97%) when they were validated by three independent ground reference datasets1: (1) 18 5 × 5 km² sample blocks from the Global Land Cover Validation Reference Dataset, which was produced from analyses of very high spatial resolution images at 2 m spatial resolution (a total of 1,268 pixels at the 500 m spatial resolution); (2) 416 10 × 10 km² sample blocks from the TREES-3 forest and non-forest dataset at 30 m spatial resolution (a total of 262,514 pixels at the 500 m spatial resolution); and (3) 1,991 stratified random sample pixels at 500 m spatial resolution for forest changes generated by visual interpretation of time-series Landsat-5 TM, Landsat 7 ETM+ and Landsat 8 OLI images at 30 m spatial resolution. Our annual forest maps have area and spatial distribution similar to the forest maps derived from microwave images1, which are less affected by frequent clouds in the Amazon. In this study, we made a minor improvement to the forest mapping algorithm, as a few pixels were contaminated by clouds or aerosols but may not be detected by the quality layer, which resulted in the EVI values dropping substantially although LSWI and the normalized difference vegetation index (NDVI) did not change much. Thus, we added another criterion to identify evergreen forest, that is, we classified a pixel as evergreen forest when it had >90% cloud-free observations that met the criteria of EVI ≥ 0.2 and LSWI ≥ 0 and had annual minimum LSWI ≥ 0. We analysed MOD09A1 data from February 2000 to December 2021 and we generated the annual evergreen forest maps from 2000 to 2021. We applied a 3 yr consistency check procedure to reduce the potential error in the annual maps of evergreen forest. We used the annual evergreen forest maps to generate two reports. One report is on forest area by year. In this report, we counted primary forest and secondary forest together as forest (no separation into these two categories). The second report is on 'primary forest' (using 2001 as the reference year) and primary forest area loss. Here, we tracked which year deforestation first occurred for individual pixels. It is possible that some of the forest pixels in 2001 were 'reforested or recovered forest'. As we have no data before 2000, we do not know how many pixels there were for this case, thus we kept this caveat during our data analysis and result interpretation. Annual forest area data from the MapBiomas project. The MapBiomas project was launched in 2015 and generates annual land-cover and land-use maps in Brazil31. The algorithm theoretical basis document (ATBD) of MapBiomas presents the cross-reference of the MapBiomas land-cover and land-use classes with classes from other classification systems, including Food and Agriculture Organization of the United Nations (FAO), Brazilian Institute of Geography and Statistics (IBGE) and National Greenhouse Gas Emissions Inventory. The MapBiomas project (Collection 6) uses six steps to generate the annual land-cover and land-use maps by analyses of surface reflectance data from Landsat Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+) and the Operational Land Imager and Thermal Infrared Sensor (OLI-TIRS). The first step is to generate annual Landsat mosaics for specific temporal windows. The second step is to derive the spectral and temporal attributes from Landsat spectral bands to train the Random Forest classifier. 
The third step is to generate annual land-cover and land-use maps in each biome and cross-cutting theme using the Random Forest algorithm (the classification of aquaculture, mining, irrigation, rice and citrus is based on the U-Net convolutional neural network classifier) and training samples. The fourth step is to apply spatiotemporal filters to reduce noise, including gap fill, spatial filter, a 3–5 yr temporal filter, frequency filter and incident filter. The fifth step is to merge the filtered land-cover and land-use maps of each biome and cross-cutting themes and apply the spatiotemporal filters again. The sixth step is the accuracy assessment based on 75,000 independent samples per year from 1985 to 2018. At the level-1 (forest, non-forest natural formation, farming, non-vegetated area and water) classes, the land-cover and land-use maps have 91% global accuracy. We used the level-1 class of forest from the MapBiomas land-cover and land-use maps in this study. For more information about the MapBiomas project see https://mapbiomas-br-site.s3.amazonaws.com/Metodologia/ATBD_Collection_6_v1_January_2022.pdf. Annual forest cover loss data from the GFW. The 30 m GFW (v.1.8) annual forest cover loss in 2001–2020 was generated by using decision-tree algorithms and time-series Landsat images acquired during the growing season29. In terms of the year in which a pixel experienced forest loss, the data producers reported that they are 75% confident that the forest loss occurred within the stated year and 97% confident that it
Environmental Science
Each human body contains a complex community of trillions of microorganisms that are important for your health while you're alive. These microbial symbionts help you digest food, produce essential vitamins, protect you from infection and serve many other critical functions. In turn, the microbes, which are mostly concentrated in your gut, get to live in a relatively stable, warm environment with a steady supply of food. But what happens to these symbiotic allies after you die? As an environmental microbiologist who studies the necrobiome — the microbes that live in, on and around a decomposing body — I've been curious about our postmortem microbial legacy. You might assume that your microbes die with you — once your body breaks down and your microbes are flushed into the environment, they won't survive out in the real world. In our recently published study, my research team and I share evidence that not only do your microbes continue to live on after you die, they actually play an important role in recycling your body so that new life can flourish. Microbial life after death When you die, your heart stops circulating the blood that has carried oxygen throughout your body. Cells deprived of oxygen start digesting themselves in a process called autolysis. Enzymes in those cells — which normally digest carbohydrates, proteins and fats for energy or growth in a controlled way — start to work on the membranes, proteins, DNA and other components that make up the cells. The products of this cellular breakdown make excellent food for your symbiotic bacteria, and without your immune system to keep them in check and a steady supply of food from your digestive system, they turn to this new source of nutrition. Gut bacteria, especially a class of microbes called Clostridia, spread through your organs and digest you from the inside out in a process called putrefaction. Without oxygen inside the body, your anaerobic bacteria rely on energy-producing processes that don't require oxygen, such as fermentation. These processes create the distinctly odorous gases that are the signature of decomposition. From an evolutionary standpoint, it makes sense that your microbes would have evolved ways to adapt to a dying body. Like rats on a sinking ship, your bacteria will soon have to abandon their host and survive out in the world long enough to find a new host to colonize. Taking advantage of the carbon and nutrients of your body allows them to increase their numbers. A bigger population means a higher probability that at least a few will survive out in the harsher environment and successfully find a new body. A microbial invasion If you're buried in the ground, your microbes are flushed into the soil along with a soup of decomposition fluids as your body breaks down. They're entering an entirely new environment and encountering a whole new microbial community in the soil. The mixing or coalescence of two distinct microbial communities happens frequently in nature. Coalescence happens when the roots of two plants grow together, when wastewater is emptied into a river or even when two people kiss. The outcome of mixing — which community dominates and which microbes are active — depends on several factors, such as how much environmental change the microbes experience and who was there first. Your microbes are adapted to the stable, warm environment inside your body where they receive a steady supply of food. 
In contrast, soil is a particularly harsh place to live – it's a highly variable environment with steep chemical and physical gradients and big swings in temperature, moisture and nutrients. Furthermore, soil already hosts an exceptionally diverse microbial community full of decomposers that are well adapted to that environment and would presumably outcompete any newcomers. It's easy to assume that your microbes will die off once they are outside your body. However, my research team's previous studies have shown that the DNA signatures of host-associated microbes can be detected in the soil below a decomposing body, on the soil surface and in graves for months or years after the soft tissues of the body have decomposed. This raised the question of whether these microbes are still alive and active or if they are merely in a dormant state waiting for the next host. Our newest study suggests that your microbes are not only living in the soil but also cooperating with native soil microbes to help decompose your body. In the lab, we showed that mixing soil and decomposition fluids filled with host-associated microbes increased decomposition rates beyond that of the soil communities alone. We also found that host-associated microbes enhanced nitrogen cycling. Nitrogen is an essential nutrient for life, but most of the nitrogen on Earth is tied up as atmospheric gas that organisms can't use. Decomposers play a critical role recycling organic forms of nitrogen such as proteins into inorganic forms such as ammonium and nitrate that microbes and plants can use. Our new findings suggest that our microbes are likely playing a part in this recycling process by converting large nitrogen-containing molecules like proteins and nucleic acids into ammonium. Nitrifying microbes in the soil can then convert the ammonium into nitrate. Next generation of life The recycling of nutrients from detritus, or nonliving organic matter, is a core process in all ecosystems. In terrestrial ecosystems, decomposition of dead animals, or carrion, fuels biodiversity and is an important link in food webs. Living animals are a bottleneck for the carbon and nutrient cycles of an ecosystem. They slowly accumulate nutrients and carbon from large areas of the landscape throughout their lives then deposit it all at once in a small, localized spot when they die. One dead animal can support a whole pop-up food web of microbes, soil fauna and arthropods that make their living off carcasses. Insect and animal scavengers help further redistribute nutrients in the ecosystem. Decomposer microbes convert the concentrated pools of nutrient-rich organic molecules from our bodies into smaller, more bioavailable forms that other organisms can use to support new life. It's not uncommon to see plant life flourishing near a decomposing animal, visible evidence that nutrients in bodies are being recycled back into the ecosystem. That our own microbes play an important role in this cycle is one microscopic way we live on after death. Jennifer DeBruyn is an environmental microbiologist at the University of Tennessee Institute of Agriculture. She studies the decomposition of human and animal mortalities and contaminant biodegradation. She co-directs "Backyard STEM", a curriculum program for Tennessee 4-H agents focused on environmental-science education for youth.
Environmental Science
Discovery of invisible nutrient discharge on Great Barrier Reef raises concerns Scientists using natural tracers off Queensland's coast have discovered the source of previously unquantified nitrogen and phosphorus that are having a profound environmental impact on the Great Barrier Reef. The findings, published today in Environmental Science & Technology, indicate current efforts to preserve and restore the health of the Reef may require a new perspective. Southern Cross University's Dr. Douglas Tait leads the study, "Submarine groundwater discharge exceeds river inputs as a source of nutrients to the Great Barrier Reef." Submarine groundwater discharge is any water released to the ocean below the waterline from various sources, including underground aquifers and the seafloor. The research team, which also includes CSIRO, AIMS and Gothenburg University (Sweden), collected data from offshore transects, rivers and coastal bores in an area from south of Rockhampton to north of Cairns. The use of radium isotopes allowed the scientists to track how much nutrient is transported from the land and shelf sediments via invisible groundwater flows. Southern Cross Professor Damien Maher said the team's work showed that groundwater discharge was 10–15 times greater than river inputs, something previously unaccounted for. "Groundwater discharge accounted for approximately one-third of new nitrogen and two-thirds of phosphorus inputs, indicating that nearly twice the amount of nitrogen enters the Reef from groundwater compared to river waters," Professor Maher said. Professor Maher added that the lion's share of efforts to mitigate the impact of nutrients on the Reef were focused on outflow from river systems. Lead author Dr. Douglas Tait said, "Nutrients are essential to support the incredible biodiversity of the Great Barrier Reef. However, an excess of nutrients can lead to detrimental issues such as harmful algal blooms, crown-of-thorns starfish outbreaks, and fish diseases, which have been on the rise in the Reef over the past few decades. "Our study underscores the need for a strategic shift in management approaches aimed at safeguarding the Great Barrier Reef from the effects of excess nutrients." Dr. Tait said unlike river outflow, nutrients in groundwater could be stored for decades underground before being discharged into coastal waters, meaning research and strategies to protect the Reef needed to be long-term. "This study sheds new light on the complex nutrient dynamics within the Great Barrier Reef," Dr. Tait said. "Our understanding and ability to manage the sources of nutrients is pivotal in preserving the Reef for generations to come." More information: Submarine Groundwater Discharge Exceeds River Inputs as a Source of Nutrients to the Great Barrier Reef, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.3c03725 Journal information: Environmental Science & Technology Provided by Southern Cross University
Environmental Science
DENVER (AP) — Exxon Mobil's scientists were remarkably accurate in their predictions about global warming, even as the company made public statements that contradicted its own scientists' conclusions, a new study says. The study, published Thursday in the journal Science, looked at research that Exxon funded that didn't just confirm what climate scientists were saying, but used more than a dozen different computer models that forecast the coming warming with precision equal to or better than that of government and academic scientists. This was during the same time that the oil giant publicly doubted that warming was real and dismissed climate models' accuracy. Exxon said its understanding of climate change evolved over the years and that critics are misunderstanding its earlier research. Scientists, governments, activists and news sites, including Inside Climate News and the Los Angeles Times, several years ago reported that "Exxon knew" about the science of climate change from about 1977, all while publicly casting doubt. What the new study does is detail how accurate Exxon-funded research was. From 63% to 83% of those projections fit strict standards for accuracy and generally predicted correctly that the globe would warm about 0.36 degrees Fahrenheit (0.2 degrees Celsius) a decade. The Exxon-funded science was "actually astonishing" in its precision and accuracy, said study co-author Naomi Oreskes, a Harvard science history professor. But she added so was the "hypocrisy because so much of the Exxon Mobil disinformation for so many years … was the claim that climate models weren't reliable." Study lead author Geoffrey Supran, who started the work at Harvard and now is an environmental science professor at the University of Miami, said this is different from what was previously found in documents about the oil company. "We've dug into not just the language, the rhetoric in these documents, but also the data. And I'd say in that sense, our analysis really seals the deal on 'Exxon knew'," Supran said. It "gives us airtight evidence that Exxon Mobil accurately predicted global warming years before, then turned around and attacked the science underlying it." The paper quoted then-Exxon CEO Lee Raymond in 1999 as saying future climate "projections are based on completely unproven climate models, or more often, sheer speculation," while his successor in 2013 called models "not competent." Exxon's understanding of climate science developed along with the broader scientific community, and its four decades of research in climate science resulted in more than 150 papers, including 50 peer-reviewed publications, said company spokesman Todd Spitler. "This issue has come up several times in recent years and, in each case, our answer is the same: those who talk about how 'Exxon Knew' are wrong in their conclusions," Spitler said in an emailed statement. "Some have sought to misrepresent facts and Exxon Mobil's position on climate science, and its support for effective policy solutions, by recasting well intended, internal policy debates as an attempted company disinformation campaign." Exxon, one of the world's largest oil and gas companies, has been the target of numerous lawsuits that claim the company knew about the damage its oil and gas would cause to the climate, but misled the public by sowing doubt about climate change. In the latest such lawsuit, New Jersey accused five oil and gas companies including Exxon of deceiving the public for decades while knowing about the harmful toll fossil fuels take on the climate.
Similar lawsuits from New York to California have claimed that Exxon and other oil and gas companies launched public relations campaigns to stir doubts about climate change. In one, then-Massachusetts Attorney General Maura Healey said Exxon’s public relations efforts were “ reminiscent of the tobacco industry’s long denial campaign about the dangerous effects of cigarettes.” Oreskes acknowledged in the study that she has been a paid consultant in the past for a law firm suing Exxon, while Supran has gotten a grant from the Rockefeller Family Foundation, which has also helped fund groups that were suing Exxon. The Associated Press receives some foundation support from Rockefeller and maintains full control of editorial content. Oil giants including Exxon and Shell were accused in congressional hearings in 2021 of spreading misinformation about climate, but executives from the companies denied the accusations. University of Illinois atmospheric scientist professor emeritus Donald Wuebbles told The Associated Press that in the 1980s he worked with Exxon-funded scientists and wasn’t surprised by what the company knew or the models. It’s what science and people who examined the issue knew. “It was clear that Exxon Mobil knew what was going on,’’ Wuebbles said. “The problem is at the same time they were paying people to put out misinformation. That’s the big issue.” There’s a difference between the “hype and spin” that companies do to get you to buy a product or politicians do to get your vote and an “outright lie … misrepresenting factual information and that’s what Exxon did,” Oreskes said. Several outside scientists and activists said what the study showed about Exxon actions is serious. “The harm caused by Exxon has been huge,” said University of Michigan environment dean Jonathan Overpeck. “They knew that fossil fuels, including oil and natural gas, would greatly alter the planet’s climate in ways that would be costly in terms of lives, human suffering and economic impacts. And yet, despite this understanding they choose to publicly downplay the problem of climate change and the dangers it poses to people and the planet.” Cornell University climate scientist Natalie Mahowald asked: “How many thousands (or more) of lives have been lost or adversely impacted by Exxon Mobil’s deliberate campaign to obscure the science?” Critics say Exxon’s past actions on climate change undermine its claims that it’s committed to reducing emissions. After tracking Exxon’s and hundreds of other companies’ corporate lobbying on climate change policies, InfluenceMap, a firm that analyzes data on how companies are impacting the climate crisis, concluded that Exxon is lobbying overall in opposition to the goals of the Paris Agreement and that it’s currently among the most negative and influential corporations holding back climate policy. “All the research we have suggests that effort to thwart climate action continues to this day, prioritizing the oil and gas industry value chain from the “potentially existential” threat of climate change, rather than the other way around,” said Faye Holder, program manager for InfluenceMap. “The messages of denial and delay may look different, but the intention is the same.” ___ Bussewitz reported from New York. 
Environmental Science
Communities of color disproportionately exposed to 'forever chemicals' in drinking water: study Residents of communities with bigger Black and Hispanic populations are more likely to be exposed to harmful levels of "forever chemicals" in their water supplies, a new study has found. This increased risk of exposure is the result of the disproportionate placement of pollution sources near watersheds that serve these communities, according to the study, published on Monday in Environmental Science & Technology. Some such sites known to discharge per- and polyfluoroalkyl substances (PFAS) include industrial manufacturers, airports, military bases, wastewater treatment plants and landfills, per the study. "Our work suggests that the sociodemographic groups that are often stressed by other factors, including marginalization, racism, and poverty, are also more highly exposed to PFAS in drinking water," first author Jahred Liddie, a PhD student at the Harvard Chan School of Public Health, said in a statement. "Environmental justice is a major emphasis of the current administration and this work shows it should be considered in the upcoming regulations for PFAS in drinking water," Liddie added. Known for its presence in both industrial discharge and in firefighting foam used to combat jet fuel fires, PFAS is common in waste released by military bases, civilian and manufacturing facilities. These compounds — of which there are thousands — linger in both the human body and in the environment and have been linked to a variety of illnesses and cancers. To draw their conclusions, the Harvard researchers used PFAS monitoring data from 7,873 community water systems in 18 states where such information is widely available. Their analysis included 44,111 samples collected between January 2016 and August 2022. The researchers discovered that PFAS detection had a positive association with the number of PFAS sources and proportions of people of color served by a given water system. Each additional industrial site, military fire training area and airport in a community's watershed was linked to a 10-108 percent hike in levels of PFOA and a 20-34 percent increase in levels of PFOS, according to the study. PFOA and PFOS are two of the most notorious and well-studied types of PFAS. Senior author Elsie Sunderland, a Harvard professor of environmental chemistry, said she found the results particularly concerning because previous research has shown that "marginalized populations are susceptible to greater risks of adverse health outcomes compared to other populations." "Regulating releases from PFAS sources and ensuring that people have safe drinking water is especially important in the most vulnerable communities to protect public health," Sunderland added.
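The kind of association described above, in which PFAS levels are related to counts of nearby sources and the demographics of the population served, can be sketched with a small, hedged example. The data, column names and model form below are illustrative assumptions, not the authors' code or dataset.

```python
# Hedged sketch of an association analysis between PFAS levels, source counts and
# demographics. All values are made up for illustration; this is not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical water-system-level table (PFOA in ng/L).
df = pd.DataFrame({
    "pfoa_ng_l":    [4.1, 2.3, 7.8, 1.2, 5.5, 3.0],
    "n_industrial": [2, 0, 3, 0, 1, 1],
    "n_airports":   [1, 0, 1, 0, 0, 1],
    "pct_hispanic": [0.35, 0.10, 0.42, 0.08, 0.25, 0.15],
    "pct_black":    [0.20, 0.05, 0.18, 0.03, 0.12, 0.10],
})

X = sm.add_constant(df[["n_industrial", "n_airports", "pct_hispanic", "pct_black"]])
fit = sm.OLS(np.log(df["pfoa_ng_l"]), X).fit()

# On a log scale, exp(coefficient) - 1 approximates the percent change in PFOA
# associated with one additional source (or a one-unit change in a covariate).
print((np.exp(fit.params) - 1).round(2))
```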
Environmental Science
Gas stoves in California homes are leaking cancer-causing benzene, researchers found in a new study published on Thursday, though they say more research is needed to understand how many homes have leaks. In the study, published in Environmental Science and Technology on Thursday, researchers also estimated that over 4 tons of benzene per year are being leaked into the atmosphere from outdoor pipes that deliver the gas to buildings around California - the equivalent to the benzene emissions from nearly 60,000 vehicles. And those emissions are unaccounted for by the state. The researchers collected samples of gas from 159 homes in different regions of California and measured to see what types of gases were being emitted into homes when stoves were off. They found that all of the samples they tested had hazardous air pollutants, like benzene, toluene, ethylbenzene and xylene (BTEX), all of which can have adverse health effects in humans with chronic exposure or acute exposure in larger amounts. Of most concern to the researchers was benzene, a known carcinogen that can lead to leukemia and other cancers and blood disorders, according to the National Cancer Institute. The finding could have major implications for indoor and outdoor air quality in California, which has the second highest level of residential natural gas use in the United States.
"What our science shows is that people in California are exposed to potentially hazardous levels of benzene from the gas that is piped into their homes," said Drew Michanowicz, a study co-author and senior scientist at PSE Healthy Energy, an energy research and policy institute. "We hope that policymakers will consider this data when they are making policy to ensure current and future policies are health-protective in light of this new research." Homes in the Greater Los Angeles, the North San Fernando Valley, and the Santa Clarita Valley areas had the highest benzene in gas levels. Leaks from stoves in these regions could emit enough benzene to significantly exceed the limit determined to be safe by the California Office of Environmental Health Hazard Assessment. This finding in particular didn't surprise residents and health care workers in the region who spoke to The Associated Press about the study. That's because many of them experienced the largest-known natural gas leak in the nation in Aliso Canyon in 2015. Back then, 100,000 tons of methane and other gases, including benzene, leaked from a failed well operated by Southern California Gas Co. It took nearly four months to get the leak under control and resulted in headaches, nausea and nose bleeds. Dr. Jeffrey Nordella was a physician at an urgent care in the region during this time and remembers being puzzled by the variety of symptoms patients were experiencing. "I didn't have much to offer them," except to help them try to detox from the exposures, he said. That was an acute exposure of a large amount of benzene, which is different from chronic exposure to smaller amounts, but "remember what the World Health Organization said: there's no safe level of benzene," he said. Kyoko Hibino was one of the residents exposed to toxic air pollution as a result of the Aliso Canyon gas leak. After the leak, she started having a persistent cough and nosebleeds and eventually was diagnosed with breast cancer, which has also been linked to benzene exposure. Her cats also started having nosebleeds and one recently passed away from leukemia. "I'd say let's take this study really seriously and understand how bad (benzene exposure) is," she said.
Environmental Science
Rainwater should be considered unsafe to drink everywhere on Earth due to the levels of "forever chemicals" in the environment, a new study suggests. This includes even the most remote areas, like Antarctica and the Tibetan plateau. Forever chemicals are a nickname for per- and polyfluorinated alkyl substances (PFAS) because they last so long in the environment. These man-made chemicals have been used in industry and consumer products worldwide since the 1950s, including non-stick cookware, stain-resistant fabrics, some cosmetics, and even some firefighting foams. These persistent chemicals have been linked to a wide range of harmful health effects, including the potential for an increased risk of cancer, high blood pressure or pre-eclampsia in pregnant women, high cholesterol, and immune system issues. The new research, published on Aug. 2 in Environmental Science & Technology, states that PFAS can be found in rainwater and snow in all corners of the Earth. The study focused on four types of PFAS, including the possibly cancer-causing perfluorooctanoic acid (PFOA). This chemical is no longer made in the U.S., but people can still be exposed to it, as the researchers noted. "Based on the latest U.S. guidelines for PFOA in drinking water, rainwater everywhere would be judged unsafe to drink," Ian Cousins, the lead author of the study and professor at the Department of Environmental Science at Stockholm University, said in a statement. "Although in the industrial world we don't often drink rainwater, many people around the world expect it to be safe to drink and it supplies many of our drinking water sources," Cousins added. The Stockholm University team conducted laboratory and fieldwork on the atmospheric presence and transport of PFAS for the past decade. They noted that the levels of some harmful PFAS in the atmosphere are not notably declining, despite their phase-out by various manufacturers — including major manufacturer 3M, which phased out PFOA and another type of forever chemical 20 years ago. The study authors concluded that the planetary PFAS boundary "has been exceeded." "Irrespective of whether or not one agrees with our conclusion that the planetary boundary for PFAS is exceeded, it is nevertheless highly problematic that everywhere on Earth where humans reside recently proposed health advisories cannot be achieved without large investment in advanced cleanup technology," the study authors wrote. They warned that problems associated with forever chemicals "are likely to be only the tip of the iceberg" given that there are many thousands of PFAS — "and the risks associated with most of them are unknown." The researchers called for rapidly restricting the uses of PFAS wherever possible. "It cannot be that some few benefit economically while polluting the drinking water for millions of others, and causing serious health problems," Dr.
Jane Muncke with the Food Packaging Forum Foundation in Zürich, Switzerland, and not involved in the work, said in a statement.  "The vast amounts that it will cost to reduce PFAS in drinking water to levels that are safe based on current scientific understanding need to be paid by the industry producing and using these toxic chemicals," Muncke added. "The time to act is now." EPA’s strategy to regulate toxic ‘forever chemicals’ In late 2021, the EPA launched a broad strategy to limit pollution from these forever chemicals increasingly turning up in public drinking water systems, private wells, and food. The Defense Department said it would work to assess and clean up PFAS-contaminated sites throughout the country, while the Food and Drug Administration expanded testing of the food supply to estimate Americans’ exposure to PFAS from food. The Agriculture Department also said it would boost efforts to prevent and address PFAS contamination in food. The series of actions, announced in October, is intended to restrict PFAS from being released into the environment, accelerate cleanup of PFAS-contaminated sites such as military bases, and increase investments in research to learn more about where PFAS are found and how their spread can be prevented. Most recently, the EPA issued new advisories on PFAS in drinking water in June that lower the level that regulators consider safe for certain forever chemicals, including PFOA. The agency also encouraged states to apply for a portion of $1 billion in grant funding as part of the 2021 infrastructure law to address this issue in drinking water, specifically in "small and disadvantaged communities." "This is a bold strategy that starts with immediate action" and includes additional steps "that will carry through this first term" of President Joe Biden, EPA head Michael Regan said in a previous interview with The Associated Press. "We’re going to use every tool in our toolbox to restrict human exposure to these toxic chemicals.″ This story was reported from Cincinnati. The Associated Press contributed.
Environmental Science
The world lost 1,453 square kilometers (561 square miles) of salt marsh between 2000 and 2019, an area twice the size of Singapore, according to a new study based on satellite imagery. In addition to providing wildlife habitat and numerous ecosystem services, salt marshes store a great deal of carbon. Salt marsh loss resulted in 16.3 teragrams, or 16.3 million metric tons, of carbon emissions per year, according to the study. That's the rough equivalent of the output of around 3.5 million cars. Climate change is one of the greatest threats to marshes. Other contributors to their global decline include conversion to aquaculture, coastal erosion, eutrophication, drainage, mangrove encroachment and invasive species. An area of salt marsh twice the size of Singapore has disappeared since the turn of the century, NASA scientists determined by analyzing satellite images from around the globe. Severe storms were partially responsible for the loss, which resulted in "significant" carbon emissions, according to a recent study based on the maps. The study, published in the journal Nature in late November, showed that the world lost 2,733 square kilometers (1,055 square miles) of marsh over the 19-year period between 2000 and 2019 and recovered 1,278 km2 (493 mi2), some as a result of restoration by people. This resulted in a net loss of 1,453 km2 (561 mi2). Globally, salt marshes declined at a rate of 0.28% per year, according to the study. Previously, up-to-date information on the rates and "hotspots" of salt marsh loss at the global level was limited, as were estimates of the resulting carbon emissions, Anthony Campbell, the paper's lead author, told Mongabay in an interview. Past estimates suggested much higher salt marsh losses of between 1% and 2% per year. A salt marsh in Maryland, U.S. North America is a hotspot of salt marsh loss, due in large part to severe storms. Photo courtesy of Chesapeake Bay Program via Flickr (CC BY-NC 2.0). "We utilized the Landsat record to get at those numbers and then do an updated carbon budget for salt marshes, showing a decline in loss, but still significant emissions in salt marsh environments," Campbell, a postdoctoral fellow in NASA's Biospheric Sciences Laboratory in Maryland, said. According to the study, salt marsh loss resulted in 16.3 teragrams, or 16.3 million metric tons, of carbon emissions per year. That's the rough equivalent of the annual output of around 3.5 million cars. The loss of marsh also reduced its capacity to store carbon. The results are "sobering," Peter Macreadie, who was not involved in the study and is director of the Blue Carbon Lab at Deakin University in Burwood, Australia, told Mongabay in an email. They provide clarity about salt marsh distribution and extent, he said. "Unfortunately, they are losing ground, and future climate change will make these ecosystems more vulnerable." Salt marshes: "Unsung heroes" Salt marshes are coastal wetlands and depend on tidal flows to maintain a delicate balance of sea water. Found mainly in temperate regions around the globe, marshes are "unsung heroes" in providing a range of ecosystem services, Macreadie said. "This humble coastal vegetated [ecosystem] provides critical support for fish and fisheries; it provides habitat for some of the rarest species on the planet; it helps stabilize shorelines and prevents erosion; and it is one of the most efficient and long-term carbon sinks on the planet," he wrote.
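The car-equivalence figure quoted above can be sanity-checked with a rough back-of-the-envelope calculation. Two assumptions are made here that are not stated in the study: that the 16.3-million-tonne figure is expressed as CO2-equivalent, and that a typical passenger vehicle emits roughly 4.6 metric tons of CO2 per year (a commonly cited EPA estimate).

```python
# Rough check of the "around 3.5 million cars" comparison.
# Assumptions (not from the study): the 16.3 Tg/yr figure is CO2-equivalent, and a
# typical passenger vehicle emits about 4.6 metric tons of CO2 per year (EPA estimate).
annual_emissions_t = 16.3e6   # metric tons per year
per_car_t = 4.6               # metric tons CO2 per car per year
print(f"{annual_emissions_t / per_car_t:,.0f} cars")  # ~3.5 million cars
```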
Alongside other coastal ecosystems, however, salt marshes are under pressure from human activities. A study released earlier this year estimated that only 15.5% of the world’s coastlines remain unaffected by human interference; a previous study showed that salt marshes in particular have lost 25% to 50% of their historic global coverage. Climate change is one of the greatest threats they face. Sea level rise can outpace the ability of salt marshes to adapt or push them further inland, where human-made barriers can block their expansion and ultimately reduce their extent in a process known as coastal squeeze. Other threats contributing to the global marsh decline include conversion to aquaculture, coastal erosion, eutrophication, drainage, mangrove encroachment and invasive species. By analyzing historical satellite images and comparing them over time, Campbell and his team found that the majority — 64% — of salt marsh losses occurred in the U.S. and Russia, often as a result of hurricanes and coastal erosion. The largest loss of marshes documented in any place and time occurred in North America between 2005 and 2009: about 283 km2 (109 mi2) succumbed, much of it in areas battered by hurricanes. Uncertainties remain, such as the full extent of salt marsh ecosystems around the world, whether factors in addition to coastal erosion explain marsh loss in the Arctic, and the extent of mangrove encroachment in areas such as Oceania. Further analyses of satellite imagery are needed to fill such knowledge gaps, Campbell said. Yellow broomrape (Cistanche phelypaea) in flower at a salt water marsh in Portugal. Image by Cycling Man via Flickr (CC BY-NC-ND 2.0). Protecting salt marshes Salt marshes are recognized for their potential role as a nature-based solution to climate change, as well as a way to mitigate some its effects. Efforts are underway to restore and conserve them, as well as other coastal environments, to harness their ecosystem services. Restoring tidal flows, returning vegetation and reducing coastal erosion are some of the techniques people are implementing. Currently, it’s difficult to tell whether marshes collectively still serve as a carbon sink or are undergoing enough loss to become a net carbon emitter, Campbell said. Whether humanity will be able to turn around the global decline to return them definitively to being a carbon sink remains unclear. One reason is the uncertainties about marshes’ carbon-storage capacity, Campbell said. William Austin, an expert on blue carbon at the University of St Andrews, said it is “speculative” that improving the condition of marshes through “management interventions” would reduce emissions and strengthen marshes’ ability to store carbon. Even so, he said the paper does stake a claim for protecting salt marshes. “While we still probably lack adequate monitoring data to support the implementation of such policy change at scale, this seems a very significant Nature-based Solution that would both support the case for carbon AND nature,” he wrote in an email. Tracking salt marshes and other coastal environments, such as seagrass meadows, with satellite technology is essential to “safely and consistently monitor these important blue carbon systems,” Campbell said. “With the spatial awareness of where they are and how they’re changing, we have a better ability to protect them and address areas that are hotspots of change,” he said. Aerial view of a salt marsh in New Jersey, U.S. Photo by James Loesch via Flickr (CC BY 2.0). 
Banner image: Black-bellied whistling ducks among water lilies in a salt marsh in Texas, U.S. Image by Scott Sanford via Flickr (CC BY-NC-SA 2.0). Citations: Campbell, A. D., Fatoyinbo, L., Goldberg, L., & Lagomasino, D. (2022). Global hotspots of salt marsh change and carbon emissions. Nature. doi:10.1038/s41586-022-05355-z Crooks, S., Herr, D., Tamelander, J., Laffoley, J., & Vandever, J. (2011). Mitigating Climate Change through Restoration and Management of Coastal Wetlands and Near-shore Marine Ecosystems: Challenges and Opportunities. Retrieved from The World Bank website: https://documents1.worldbank.org/curated/en/847171468313810664/pdf/605780REPLACEM10of0Coastal0Wetlands.pdf Williams, B. A., Watson, J., Beyer, H., Klein, C., Montgomery, J., Runting, R., … Wenger, A. (2021). The global rarity of intact coastal regions. Conservation Biology. doi:10.1101/2021.05.10.443490 Mcowen, C., Weatherdon, L., Bochove, J., Sullivan, E., Blyth, S., Zockler, C., … Fletcher, S. (2017). A global map of saltmarshes. Biodiversity Data Journal, 5, e11764. doi:10.3897/bdj.5.e11764 Fagherazzi, S., Anisfeld, S. C., Blum, L. K., Long, E. V., Feagin, R. A., Fernandes, A., … Williams, K. (2019). Sea level rise and the dynamics of the marsh-upland boundary. Frontiers in Environmental Science, 7. doi:10.3389/fenvs.2019.00025
Environmental Science
Rubber plumbing seals can leak additives into drinking water, study says As drinking water flows through pipes and into a glass, it runs against the rubber seals inside some plumbing devices. These parts contain additives that contribute to their flexibility and durability, but these potentially harmful compounds can leak into drinking water, according to a small-scale study in Environmental Science & Technology Letters. The authors report that the released compounds, which are typically linked to tire pollution, also transformed into other unwanted byproducts. To enhance rubber's strength and durability, manufacturers typically mix in additives. Scientists have shown that tire dust can transport these substances, such as 1,3 diphenylguanidine (DPG) and N-(1,3-dimethylbutyl)-N'-phenyl-1,4-benzenediamine (6PPD), into waterways. DPG and 6PPD have also been detected in drinking water samples, though it's unclear how the compounds got there. In previous research, Shane Snyder and Mauricius Marques dos Santos found that these rubber additives can react with disinfectants in simulated drinking water. Their lab tests generated a variety of chlorinated compounds, some of which could damage DNA. Now, the team wanted to assess whether real-world rubber plumbing fixtures can release DPG and 6PPD and form chlorinated byproducts in drinking water samples. In this pilot study, the team collected tap water from 20 buildings and detected polymer additives at parts per trillion levels in every sample. The researchers explain that these compounds are not currently regulated, but the measured levels are potentially concerning, based on their previous study's results from human cell bioassays. The samples from faucets with aerators contained the highest total amounts. All of the samples contained DPG and one of its chlorinated byproducts, whereas 6PPD and two other chlorine-containing compounds were each found in fewer than five samples. This is the first report of chlorinated DPG byproducts in drinking water, according to the researchers. To see if these compounds could have come from plumbing fixtures, the team tested rubber O-rings and gaskets from seven commercial devices, including faucet aerators and connection seals. In the experiment, the rings sat in water with or without chlorinated disinfectants for up to two weeks. Most of the seals, except for the silicone-based ones, released DPG and 6PPD additives. Additionally, plumbing pieces sitting in disinfectant-treated water generated chlorinated forms of DPG in amounts that were consistent with those observed in the drinking water samples. Because some of the rubber plumbing seals released DPG and 6PPD, the researchers say that drinking water, as well as tire pollution, could be a route of human exposure to these compounds. More information: Occurrence of Polymer Additives 1,3-Diphenylguanidine (DPG), N‑(1,3-Dimethylbutyl)‑N'‑phenyl-1,4-benzenediamine (6PPD), and Chlorinated Byproducts in Drinking Water: Contribution from Plumbing Polymer Materials, Environmental Science & Technology Letters (2023). DOI: 10.1021/acs.estlett.3c00446. pubs.acs.org/doi/abs/10.1021/acs.estlett.3c00446 Journal information: Environmental Science & Technology Letters Provided by American Chemical Society
Environmental Science
The Constant Evolution of Education for Sustainability Professionals During my time as an EPA staffer and then consultant from 1977 through the 1990s, I started to develop an understanding of the educational needs of environmental professionals. I began putting that understanding to work, creating graduate programs in environment and sustainability policy and management. I started in 1987, when I became the Associate Dean of Faculty at Columbia’s School of International and Public Affairs (SIPA), and was able to launch an environmental policy concentration. That concentration continues today as a concentration in Energy and Environment. A unique aspect of that program in the early days was a course that was team-taught by several prominent environmental scientists from Columbia’s Lamont Doherty Earth Observatory. That course was entitled “Environmental Science for Policymakers,” and it was an innovative and important breakthrough. During my time at EPA, I came to learn the importance of environmental science and the general ignorance that most environmental policymakers and policy analysts (me included) had of the basics of toxicology, hydrology, geology, climate science, ecology, and earth systems science. The environmental science our faculty taught to students of environmental policy and management was not the same as that taught to future environmental scientists; instead, it was the science most relevant to policy and management decision-making. In 2002, building on what we learned fielding the environmental policy concentration, we established a Master of Public Administration program specifically for environmental policy analysts and managers: the MPA in Environmental Science and Policy. It featured a summer of environmental science, along with the typical set of MPA courses in quantitative policy analysis, economics, management, and financial management taught during the fall and spring semesters. The core MPA courses taught traditional concepts but focused cases and data sets on environmental issues. The “Environmental MPA” also featured a three-semester workshop sequence that began with a two-semester management simulation that emphasized science communication in the summer and program start-up and management in the fall. In the spring semester, we engaged in pro-bono projects for environmental government and nonprofit agencies. The entire sequence was based on experiential learning or learning by doing. Today, over two decades later, our program has evolved significantly, but we continue to offer the same basic degree in environmental science and policy. We have undertaken hundreds of workshop projects since the start of SIPA’s environmental policy program. In recent years, we have made the curriculum a little more flexible by allowing students to enroll in four electives in the fall and spring semesters. Columbia now offers hundreds of courses in environmental policy and sustainability management each year, and our SIPA students can shape part of their program to focus on issues like energy, climate, water, agriculture, and many other topics, including sustainable fashion. The program we created at SIPA requires an in-depth, full-time commitment of twelve months to complete a 54-credit Master’s degree in three intense semesters. That cohort program is very successful but requires single-minded commitment and focus. 
Around 2008 and 2009, while America endured a “great recession,” I started to work with colleagues at Columbia’s Earth Institute to develop a program designed for working professionals interested in pivoting to the emerging field of Sustainability Management. In discussions with SIPA’s Deans, I was advised that such a degree would be best placed in Columbia’s then-new School of Continuing Education, now called the School of Professional Studies. That school was designed to meet the educational needs of working professionals. The MS in Sustainability Management degree would offer all its classes in the evening and would have only two required classes—a broad survey of sustainability management at the start and a client-based pro-bono capstone workshop at the end. In between, students would complete course requirements in five fields of study: 1. Interdisciplinary topics in sustainability management; 2. Economics and quantitative analysis; 3. The physical dimensions of sustainability; 4. Public policy, and 5. Management and finance. Over the past three years, this program has doubled in size and now enrolls 450 students. While initially, most of the students were working professionals attending part-time, today, about 40% attend full-time. Many of the full-time students are international students who must attend full-time to obtain a student visa. This semester, we are offering about 50 courses on topics ranging from life cycle analysis to complying with carbon disclosure rules to reversing the biodiversity crisis to geographies of environmental justice. During our first fall semester in 2010, we offered only 11 courses. Every Tuesday night at 6:10 PM I teach about 115 students the basics of sustainability management and remain excited to learn from my students while teaching that required course. I now think of the field that I work in as Environmental Sustainability: a subfield of the broader field of Sustainability Management. The field of Sustainability Management has evolved into a field of management that continues to study how to preserve the planet while producing wealth, but it also now studies how to preserve organizations and communities and ensure that merit and brainpower are rewarded while harm to host communities is reduced. Corporations all over the world are including sustainability considerations as part of routine management, and as I predicted in my 2011 book Sustainability Management: today’s definition of effective management includes sustainability management. The two terms are synonymous. Effective managers focus on energy efficiency and their carbon and environmental impacts, or they end up paying hundreds of millions of dollars in clean-up costs and liability to communities like East Palestine, Ohio. They focus on their political impact on local communities or, like Amazon, they are forced to abandon their plans for HQ2 in Long Island City and find themselves exiled to Crystal City, Virginia. We live on an increasingly crowded and interconnected planet in a global economy that is not going away. Management requires care and precision. It calls for brains and strategy, not muscle and brute force. As educators, my colleagues and I are required to anticipate how society, culture, technology, and governance are evolving. This year, we have added three courses on corporate sustainability reporting, including a law course on the new carbon disclosure rules that are coming and a course on managing corporate sustainability reporting. 
These were added to our long-standing course on how to conceptualize and write ESG reports. Our courses on greenhouse gas measurement and life cycle analysis continue to evolve, along with courses on start-ups, sustainability finance, and the circular economy. The two Master’s programs I direct now have over three thousand graduates, and many of those alums are leaders in the field of environmental sustainability and sustainability management. Some provide me with welcomed, needed advice on emerging issues our curriculum needs to address. Our students are also free with their advice, and our faculty take all these proposals seriously. The issues of environmental sustainability are changing daily. Looking back over the decades, the only constant is change. I remember in 2001 when New York City’s final landfill at Freshkills, Staten Island, closed, and we met with engineering colleagues to learn about waste-to-energy plants and anaerobic digestion for food waste. Today, food recycling is mandatory in Brooklyn and Queens, and waste mining that utilizes robots and artificial intelligence is showing great promise as we work to replace home waste sorting with automated sorting. Electric vehicles that seemed like a fantasy are being adopted at a pace no one predicted a decade ago. Battery technology and renewable energy technology continue to advance. In a field like environmental policy and sustainability management, we need to keep current and look ahead to attempt to predict where we may be going. Of course, while keeping up with a changing world, we still need to teach our students some of the basics of management, finance, and public policy, which remain unchanged. Half of my course in sustainability management is devoted to the basics of organizational management that my colleague Bill Eimicke and I wrote about in our recent book Management Fundamentals. The other half is devoted to cases and concepts in sustainability management. Students find management studies simplistic, but when we discuss organizational behavior, learn that inadequate management is quite common. As I often say: “If management’s so simple, why are so many managers so bad at it?” In addition to management fundamentals, there are analytic techniques from economics, political science, law, sociology, and history that are central to effective sustainability management. It is this combination of timeless concepts and an understanding of the contemporary world that presents a constant challenge to sustainability educators. A balance is needed between the lessons that will help our graduates in their next job and the concepts that will remain central throughout their professional careers. Life-long learning is required to respond to a changing world, but so, too, is the reinforcement of fundamental ethical principles and values and a basic understanding of human and organizational behavior. Education requires both change and stability and the quest for balance never ends.
Environmental Science
In a new article published in the journal Biological Conservation, a team of researchers from the University of Helsinki, University of Jyväskylä, and University of Kent assessed preferences and motivations for owning exotic pets, by asking more than 300 keepers across 33 countries in an anonymous survey translated in 6 different languages. Overall, the study found that exotic pet keepers were concerned about species conservation and preferred captive-bred exotic pets and/or species that were commonly found in the wild and available in the market, suggesting that respondents’ preferences may be aligned with at least some conservation objectives (e.g., sustainable use). Moreover, while respondents favoured rare aesthetic or morphological traits, they disregarded animals of wild origin, under higher risk of extinction, and under trade restrictions. Passionate about the species The study also found that the most important reasons of exotic pet keeping were relational motivations, such as caring about the exotic pet, as well as learning and being passionate about the species. “Keepers may establish emotional relationships with their exotic pets and may be concerned by the fact that their interest and care does not impact, but instead supports the conservation of the species in the wild” says Dr Anna Hausmann, a conservation scientist leading the study. “However, while respondents showed feelings of care, interest, and responsibility towards the conservation of exotic pet species, practices of breeding, trading, keeping and other close contact opportunities (e.g., exotic pet cafes) present several conservation and animal welfare challenges, which can potentially threaten both species and people’s well-being (e.g., spread of zoonotic disease)”, she continues, “keeping exotic pets may represent a way people express, and practice, care towards other-than-human natures, which however may not be aligned with conservation goals”. The study used an online survey where respondents were asked to choose the most and least preferred characteristics in various combinations of hypothetical exotic pets for sale, and indicate their motivations to acquire them. “The study followed state of the art methodologies for assessing preferences using experimental designs, resulting in the likelihood that each characteristic could be chosen as best or worst when acquiring exotic pets” says Iain Fraser, Professor of Agri-Environmental Economics at the University of Kent, who co-authored the study. Rarity fascinates Rare attractive aesthetic features of species were sought after by consumers, and respondents supported captive breeding of species as a source for exotic pets. However, the combined preference for rare aesthetic features and for captive-bred animals may lead to the deliberate selection of individual animals for breeding purposes based on specific traits through intensive breeding, in which animals are potentially taken from the wild, or artificially selected for rare aesthetics that do not exist in the wild. “Certification systems of origin that supports animal welfare and conservation may be one option to help support a more sustainable trade in exotic pet species” says Associate Professor Enrico Di Minin, the senior author of the study, who leads the Helsinki Lab of Interdisciplinary Conservation Science at the University of Helsinki, and is the receiver of a European Research Council grant that supported the study. 
“However, attention should be paid to challenges throughout the supply chain and not to incentivize consumers’ preferences for rare genetic features as this may pose a risk to the conservation of species in the wild” he points out. Feelings of care and curiosity and being passionate about the species were dominant motivations for keeping exotic pets. “In order to enhance conservation of exotic pet species and people’s well-being, there is need to explore alternative ways of conceiving and practicing how people care about non-human natures” says Dr. Gonzalo Cortés-Capano a research Fellow at the School of Resource Wisdom, University of Jyväskylä, who co-authored the paper. “Care, as embodied and practiced in the context of human-exotic pet relations, can act as an important motivation for stewardship, supporting conservation goals if redirected towards caring about species in their own habitats. Existing frameworks such as ethics of care and relational values may provide insights to better understand how to foster meaningful expressions of care with animals in the wild, such as in people’s gardens, neighbourhoods, or nearby natural areas, as an alternative to keeping animals as exotic pets at home.” “Understanding demand, and the role of relational dimensions, are crucial when planning conservation initiatives and policies to address wildlife trade, which is a major threat to biodiversity” concludes Dr. Anna Hausmann. More information: Dr. Anna Hausmann, Department of Biological and Environmental Science, School of Resource Wisdom, University of Jyvaskyla, Jyvaskyla, Finland Email: anna.a.hausmann@jyu.fi Twitter: @Haus_Panda Associate Professor Enrico Di Minin, Department of Geosciences and Geography, University of Helsinki, Finland. Email: enrico.di.minin@helsinki.fi Twitter: @EnTembo @HELICS_Lab
Environmental Science
The Biden administration this year signed a historic climate and tax deal that will funnel billions of dollars into programs designed to speed the country's clean energy transition and battle climate change. As the U.S. this year grappled with climate-related disasters from Hurricane Ian in Florida to the Mosquito Fire in California, the Inflation Reduction Act, which contains $369 billion in climate provisions, was a monumental development to mitigate the effects of climate change across the country. The bill, which President Joe Biden signed into law in August, is the most aggressive climate investment ever taken by Congress and is expected to slash the country's planet-warming carbon emissions by about 40% this decade and move the country toward a net-zero economy by 2050. The IRA's provisions have major implications for clean energy and manufacturing businesses, climate startups and consumers in the coming years. As 2022 comes to a close, here's a look back at the key elements in the legislation that climate and clean energy advocates will be monitoring in 2023. Incentives for electric vehicles: The deal offers a federal tax credit worth up to $7,500 to households that buy new electric vehicles, as well as a used EV credit worth up to $4,000 for vehicles that are at least two years old. Starting Jan. 1, people making $150,000 a year or less, or $300,000 for joint filers, are eligible for the new car credit, while people making $75,000 or less, or $150,000 for joint filers, are eligible for the used car credit. Despite a rise in EV sales in recent years, the transportation sector is still the country's largest source of greenhouse gas emissions, with the lack of convenient charging stations being one of the barriers to expansion. The Biden administration has set a goal of 50% electric vehicle sales by 2030. The IRA limits EV tax credits to vehicles assembled in North America and is intended to wean the U.S. off battery materials from China, which accounts for 70% of the global supply of battery cells for the vehicles. An additional $1 billion in the deal will provide funding for zero-emissions school buses, heavy-duty trucks and public transit buses. Stephanie Searle, a program director at the nonprofit International Council on Clean Transportation, said the combination of the IRA tax credits and state policies will bolster EV sales. The agency projects that roughly 50% or more of passenger cars, SUVs and pickups sold in 2030 will be electric. For electric trucks and buses, the number will be 40% or higher, the group said. In the upcoming year, Searle said the agency is monitoring the Environmental Protection Agency's plans to propose new greenhouse gas emissions standards for heavy-duty vehicles starting in the 2027 model year. "With the IRA already promoting electric vehicles, EPA can and should be bold in setting ambitious standards for cars and trucks," Searle said.
"This is one of the Biden administration's last chances for strong climate action within this term and they should make good use of it."Taking aim at methane gas emissionsSome pumpjacks operate while others stand idle in the Belridge oil field near McKittrick, California. Oil prices rose in early Asian trade on the prospect that a stalled Iran nuclear deal and Moscow's new mobilization campaign would restrict global supplies.Mario Tama | Getty ImagesThe package imposes a tax on energy producers that exceed a certain level of methane gas emissions. Polluters pay a penalty of $900 per metric ton of methane emissions emitted in 2024 that surpass federal limits, increasing to $1,500 per metric ton in 2026.It's the first time the federal government has imposed a fee on the emission of any greenhouse gas. Global methane emissions are the second-biggest contributor to climate change after carbon dioxide and come primarily from oil and gas extraction, landfills and wastewater and livestock farming.Methane is a key component of natural gas and is 84 times more potent than carbon dioxide, but doesn't last as long in the atmosphere. Scientists have contended that limiting methane is needed to avoid the worst consequences of climate change. The Harris Cattle Ranch feedlot, located along Interstate 5, is the largest producer of beef in California and can produce 150 million pounds of beef a year as viewed on May 31, 2021, near Harris Ranch, California.George Rose | Getty ImagesRobert Kleinberg, a researcher at Columbia University's Center on Global Energy Policy, said the methane emitted by the oil and gas industry each year would be worth about $2 billion if it was instead used to generate electricity or heat homes."Reducing methane emissions is the fastest way to moderate climate change. Congress recognized this in passing the IRA," Kleinberg said. "The methane fee is a draconian tax on methane emitted by the oil and gas industry in 2024 and beyond."In addition to the IRA provision on methane, the Biden Interior Department this year proposed rules to curb methane leaks from drilling, which it said will generate $39.8 million a year in royalties for the U.S. and prevent billions of cubic feet of gas from being wasted through venting, flaring and leaks. Boosting clean energy manufacturingThe bill provides $60 billion for clean energy manufacturing, including $30 billion for production tax credits to accelerate domestic manufacturing of solar panels, wind turbines, batteries and critical minerals processing, and a $10 billion investment tax credit to manufacturing facilities that are building EVs and clean energy technology.There's also $27 billion going toward a green bank called the Greenhouse Gas Reduction Fund, which will provide funding to deploy clean energy across the country, but particularly in overburdened communities. And the bill has a hydrogen production tax credit, which provides hydrogen producers with a credit based on the climate attributes of their production methods.Solar panels are set up in the solar farm at the University of California, Merced, in Merced, California, August 17, 2022.Nathan Frandino | ReutersEmily Kent, the U.S. director of zero-carbon fuels at the Clean Air Task Force, a global climate nonprofit, said the bill's support for low-emissions hydrogen is particularly notable since it could address sectors like heavy transportation and heavy industry, which are hard to decarbonize."U.S. climate policy has taken a major step forward on zero-carbon fuels in the U.S. 
and globally this year," Kent said. "We look forward to seeing the impacts of these policies realized as the hydrogen tax credit, along with the hydrogen hubs program, accelerate progress toward creating a global market for zero-carbon fuels."The clean energy manufacturing provisions in the IRA will also have major implications for startups in the climate space and the big venture capital firms that back them. Carmichael Roberts, head of investment at Breakthrough Energy Ventures, has said the climate initiatives under the IRA will give private investors more confidence in the climate space and could even lead to the creation of up to 1,000 companies."Everybody wants to be part of this," Roberts told CNBC following the passage of the bill in August. Even before the measure passed, "there was already a big groundswell around climate," he said.Investing in communities burdened by pollutionThe legislation invests more than $60 billion to address the unequal effects of pollution and climate change on low-income communities and communities of color. The funding includes grants for zero-emissions technology and vehicles, and will help clean up Superfund sites, improve air quality monitoring capacity, and provide money to community-led initiatives through Environmental and Climate Justice block grants.Smoke hangs over the Oakland-San Francisco Bay Bridge in San Francisco, California, U.S., on Wednesday, Sept. 9, 2020. Powerful, dry winds are sweeping across Northern California for a third day, driving up the risk of wildfires in a region thats been battered by heat waves, freak lightning storms and dangerously poor air quality from blazes.Bloomberg | Bloomberg | Getty ImagesResearch published in the journal Environmental Science and Technology Letters found that communities of color are systematically exposed to higher levels of air pollution than white communities due to redlining, a federal housing discrimination practice. Black Americans are also 75% more likely than white Americans to live near hazardous waste facilities and are three times more likely to die from exposure to pollutants, according to the Clean Air Task Force.Biden signed an executive order after taking office aimed to prioritize environmental justice and help mitigate pollution in marginalized communities. The administration established the Justice40 Initiative to deliver 40% of the benefits from federal investments in climate change and clean energy to disadvantaged communities. More recently, the EPA in September launched an office focused on supporting and delivering grant money from the IRA to these communities.Cutting ag emissionsThe deal includes $20 billion for programs to slash emissions from the agriculture sector, which accounts for more than 10% of U.S. emissions, according to EPA estimates.The president has pledged to reduce emissions from the agriculture industry in half by 2030. The IRA funds grants for agricultural conservation practices that directly improve soil carbon, as well as projects that help protect forests prone to wildfires.Farmer Roger Hadley harvests corn from his fields in his John Deere combine in this aerial photograph taken over Woodburn, Indiana.Bing Guan | Reuters
Environmental Science
Novel method of analyzing microplastic particle pollution can facilitate environmental impact assessment In the last decade, growing numbers of researchers have studied plastic pollution, one of the world's most pressing environmental hazards. They have made progress but still face challenges, such as the comparability of results, especially with regard to microplastic particles. There is no standard sample collection and analysis methodology, for example. Most studies present conclusions based on numbers of particles as if they were environmentally equivalent regardless of size, volume, mass or surface area. An article by three Brazilian researchers published in Environmental Science and Pollution Research aims to contribute to progress in this field by proposing a novel perspective on particle morphology. Using a theoretical approach, the authors argue that including morphological attributes in the analysis can reveal significant differences between samples of microplastic particles, demonstrating that samples initially considered equivalent because they contain the same number of particles actually have different environmental impacts because of variations in particle size and shape. Microplastic particles (MPs) are artificial polymers with a length of between 0.001 and 5.0 millimeters, or 1-5,000 micrometers (μm), and are found in all kinds of environment. Few studies of pollution by MPs have been published in Brazil, especially regarding inland aquatic areas. "Most of the research that's been done on MPs reports the number of particles in terms of the unit adopted for the sample type, ranging from volume in the case of water, to mass when the analysis involves soil and sediment, and individuals for biota. We've been researching MPs in the laboratory for several years, and we've confirmed that size is important and makes a difference. We measure particle size in all samples. In this study we found samples with similar numbers of MPs but significant variations in particle size and very different levels of plastic pollution based on particle mass and volume," Décio Semensatto, first author of the article, told Agência FAPESP. He is a professor at the Federal University of São Paulo's Institute of Environmental, Chemical and Pharmaceutical Sciences (ICAQF-UNIFESP). The other authors of the article are Professor Geórgia Labuto and Cristiano Rezende Gerolin, a former researcher at UNIFESP. According to Semensatto, the group is finalizing an article on the Guarapiranga reservoir, a source of drinking water for São Paulo and two nearby towns, Itapecerica da Serra and Embu-Guaçu. "We collected samples in the wet and dry seasons and found more MPs in one season than another, with an even greater difference in terms of each sample's mass and total volume of plastic. Using only numbers of particles as a parameter focuses on just one dimension and ignores that fact that different particle sizes have different effects on ecosystems," he said. Comparisons According to the recent article, the researchers analyzed seven samples with 100 MPs each. These would be considered equivalent based on conventional pollution metrics. However, the comparisons made showed that their impact on the environment would be very different. In one sample, the MPs were larger in terms of volume, mass and specific surface area. It therefore had more plastic than the others and was likely to give rise to a larger number of even smaller particles when broken down by physical and chemical degradation. 
In another comparison, they analyzed samples with 100 MPs and 10 MPs respectively, noting that if only the number of particles were considered, the conclusion would be that the former had ten times more plastic than the latter, although both had the same total mass and volume of plastic, while particle size and specific surface area were larger in the former. The authors also highlight the question of morphology or particle shape. Samples containing fibers had less volume, mass and surface area, for example. "We also explore the question of specific surface area, which is highly relevant, especially when studying MPs as carriers of other pollutants, such as metals or pharmaceuticals," Semensatto said. "Particle size influences the surface area available for adsorption of these pollutants. In addition, MPs also form a plastisphere that serves as a substrate for organisms and disperses these organisms to other environments, with consequences for global health." The plastisphere is the community of bacteria, fungi, algae, viruses and other microorganisms that have evolved to live on man-made plastic. "By considering particle volume, mass and specific surface area, we can better understand how MPs pollute water bodies and transport other agents responsible for pollution, including microorganisms," Semensatto said. "Analyzing all attributes of samples brings new possibilities into view and extends the comparability of the results." The vast scale of the problem World production of plastic reached 348 million metric tons in 2017, up from only 2 million tons in 1950. The global plastic industry is valued at USD 522.6 billion, and its capacity is expected to double by 2040, according to a report by The Pew Charitable Trusts and SystemIQ, partnering with Oxford and Leeds Universities in the UK. Plastic production and pollution affect human health and fuel greenhouse gas emissions. Plastic can be ingested by more than 800 marine and coastal species or cause accidents involving them. Some 11 million tons of plastic waste enter the oceans every year. In 2022, 175 countries represented at the UN General Assembly adopted a historic resolution to sign up by 2024 to a legally binding commitment to end global plastic pollution. To this end, they established an intergovernmental negotiating committee, which held its first session in December. "With this study, we set out to contribute to academic efforts to develop routines and methodologies for dealing with plastic pollution," Semensatto said. "Our article proposes a discussion within the academic community. The proposal is open to debate. We're inviting other scientists to measure MPs and report their morphological attributes, as a contribution to the discussion of their environmental significance." In this context, a group at UNIFESP linked to Semensatto are working with the São Paulo State Environmental Corporation (CETESB) to develop protocols for collecting water samples and analyzing MPs in the coastal region of the state. The main aim is to find a way to compare results so that MPs can become part of continuous environmental monitoring, which they are not right now in São Paulo. This project is being conducted under the aegis of Rede Hydropoll, a network of researchers at various institutions engaged in studying water source pollution. 
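To make the comparisons described above concrete, here is a minimal sketch of the kind of bookkeeping the authors argue for: two samples with the same particle count can carry very different amounts of plastic once volume, mass and surface area are computed. The particles are idealised as spheres and the density value is a generic placeholder, not a figure from the paper.

```python
# Illustrative sketch: equal particle counts, very different plastic loads.
# Spherical particles and a nominal density are simplifying assumptions.

import math

DENSITY_G_PER_CM3 = 1.0  # assumed nominal polymer density, for illustration only

def sample_metrics(diameters_um):
    """Total volume (mm^3), mass (mg) and surface area (mm^2) for a list of particle diameters."""
    volume_mm3 = surface_mm2 = 0.0
    for d_um in diameters_um:
        r_mm = (d_um / 1000.0) / 2.0
        volume_mm3 += (4.0 / 3.0) * math.pi * r_mm ** 3
        surface_mm2 += 4.0 * math.pi * r_mm ** 2
    mass_mg = volume_mm3 * DENSITY_G_PER_CM3  # 1 mm^3 at 1 g/cm^3 weighs 1 mg
    return volume_mm3, mass_mg, surface_mm2

sample_a = [50] * 100    # 100 particles, 50 micrometres each
sample_b = [500] * 100   # 100 particles, 500 micrometres each

print(sample_metrics(sample_a))  # volume ~0.0065 mm^3 of plastic
print(sample_metrics(sample_b))  # volume ~6.5 mm^3 -- 1,000x more, same particle count
```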
More information: Décio Semensatto et al, The importance of integrating morphological attributes of microplastics: a theoretical discussion to assess environmental impacts, Environmental Science and Pollution Research (2022). DOI: 10.1007/s11356-022-24567-4 Journal information: Environmental Science and Pollution Research Provided by FAPESP
Environmental Science
The next time you crack your backdoor to let your cat outside for its daily adventure, you may want to think again. For a cat, the outdoors is filled with undesirable potential. Like the risks of catching and transmitting diseases, and the uncontrollable drive to hunt and kill wildlife, which has been shown to reduce native animal populations and degrade biodiversity. A new study by University of Maryland researchers has concluded that humans bear the primary responsibility, and that these risks can be significantly reduced by keeping cats indoors. The study’s analysis used data from the D.C. Cat Count, a Washington, D.C.--wide survey that deployed 60 motion-activated wildlife cameras spread across 1,500 sampling locations. The cameras recorded what cats preyed on and demonstrated how they overlapped with native wildlife, which helped researchers understand why cats and other wildlife are present in some areas, but absent from others. The paper was published on November 21, in the journal Frontiers in Ecology and Evolution. “We discovered that the average domestic cat in D.C. has a 61% probability of being found in the same space as racoons -- America’s most prolific rabies vector -- 61% spatial overlap with red foxes, and 56% overlap with Virginia opossums, both of which can also spread rabies,” said Daniel Herrera, lead author of the study and Ph.D. student in UMD’s Department of Environmental Science and Technology (ENST). “By letting our cats outside we are significantly jeopardizing their health.”In addition to the risk of being exposed to diseases that they can then bring indoors to the humans in their families (like rabies and toxoplasmosis), outdoor cats threaten native wildlife. The D.C. Cat Count survey demonstrated that cats that are allowed to roam outside also share the same spaces with and hunt small native wildlife, including grey squirrels, chipmunks, cottontail rabbits, groundhogs, and white footed mice. By hunting these animals, cats can reduce biodiversity and degrade ecosystem health. “Many people falsely think that cats are hunting non-native populations like rats, when in fact they prefer hunting small native species,” explained Herrera. “Cats are keeping rats out of sight  due to fear, but there really isn’t any evidence that they are controlling the non-native rodent population. The real concern is that they are decimating native populations that provide benefits to the D.C. ecosystem.”In general, Herrera found that the presence of wildlife is associated with tree cover and access to open water. On the other hand, the presence of cats decreased with those natural features but increased with human population density. He says that these associations run counter to arguments that free-roaming cats are simply stepping into a natural role in the ecosystem by hunting wildlife.“These habitat relationships suggest that the distribution of cats is largely driven by humans, rather than natural factors,” explained Travis Gallo, assistant professor in ENST and advisor to Herrera. “Since humans largely influence where cats are on the landscape, humans also dictate the degree of risk these cats encounter and the amount of harm they cause to local wildlife.”Herrera encourages pet owners to keep their cats indoors to avoid potential encounters between their pets and native wildlife. 
His research notes that feral cats are equally at risk of contracting diseases and causing native wildlife declines, and they should not be allowed to roam freely where the risk of overlap with wildlife is high – echoing previous calls for geographic restrictions on where sanctioned cat colonies can be established or cared for.
Environmental Science
The newly upgraded Linac Coherent Light Source (LCLS) X-ray free-electron laser (XFEL) at the Department of Energy’s SLAC National Accelerator Laboratory successfully produced its first X-rays, and researchers around the world are already lined up to kick off an ambitious science program. The upgrade, called LCLS-II, creates unparalleled capabilities that will usher in a new era in research with X-rays. Scientists will be able to examine the details of quantum materials with unprecedented resolution to drive new forms of computing and communications; reveal unpredictable and fleeting chemical events to teach us how to create more sustainable industries and clean energy technologies; study how biological molecules carry out life’s functions to develop new types of pharmaceuticals; and study the world on the fastest timescales to open up entirely new fields of scientific investigation. “This achievement marks the culmination of over a decade of work,” said LCLS-II Project Director Greg Hays. “It shows that all the different elements of LCLS-II are working in harmony to produce X-ray laser light in an entirely new mode of operation.” Reaching “first light” is the result of a series of key milestones that started in 2010 with the vision of upgrading the original LCLS and blossomed into a multi-year ($1.1 billion) upgrade project involving thousands of scientists, engineers, and technicians across DOE, as well as numerous institutional partners. “For more than 60 years, SLAC has built and operated powerful tools that help scientists answer fundamental questions about the world around us. This milestone ensures our leadership in the field of X-ray science and propels us forward to future innovations,” said Stephen Streiffer, SLAC’s interim laboratory director. “It’s all thanks to the amazing efforts of all parts of our laboratory in collaboration with the wider project team.” Taking X-ray science to a new level XFELs produce ultra-bright, ultra-short pulses of X-ray light that allow scientists to capture the behavior of molecules, atoms, and electrons with unprecedented detail on the natural timescales on which chemistry, biology, and material changes occur. XFELs have been instrumental in many scientific achievements, including the creation of the first "molecular movie" to study complex chemical processes, watching in real time the way in which plants and algae absorb sunlight to produce all the oxygen we breathe, and studying the extreme conditions that drive the evolution of planets and phenomena such as diamond rain. LCLS, the world’s first hard XFEL, produced its first light in April 2009, generating X-ray pulses a billion times brighter than anything that had come before. It accelerates electrons through a copper pipe at room temperature, which limits its rate to 120 X-ray pulses per second. “The light from SLAC’s LCLS-II will illuminate the smallest and fastest phenomena in the universe and lead to big discoveries in disciplines ranging from human health to quantum materials science,” said U.S. Secretary of Energy Jennifer M. Granholm. “This upgrade to the most powerful X-ray laser in existence keeps the United States at the forefront of X-ray science, providing a window into how our world works at the atomic level. 
Congratulations to the incredibly talented engineers and researchers at SLAC who have poured so much into this project over the past several years, all in the pursuit of knowledge.” The LCLS-II upgrade takes X-ray science to a whole new level: It can produce up to a million X-ray pulses per second, 8,000 times more than LCLS, and produce an almost continuous X-ray beam that on average will be 10,000 times brighter than its predecessor – a world record for today’s most powerful X-ray light sources. “The LCLS’s history of world-leading science will continue to grow with these upgraded capabilities,” said DOE Office of Science Director Asmeret Asefaw Berhe. “I really look forward to the impact of LCLS-II and the user community on national science priorities, ranging from fundamental science research in chemistry, materials, biology, and more; application of the science advances for clean energy; and ensuring national security through initiatives like quantum information science.” Partnerships for sophisticated technology This accomplishment is the culmination of an extensive collaborative effort, with vital contributions from researchers across the world. Multiple institutions, including five U.S. national laboratories and a university, have contributed to the realization of the project, a testimony to its national and international importance. Central to LCLS-II’s enhanced capabilities is its revolutionary superconducting accelerator. It comprises 37 cryogenic modules that are cooled to minus 456 degrees Fahrenheit – colder than outer space – a temperature at which it can boost electrons to high energies with nearly zero energy loss. Fermilab and the Thomas Jefferson National Accelerator Facility played pivotal roles in designing and building these cryomodules. “At the heart of the LCLS-II Project is its pioneering superconducting accelerator,” said Fermilab Director Lia Merminga. “The collective engineering, technical and scientific expertise and talent of the collaboration deserve immense credit for its successful construction and for delivering world-class performance in a remarkably short period of time.” The superconducting accelerator works in parallel with the existing copper one, allowing researchers to make observations over a wider energy range, capture detailed snapshots of rapid processes, probe delicate samples that are beyond the reach of other light sources and gather more data in less time, greatly increasing the number of experiments that can be performed at the facility. “It is wonderful to see this tremendous achievement, which is powered by the state-of-the-art LCLS-II superconducting accelerator,” said Jefferson Lab Director Stuart Henderson. “Jefferson Lab is proud to have contributed to this achievement through our construction of half of the cryomodules, in collaboration with Fermilab and SLAC. This achievement builds upon more than a decade of development of this powerful particle accelerator technology.” In addition to a new accelerator, LCLS-II required many other cutting-edge components, including a new electron source, two powerful cryoplants that produce refrigerant for the niobium structures in the cryomodules, and two new undulators to generate X-rays from the electron beam, as well as major leaps in laser technology, ultrafast data processing, and advanced sensors and detectors. The undulators were developed in partnership with Lawrence Berkeley National Laboratory and Argonne National Laboratory. 
Numerous other institutions, including Cornell University have contributed to other key components, underscoring the widespread commitment to advancing scientific knowledge. “Congratulations to SLAC and to the impressive team of accelerator experts from the Department of Energy labs across the country that built LCLS-II,” said Lawrence Berkeley National Laboratory Director Mike Witherell. “This unique new facility will provide many new opportunities for discovery science.” The “soft” and “hard” X-ray undulators produce X-rays with low and high energy, respectively – a versatility that allows researchers to tailor their experiments more precisely, probing deeper into the structures and behaviors of materials and biological systems. “We’re excited to see our collaborations with SLAC and Berkeley Lab help to empower this light source of the future,” said Argonne National Laboratory Director Paul Kearns. “The advanced technology behind LCLS-II will enable the DOE user facility community to significantly increase our understanding of the world around us. Congratulations to SLAC and to everyone who contributed to this remarkable scientific achievement.” Enabling breakthrough science Researchers have been preparing for years to use LCLS-II for a broad science program that will tackle challenges that were out of reach before. For example, scientists will be able to study interactions in quantum materials on their natural timescales, which is key to understanding their unusual and often counter-intuitive properties – to make use of them to build energy efficient devices, quantum computers, ultrafast data processing, and other future technologies. By capturing atomic-scale snapshots of chemical reactions at the attosecond timescale – the scale at which electrons move – LCLS-II will also provide unprecedented insights into chemical and biological reactions, leading to more efficient and effective processes in industries ranging from renewable energy to the production of fertilizer and the mitigation of greenhouse gases. The X-ray pulses generated by LCLS-II will allow scientists to track the flow of energy through complex systems in real time. This will provide an unprecedented level of detail to inform the development of fields such as ultrafast computing, sustainable manufacturing, and communications. At the intersection of physics, chemistry, and engineering, materials science also stands to benefit substantially from the new capabilities of LCLS-II. The enhanced X-ray laser’s potential to observe the internal structure and properties of materials at atomic and molecular scales is predicted to lead to breakthroughs in the design of new materials with unique properties, to impact a range of industries from electronics to energy storage to aerospace engineering. Life’s processes occur at scales and speeds that have often eluded detailed study. LCLS-II’s ability to create ‘molecular movies’ can illuminate these phenomena, revolutionizing our understanding of life at the its most basic level. From the intricate dance of proteins to the machinery of photosynthesis, LCLS-II will shed light on biological systems in never-before-seen detail. “Experiments in each of these areas are set to begin in the coming weeks and months, attracting thousands of researchers from across the nation and around the world,” said LCLS Director Mike Dunne. “DOE user facilities such as LCLS are provided at no cost to the users – we select on the basis of the most important and impactful science. 
LCLS-II is going to drive a revolution across many academic and industrial sectors. I look forward to the onslaught of new ideas – this is the essence of why national labs exist.” Press Office Contact: Manuel Gnida, mgnida@slac.stanford.edu, 415-308-7832 SLAC is a vibrant multiprogram laboratory that explores how the universe works at the biggest, smallest and fastest scales and invents powerful tools used by scientists around the globe. With research spanning particle physics, astrophysics and cosmology, materials, chemistry, bio- and energy sciences and scientific computing, we help solve real-world problems and advance the interests of the nation. SLAC is operated by Stanford University for the U.S. Department of Energy’s Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time.
Environmental Science
By Christian Schroeder - Lecturer in Environmental Science & Planetary Exploration, University of StirlingNASA recently reported that a cloud of dust was surrounding Mars high above its atmosphere. The authors of the study ruled out Mars itself and its moons Phobos and Deimos as the sources of the dust and concluded that it must come from a larger dust cloud floating around between the planets in our solar system.This “interplanetary dust” is hugely important. It is thought to have played a crucial role in the formation and evolution of our solar system. What’s more, it may even have provided our planet with water – and kick-started life.Ashes to ashes, dust to dustWe all know how quickly empty spaces fill with dust and, figuratively speaking, the cosmos is no different. Cosmic dust is made up of tiny mineral grains in the nano and micrometer size range (one billionth and one millionth of a metre, respectively). Cosmic dust particles find themselves between the end of one star’s lifetime and at the beginning of the formation of a new solar system.A star forms from the collapse of a gas cloud made up of hydrogen and helium, elements that were created in the aftermath of the Big Bang. Stars use this hydrogen as fuel, creating heavier elements such as carbon and oxygen and up to the element iron through nuclear fusion processes. These new elements are released at the end of a star’s lifetime, when it collapses under its own gravity and explodes as a supernova. The high energies of such an explosion create additional elements heavier than iron. Some of the heavier elements, metals such as silicon and iron, combine with oxygen to form minerals – which is exactly what dust is. Our solar system formed from the collapse of a hydrogen and helium gas cloud mixed with dust, otherwise there would not be any rocky planets like Earth and Mars. The fact that Earth contains such heavy elements as gold, lead, or uranium (all heavier than iron) shows that our sun is a third or higher-generation star, preceded by at least one supernova explosion of another nearby star.Interstellar dust particles, which predate our own solar system, can provide insight into the processes at the end of the lifetime of ancient stars. The interplanetary dust in the inner solar system contains some interstellar dust particles. But the vast majority of interplanetary dust particles in our solar system are released from comets as they approach the sun or from the collision of asteroids in the asteroid belt. They therefore contain clues about the makeup and formation of such “proto-planets”, which are seen as the first steps of planet formation from the huge dust and gas cloud surrounding a new star.The dust cloud in our solar system gradually moves towards the sun whose gravitational pull acts like a giant vacuum cleaner. On their way, some of the dust particles collide with Mars and Earth. The dust is responsible for the Zodiac light that can be seen after sunset in spring or before sunrise in autumn. Dust as an origin of life?Any cosmic dust mineral grain offers a surface for gases, ice or organic matter to stick to. Complex molecules of organic matter as the basic building blocks for life have been documented in intergalactic dust clouds, comets and meteorites.Understanding the distribution and amount of dust is important because dust could have delivered significant amounts of water and organic matter to the planets in the inner solar system, in particular Earth and Mars. 
While many researchers think that asteroid and comet impacts may be behind the water and life on Earth, several studies have indicated that dust itself can deliver water and organic matter simultaneously and might have jump-started life. This process would work universally, also on exoplanets in distant solar systems. So if the dust did jump-start life on Earth, it is plausible that it could have done so on Mars as well. However, Earth’s magnetic field has protected our atmosphere and water against being destroyed by the solar wind – we get just the right amount of it. Mars has not had a magnetic field for most of its lifetime, and its atmosphere and water have subsequently been lost to space. Without water, organic matter molecules cannot be assembled into the very complex molecules, like DNA and proteins, that make up life. Lack of a thick atmospheric layer also means lack of protection against destruction of organic molecules by UV light and other harmful forms of cosmic radiation. While the jury is still out on whether there was ever life on Mars, it is extremely unlikely that dust could jump-start life on Mars today, despite hovering above its atmosphere. It is obviously important that we learn more about dust. Interplanetary dust particles are actively collected for research by sending planes into the stratosphere or scouring spacecraft returning to land for impacts of these tiny dust particles. If dust particles make it to the ground by themselves, they can be collected as micrometeorites from places where they are recognisable, such as ocean or polar sediments. However, once an interplanetary dust particle enters the Earth’s atmosphere or smashes into a spacecraft, any complex molecules stuck to it are inevitably lost. While we can learn a lot from them about the primordial matter from which our solar system formed as well as the makeup of comets and asteroids, we have to investigate these bodies first-hand to be able to obtain more sensitive information. A good way to do this is to fly through comet tails. This is what Rosetta did to make the surprise discovery of free oxygen in the coma of comet 67P/Churyumov-Gerasimenko. Meanwhile, NASA’s Stardust mission flew through the tail of comet Wild 2 and returned cosmic dust particles to Earth for analysis in 2006. In 2009, NASA announced that fundamental chemical building blocks of life had been found: glycine, an amino acid. Additional data like this can hopefully help uncover many more secrets of the dust in the universe – including whether it kick-started life on Earth and whether it could do it again. Source: The Conversation
Environmental Science
Increases in ocean acidity are damaging the first link in the food chain, a microorganism called coccolithophores. This is causing major impacts to marine species that rely on coccolithophores as their primary source of food.  Recent research from the Universitat Autonoma de Barcelona’s Institute of Environmental Science and Technology illustrates how ocean acidification is affecting coccolithophores, and thus, how ocean acidification is affecting the food chain as a whole. Background Information: What is Ocean Acidification? Full Study: Nutritional response of a coccolithophore to changing pH and temperature (Association for the Sciences of Limnology and Oceanography (ASLO) Aug 2022). Coccolithophores, part of the phytoplankton family, are single-celled plant-like marine microorganisms that are commonly found close to the surface of the ocean. Despite their microscopic size, coccolithophores play an important role in the marine food chain by serving as its foundation. Many small marine animals rely on coccolithophores as their primary source of food and energy. Even small fish and other organisms that do eat other things, like worms or small crustaceans, feed on coccolithophores when other sources of food are scarce. Marine animals obtain nutrition and energy from coccolithophores’ fat content, known as lipids.  Coccolithophores build protective scale-like platings around themselves known as coccoliths (the oval platings shown in the image to the right). These coccoliths are made of limestone (calcite). However, ocean acidification makes it much more difficult for organisms to build calcite shells. A 2022 study conducted by researchers from the Universitat Autonoma de Barcelona’s Institute of Environmental Science and Technology, in collaboration with the Roscoff Marine Station of France, discovered that ocean acidification has significantly reduced coccolithophores’ ability to build shells, and decreased the nutrient content in their bodies. This has massive cascading effects throughout the food chain. Here’s what you need to know about it.  The researchers created a simulation of future climate conditions, causing ocean warming and triggering ocean acidification.  At first, the coccolithophores showed resilience to the increase of ocean temperatures and acidity. The researchers even observed an increase in their population. As the experimental ocean acidity spiked, the researchers found that coccolithophore population growth halted as the organisms began to struggle to build their shells. The acidic conditions caused the coccolithophores’ protective platings (coccoliths) to collapse. While this may seem beneficial to organisms that eat coccolithophores (because this makes them easier to eat and digest), this breakdown of their shells comes with other negative consequences. Researchers also discovered that when coccolithophores were exposed to acidified ocean conditions, the nutritional content in their bodies significantly decreased. As acidification worsens, this reduced nutrient content may have a detrimental impact on the food chain.  Marine species that rely on coccolithophores for food, such as smaller fishes and zooplanktons, would be forced to feed on nutritionally deficient food. Researchers concluded that as ocean acidification affects coccolithophores’ energy and nutrients, coccolithophores may also seek lower-acidity areas to try and slow ocean acidifications’ effects on their survival. 
This movement will pose an extra threat to marine species that rely on coccolithophores for food, as some may be unable to “follow” coccolithophores to lower-acidity conditions. The images below show the comparison of healthy coccolithophores (a) and collapsed coccolithophores due to ocean acidification (b). Read more: Researchers Discover How Ocean Animals Adapt to Ocean Acidification – But Adaptation Comes at a Price Sources: “Ocean warming and acidification impact marine food web” Universitat Autonoma de Barcelona, Nov 2022 https://www.uab.cat/web/newsroom/news-detail/ocean-warming-and-acidification-impact-marine-food-web-1345830290613.html?detid=1345875011926 “Ocean warming and acidification impact the marine food web, study finds” Phys Org, Nov 2022 https://phys.org/news/2022-11-ocean-acidification-impact-marine-food.html “Nutritional response of a coccolithophore to changing pH and temperature” Association for the Sciences of Limnology and Oceanography (ASLO) Aug 2022 https://aslopubs.onlinelibrary.wiley.com/doi/10.1002/lno.12204
Environmental Science
A 19th-century 'dinner plate' tool is still useful in ocean science A simple 19th-century tool is still useful to ocean scientists in the age of satellites, new research shows. The research is published in the journal Frontiers in Marine Science. A Secchi disk—historically called a "dinner plate" by sailors—is used in the open ocean to measure concentrations of microscopic algae called phytoplankton. Sailors lower the white disk into the water and record the depth at which it disappears. In the new study, a research team including the University of Exeter, Plymouth Marine Laboratory, Vrije Universiteit (Netherlands) and the Italian Institute of Marine Sciences (ISMAR) compared the performance of Secchi disks with satellites and high-performance chromatography. Secchi disks performed almost as well as modern methods at monitoring phytoplankton abundance—meaning Secchi measurements going back more than a century can help scientists understand long-term changes in the ocean. "Phytoplankton produce half the world's oxygen and form the base of ocean food webs, so monitoring them helps us track everything from climate change to the health of ecosystems," said Dr. Bob Brewin, from the Centre for Geography and Environmental Science on Exeter's Penryn Campus in Cornwall. "New technology undoubtedly gives us new opportunities, but our study shows Secchi disks do a good job of estimating chlorophyll (a way of measuring phytoplankton abundance)—which means we should be able to integrate data from the past with modern measurements. This gives us a priceless source of long-term data on how our oceans are changing." Secchi disks are still used all around the world to monitor ocean biomass and water quality, and co-author Dr. Jaime Pitarch, from ISMAR, said the findings support their continued use. "It's a simple, cheap tool, but our research shows it's also remarkably effective," he said. In fact, researchers including Dr. Brewin at Exeter, are working on a project that will use 3D-printed Secchi disks to monitor water quality in lakes in India and Africa, and coastal regions of the US. Prior to the 1850s, mariners used a variety of objects (in the same way as Secchi disks) to help with navigation, including cloths, pans and plates. It was the Vatican astronomer Angelo Secchi, invited by the Papal Navy Commander Alessandro Cialdi to join a scientific cruise to study the murkiness of the sea in 1865, who standardized the method. The measurements in the new study were collected on Atlantic Meridional Transect cruises. More information: Robert J. W. Brewin et al, Evaluating historic and modern optical techniques for monitoring phytoplankton biomass in the Atlantic Ocean, Frontiers in Marine Science (2023). DOI: 10.3389/fmars.2023.1111416 Journal information: Frontiers in Marine Science Provided by University of Exeter
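As a toy illustration of how a century-old Secchi record might be turned into a chlorophyll estimate, the sketch below uses a simple inverse power law. The functional form and coefficients are placeholders chosen only to show the shape of the relationship; they are not the calibration used in the study, which fitted its own relationships against satellite and chromatography data.

```python
# Minimal sketch: converting Secchi depth to a chlorophyll-a estimate via a
# hypothetical power law chl = a * Zsd**b. Coefficients are placeholders, not
# values from the Frontiers in Marine Science paper.

def chlorophyll_from_secchi(secchi_depth_m: float, a: float = 10.0, b: float = -1.4) -> float:
    """Estimate surface chlorophyll-a (mg m^-3) from Secchi disk depth (m)."""
    if secchi_depth_m <= 0:
        raise ValueError("Secchi depth must be positive")
    return a * secchi_depth_m ** b

# Clearer water (a deeper Secchi reading) implies less phytoplankton:
for depth in (5, 15, 40):
    print(depth, "m ->", round(chlorophyll_from_secchi(depth), 2), "mg chl m^-3")
```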
Environmental Science
- President Joe Biden will sign an executive order directing federal agencies to invest in disadvantaged communities disproportionately affected by pollution and climate change, the White House said. - The president, who is preparing to announce his reelection bid next week, will make the announcement during a ceremony at the White House Rose Garden. - Biden is expected to condemn Republicans over "safeguarding handouts for Big Oil companies" and "fighting to make it easier for oil and gas companies to pollute the air we breathe," a White House official said. President Joe Biden on Friday will sign an executive order directing federal agencies to invest in disadvantaged communities disproportionately affected by pollution and climate change, the White House said. The order will create a new Office of Environmental Justice in the White House to coordinate all environmental justice efforts across the federal government and require agencies to notify nearby communities if toxic substances are released from a federal facility. The president, who is preparing to announce his reelection bid next week, will make the announcement during a ceremony at the White House Rose Garden. He is also expected to condemn Republicans over "safeguarding handouts for Big Oil companies" and "fighting to make it easier for oil and gas companies to pollute the air we breathe," a White House official said. Biden is expected to argue that his administration's historic environmental justice and climate agenda contrasts with "the dangerous vision Speaker McCarthy and his extreme caucus have for our planet, our economy, and public health," the official said. Republican lawmakers have urged weaker regulation of oil production in order to lower energy prices. And earlier this week, House Speaker Kevin McCarthy proposed lifting the debt limit for one year and scaling back federal spending, including repealing electric vehicle and other clean energy tax incentives under the Inflation Reduction Act. Early in his presidency, Biden pledged that addressing environmental justice would be a core component of his climate agenda and signed an executive order that launched the Justice40 Initiative, which requires agencies to deliver at least 40% of benefits from investments to overburdened communities. Communities of color across the U.S. are systematically exposed to higher levels of air pollution than white communities because of federal housing discrimination, according to research published in the journal Environmental Science and Technology Letters. Black Americans are 75% more likely than white Americans to live near facilities that produce hazardous waste, according to the Clean Air Task Force, and are three times more likely to die from exposure to air pollutants. "For far too long, communities across our country have faced persistent environmental injustice through toxic pollution, underinvestment in infrastructure and critical services, and other disproportionate environmental harms often due to a legacy of racial discrimination including redlining," the White House said in a statement. "These communities with environmental justice concerns face even greater burdens due to climate change." Beverly Wright, founder and executive director of the Deep South Center for Environmental Justice, said in a statement that though Biden's executive order is historic, "much work must be done to achieve true environmental justice." 
In September, Biden's administration set up the Office of Environmental Justice and External Civil Rights, made up of more than 200 Environmental Protection Agency staff in 10 U.S. regions. The office is overseeing the delivery of a $3 billion climate and environmental justice block grant program created by the Inflation Reduction Act. The bill includes $60 billion for environmental justice initiatives.
Environmental Science
A huge ash plume rises into the sky as the Lava fire explodes in Weed, California on July 1, 2021. Photo: JOSH EDELSON / AFP (Getty Images) Regional wildfire smoke is significantly lowering air quality for millions of people across the country. A new study published in Environmental Science & Technology found that wildfires are creating more air pollution every year. They especially create particle pollution known as PM2.5, which can cause short-term health concerns like nose, eye, and lung irritation. Long-term exposure can create or worsen existing respiratory and cardiovascular issues. According to the study, about 25 million people were exposed to wildfire particles during an especially bad day in 2020. “So this is just people exposed on what we call an extreme day of wildfire smoke,” Marshall Burke, one of the study’s co-authors and a Stanford University scientist, told Earther. The number of people in the U.S. exposed to dangerous levels of wildfire-related particles, which the study categorized as PM2.5 concentrations reaching at least 100 micrograms per cubic meter, has increased more than 27-fold in the last decade. The number of people exposed to extreme levels of pollution from wildfire smoke grew 11,000-fold during that time. These days were categorized by PM2.5 concentrations that reached 200 micrograms per cubic meter of air. Burke and other researchers looked at satellite images of wildfire smoke from 2006 to 2020. They compared the images to data from air quality monitors to see if the spikes in pollution coincided with wildfire smoke. But air quality monitors are not distributed equally across the country, so the scientists filled in data gaps by training a machine learning model to use satellite data in order to accurately predict PM2.5 concentrations. They also found that high-income communities and Hispanic communities have been disproportionately impacted by particles from wildfire smoke. Burke explained that this reflects demographics in heavily impacted states, like California. There are a lot of Hispanic communities in that state, along with extremely wealthy zip codes along the West Coast that are being impacted by wildfires. Researchers worry that the increased air pollution from wildfires is reversing the strides the U.S. has made in improving air quality since the passing of pollution regulations in the 1970s. “We’ve been really successful in reducing pollution from point sources—from factories, from tailpipes, energy producing units. Wildfire is a whole different animal,” Burke said. “It’s not regulated by the Clean Air Act, and it’s a growing source of pollution.” The worst pockets of polluted air are unsurprisingly in western states. The climate crisis has fueled conditions that have made wildfires increasingly destructive. There’s an ongoing megadrought drying out major water sources like the Colorado River. This year has also seen several dangerous heat waves, creating even drier and hotter conditions. But the increased intensity and frequency of wildfires has far-reaching consequences. “As you move east, out of the Rockies into the Midwest, we still see the impact of wildfire smoke on air quality there,” Burke said.
“These impacts are smaller, but they still exist… this is not just a West Coast problem anymore.”Wildfire smoke can travel even farther than the Midwest. For example, last July, New York’s air quality was one of the worst in the world after wildfire smoke from the West Coast traveled thousands of miles. Some of the smoke reached as far east as Greenland that week.As wildfire seasons become worse over time, Burke and other researchers are working to answer questions that came up as they compiled the recently released study. He hopes to find out just exactly how much pollution wildfires are creating and how much that is rolling back air quality improvements in the U.S.
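The exposure figures in this article come down to threshold bookkeeping: classify each day in each place by its smoke-attributed PM2.5 level, then count the people living where at least one day crossed the line. The sketch below illustrates that logic using the thresholds quoted in the article (100 and 200 micrograms per cubic meter); the data structure and numbers are hypothetical, not the study's data.

```python
# Sketch of the exposure counting described above. Thresholds follow the
# article; the example data and area names are invented for illustration.

UNHEALTHY = 100.0   # µg/m^3, "dangerous" smoke day in the article's framing
EXTREME = 200.0     # µg/m^3, "extreme" smoke day

def people_exposed(daily_pm25_by_area: dict, population_by_area: dict, threshold: float) -> int:
    """Population living in areas with at least one smoke day at or above `threshold`."""
    total = 0
    for area, series in daily_pm25_by_area.items():
        if any(value >= threshold for value in series):
            total += population_by_area[area]
    return total

# Toy example: two areas, one week of smoke-attributed PM2.5 values
pm25 = {"area_west": [30, 120, 250, 80, 40, 35, 60], "area_east": [10, 15, 20, 25, 18, 12, 9]}
pop = {"area_west": 1_200_000, "area_east": 800_000}

print(people_exposed(pm25, pop, UNHEALTHY))  # 1200000
print(people_exposed(pm25, pop, EXTREME))    # 1200000
```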
Environmental Science
In the scientific literature, the total production of reactive oxygen species (ROS) such as H2O2 is commonly used as proxy for the toxicity of air pollutants and their ability to induce oxidative stress and inflammation. The research team led by Thomas Berkemeier from the MPIC in Mainz found that ROS concentrations in the epithelial lining fluid (ELF) of the human respiratory tract may be primarily determined by the release of endogenous H2O2 and the inhalation of ambient gas-phase H2O2, while the chemical production of H2O2 through inhaled PM2.5 is less important. "Based on our simulations, we think that the overall concentrations of these reactive species in the lungs are large anyway, and not directly dependent on levels of air pollution," says Dr. Thomas Berkemeier, head of the Chemical Kinetics & Reaction Mechanisms group at the MPIC. They use a computer model to understand the relevant physical, chemical, and biological processes, and quantify the health effects of different types of air pollutants. "Our new model simulates the chemical reactions that happen in the respiratory tract. For the first time, we included production, diffusion, and removal of hydrogen peroxide from cells and the blood stream into our computer model. This was quite challenging, because it is not so easy to put these processes in biological tissues into equations," explains Thomas Berkemeier. New research directions "The findings of this study suggest that the current paradigms for assessing the differential toxicity of individual PM2.5 components need to be critically reassessed," says Prof. Dr. Ulrich Pöschl, Head of the Multiphase Chemistry Department at the MPIC. The study proposes that the chemical production of superoxide and H2O2 in a cell-free assay may not be a suitable metric for assessing the differential toxicity of individual PM2.5 components, and some acellular oxidative potential assays may not capture the actual deleterious effects of PM2.5. Fine particulates might act through Fenton chemistry However, the production of hydroxyl radicals (OH) was strongly correlated with Fenton chemistry of PM2.5 in the model calculations. "The model simulations suggest that PM2.5 mostly acts by conversion of peroxides into highly reactive OH radicals. Thus, PM2.5 is not so much the fuel, but rather the catalyst of the chemical reactions that cause damage to cells and tissues," says Berkemeier explaining the role of inhaled particles in the model. Additionally, PM2.5 may stimulate the production of superoxide from endogenous sources, which further contributes to the adverse health effects of air pollution. The study underscores the importance of continued research to better understand the chemical mechanisms underlying the health effects of air pollution and to develop effective strategies to mitigate these effects. The authors believe that this study will contribute significantly to this important research effort. Their findings are published in the scientific journal "Environmental Science: Atmospheres." Background information Air pollution is a major health risk that affects millions of people worldwide, but the underlying chemical mechanisms are not yet fully understood. Fine particulate matter (PM2.5) typically contains chemical components that can trigger oxidation reactions. 
When inhaled and deposited in the human respiratory tract, they can induce and sustain radical reaction cycles that produce reactive oxygen species (ROS) in the epithelial lining fluid (ELF) that covers the airways and alveoli in human lungs. Numerous studies have shown that excess concentrations of ROS like hydrogen peroxide (H2O2) and hydroxyl radicals (OH) can cause oxidative stress, injuring cells and tissues in the respiratory tract.
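The central idea of the model described above can be pictured as a simple mass balance: H2O2 in the lining fluid is supplied by endogenous release, inhaled gas-phase H2O2 and chemical production from PM2.5, and removed by first-order loss. The sketch below is only a box-model caricature of that balance; it ignores diffusion and uses arbitrary placeholder rates rather than the parameters of the MPIC model.

```python
# Caricature of the ELF H2O2 balance discussed above: sources minus first-order
# removal, integrated with a simple Euler scheme. All values are placeholders.

def h2o2_timecourse(p_endogenous, p_inhaled, p_pm25, k_removal, hours=2.0, dt=0.001):
    """Integrate dC/dt = (sources) - k_removal * C and return the final concentration."""
    c = 0.0
    for _ in range(int(hours / dt)):
        dcdt = p_endogenous + p_inhaled + p_pm25 - k_removal * c
        c += dcdt * dt
    return c

# With removal fast relative to the PM2.5 term, the steady state
# (sources / k_removal) is dominated by endogenous and inhaled H2O2:
print(h2o2_timecourse(p_endogenous=1.0, p_inhaled=0.5, p_pm25=0.01, k_removal=10.0))
# -> approaches (1.0 + 0.5 + 0.01) / 10.0 = 0.151 (arbitrary units)
```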
Environmental Science
The alga Melosira arctica, which grows under Arctic sea ice, contains ten times as many microplastic particles as the surrounding seawater. This concentration at the base of the food web poses a threat to creatures that feed on the algae at the sea surface. Clumps of dead algae also transport the plastic with its pollutants particularly quickly into the deep sea -- and can thus explain the high microplastic concentrations in the sediment there. Researchers led by the Alfred Wegener Institute have now reported this in the journal Environmental Science and Technology. It is a food lift for bottom-dwelling animals in the deep sea: the alga Melosira arctica grows at a rapid pace under the sea ice during spring and summer months and forms metre-long cell chains there. When the cells die and the ice to whose underside they adhere melts, they stick together to form clumps that can sink several thousand metres to the bottom of the deep sea within a single day. There they form an important food source for bottom-dwelling animals and bacteria. In addition to food, however, these aggregates also transport a dubious cargo into the Arctic deep sea: microplastics. A research team led by biologist Dr Melanie Bergmann from the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) has now published this in the journal Environmental Science and Technology. "We have finally found a plausible explanation for why we always measure the largest amounts of microplastics in the area of the ice edge, even in deep-sea sediment," Melanie Bergmann reports. Until now, the researchers only knew from earlier measurements that microplastics concentrate in the ice during sea ice formation and are released into the surrounding water when it melts. "The speed at which the Alga descends means that it falls almost in a straight line below the edge of the ice. Marine snow, on the other hand, is slower and gets pushed sideways by currents so sinks further away. With the Melosira taking microplastics directly to the bottom, it helps explain why we measure higher microplastic numbers under the ice edge," explains the AWI biologist. On an expedition with the research vessel Polarstern in summer 2021, she and a research team collected samples of Melosira algae and the surrounding water from ice floes. The partners from Ocean Frontier Institute (OFI), Dalhousie University and the University of Canterbury then analysed these in the laboratory for microplastic content. The surprising result: the clumps of algae contained an average of 31,000 ± 19,000 microplastic particles per cubic metre, about ten times the concentration of the surrounding water. "The filamentous algae have a slimy, sticky texture, so it potentially collects microplastic from the atmospheric deposition on the sea, the sea water itself, from the surrounding ice and any other source that it passes. Once entrapped in the algal slime they travel as if in an elevator to the seafloor, or are eaten by marine animals," explains Deonie Allen of the University of Canterbury and Birmingham University who is part of the research team. Since the ice algae are an important food source for many deep-sea dwellers, the microplastic could thus enter the food web there. But it is also an important food source at the sea surface and could explain why microplastics were particularly widespread among ice-associated zooplankton organisms, as an earlier study with AWI participation shows. 
In this way, microplastic can also enter the food chain at the surface: the zooplankton are eaten by fish such as polar cod, the fish by seabirds and seals, and these in turn by polar bears. The detailed analysis of plastic composition showed that a variety of different plastics are found in the Arctic, including polyethylene, polyester, polypropylene, nylon, acrylic and many more. Together with the various chemicals and dyes they carry, this creates a mix of substances whose impact on the environment and living creatures is difficult to assess. "People in the Arctic are particularly dependent on the marine food web for their protein supply, for example through hunting or fishing. This means that they are also exposed to the microplastics and chemicals contained in it. Microplastics have already been detected in human intestines, blood, veins, lungs, placenta and breast milk and can cause inflammatory reactions, but the overall consequences have hardly been researched so far," reports Melanie Bergmann. "Micro and nano plastics have basically been detected in every place scientists have looked in the human body and within a plethora of other species. They are known to change behaviours, growth, fecundity and mortality rates in organisms, and many plastic chemicals are known toxins to humans," says Steve Allen of OFI, Dalhousie University, a member of the research team. Moreover, the Arctic ecosystem is already threatened by the profound environmental upheavals caused by the climate crisis. If the organisms are now additionally exposed to microplastics and the chemicals they contain, this can weaken them further. "So, we have a combination of planetary crises that we urgently need to address effectively. Scientific calculations have shown that the most effective way to reduce plastic pollution is to reduce the production of new plastic," says the AWI biologist, adding: "This should therefore definitely be prioritised in the global plastics agreement that is currently being negotiated." That is why Melanie Bergmann is also accompanying the next round of negotiations, which will begin in Paris at the end of May.
Environmental Science
A critical component of the food web in the Arctic is at risk. According to a new study published in the journal Environmental Science and Technology, Arctic ice algae have become contaminated with microplastics, threatening the creatures that feed on the algae. The new study shows that the algae found beneath the Arctic ice have much higher concentrations of microplastics than the surrounding seawater. Researchers say when the algae die, clumps of them carry the microplastics out to the deep sea, which could explain the high concentrations of microplastics found in sediment out there. Prior to this research, scientists only knew that microplastics somehow concentrate in the ice and then are released into the water when it melts. Now, though, this discovery shows that the Arctic ice plays an important part in how microplastics move around the deep sea in those regions of our world. The big problem here is the threat that these microplastics pose not only to the sediment of the deep sea, but also to the sealife that relies on that algae for sustenance. In the winter months, when the ice forms across the Arctic seas, the algae flourish beneath it, sticking to it and soaking up concentrations of microplastics. However, when the summer months roll around and the Arctic ice melts, the algae clump together, die and sink to the bottom, taking the microplastics with them. Because the microplastic-laden clumps are relatively heavy, they sink almost straight down. This, the researchers say, explains why concentrations of microplastics in the sediment are so high at the edge of the Arctic ice. Alongside posing a threat to the animals that feed on the algae, these high concentrations of microplastics mean that any sealife caught for food by the peoples of the Arctic could pose a threat to human health, too. When humans eat sealife that has eaten microplastic-contaminated algae, they risk exposure to the chemicals in the plastics. The ongoing fight against plastic pollution is not a small one, and entire patches of plastic and other garbage can be found in the ocean. And the algae under the Arctic ice aren't the only thing becoming contaminated with microplastics. Researchers also recently found plastic-contaminated rocks on a remote island in Brazil.
Environmental Science
- Globally, June was the hottest June in the 174-year records kept by the National Oceanic and Atmospheric Administration, the federal agency said on Thursday. - Canada is in a record-breaking wildfire season. Sea ice is at an all-time low. Vermont is recovering from catastrophic flooding. If it feels like a new record-level, end-of-days extreme weather event arrives with astounding frequency lately, you're not alone. - Global warming causes a hotter, wetter atmosphere, which has been adding intensity to extreme weather events for decades. And in June, the El Niño weather pattern arrived, adding fuel to the already roiling fire of extreme weather. If you feel like record-level extreme weather events are happening with alarming frequency, you're not alone. Scientists say it's not your imagination. "The number of simultaneous weather extremes we're seeing right now in the Northern Hemisphere seems to exceed anything at least in my memory," Michael Mann, professor of earth and environmental science at the University of Pennsylvania, told CNBC. Globally, June was the hottest June in the 174-year records kept by the National Oceanic and Atmospheric Administration, the federal agency said on Thursday. It was the 47th consecutive June and the 532nd consecutive month in which average temperatures were above the average for the 20th century. The amount of sea ice measured in June was the lowest global June sea ice on record, due primarily to record-low sea ice levels in the Antarctic, also according to NOAA. There were nine tropical cyclones in June, defined as storms with wind speeds over 74 miles per hour, and the global accumulated cyclone energy, a measure of the collective duration and strength of tropical storms, was almost twice its average value for 1991–2020 in June, NOAA said. As of Friday morning, 93 million people in the United States were under excessive heat warnings and heat advisories, according to a bulletin published by the National Weather Service Weather Prediction Center. "A searing heat wave is set to engulf much of the West Coast, the Great Basin, and the Southwest," the National Weather Service said. Daytime temperatures in the desert regions of southern California, southern Nevada and southern Arizona could top 120 degrees Fahrenheit in coming days, according to the National Weather Service. Montpelier, Vermont, recorded its highest one-day rainfall total on record on Monday, according to the National Weather Service. "Make no mistake, the devastation and flooding we're experiencing across Vermont is historic and catastrophic," Vermont governor Phil Scott said on Tuesday. On June 27, Canada surpassed the record set in 1989 for total area burned in one season when it reached 7.6 million hectares, or 18.8 million acres. The total has since increased to 9.3 million hectares, or 23 million acres, driven by record-breaking high temperatures that turn the vegetation into kindling for wildfires to race through. Those record Canada wildfires have blanketed parts of the United States in smoke, causing some of the worst air quality in the world at various points. In all of 2022, there were 18 separate billion-dollar weather and climate disaster events, according to data from NOAA, including tornado outbreaks, high wind, hailstorms, tropical cyclones, flooding, drought, heatwaves and wildfires. So far, there have been 12 billion-dollar weather and climate disasters in 2023, according to NOAA.
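For readers unfamiliar with accumulated cyclone energy (ACE), it is conventionally computed from six-hourly maximum sustained wind speeds while a storm is at least at tropical-storm strength. A minimal sketch of that calculation in Python; the storm track values below are invented placeholders, not NOAA data:

def accumulated_cyclone_energy(six_hourly_winds_kt):
    """ACE in units of 10^4 kt^2: sum of squared six-hourly maximum
    sustained winds (knots) while winds are at tropical-storm strength (>= 35 kt)."""
    return sum(v ** 2 for v in six_hourly_winds_kt if v >= 35) / 10_000

# Hypothetical six-hourly wind observations (knots) for one storm.
example_track = [30, 40, 55, 70, 80, 75, 60, 45, 30]
print(f"ACE for example track: {accumulated_cyclone_energy(example_track):.2f}")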
"This year will almost certainly break records for the number of extreme weather events," Paul Ullrich, professor of regional and global climate modeling at University of California at Davis, told CNBC. Global warming is making extreme weather events more severe, scientists said. "Our own research shows that the observed trend toward more frequent persistent summer weather extremes — heat waves, floods, — is being driven by human-caused warming," Mann told CNBC. Ullrich agrees. "Increases in the frequency and intensity of heatwaves, floods and wildfires can be directly attributable to climate change," Ullrich told CNBC. "Through the emission of greenhouse gases, we have been trapping more heat near the surface, leading to increases in temperature, more moisture in the air, and a drier land surface," Ullrich said. "Scientists are extremely confident that an increasing frequency and intensity of extreme events is a direct consequence of human modification of the climate system." El Niño is like adding lighter fuel to an already smoldering fire. "Under recently emergent El Niño conditions, temperatures are pushed higher worldwide, further compounding increases in temperature brought on by greenhouse gas emissions," Ullrich said. That combination of anthropogenic climate change and El Niño is "spiking some of these extreme events," Mann said. El Niño, which means "little boy" in Spanish, happens when the normal trade winds that blow west along the equator weaken and warmer water gets pushed o the east, toward the west coast of the Americas. In the United States, a moderate to strong El Niño in the fall and winter correlates with wetter-than-average conditions from southern California to the Gulf Coast, and drier-than-average conditions in the Pacific Northwest and Ohio Valley. When global warming and El Niño are hitting at the same time, "it can be difficult separating what is just a weather event or if it is part of a longer trend," Timothy Canty, professor in the department of atmospheric and oceanic science at University of Maryland, told CNBC. But what is clear is that climate change makes it more likely that an extreme weather event will happen. "Higher temperatures from climate change are indisputable, and with each degree increase we're multiplying our changes of getting an extreme heat wave. In the wetter regions of the world, including the Northeastern US, we're expecting more rain and more intense storms," Ullrich told CNBC. "To avoid even more extreme changes, we need to both reduce our reliance on fossil fuels and act to clean up our polluted atmosphere." And as long as global greenhouse gas emissions continues to increase, the trend of more and more frequent extreme weather is expected to continue, Mann says. Decreasing the greenhouse gas emissions released into the atmosphere by burning fossil fuels will help moderate the extreme weather trends. "The good news is that the latest research shows that the surface warming driving more extreme weather events stabilizes quickly when carbon emissions cease. So we can prevent this all from getting worse and worst by decarbonizing our economy rapidly," Mann told CNBC. Every person's contributions to reducing their climate footprint helps, Canty says. "People have asked me essentially 'What can I do as an individual that matters?' and decide not to do anything and instead blame everyone else. Honestly, it's societies made up of individuals that have gotten us to this point," Canty said. 
Individuals can reduce their greenhouse gas emissions by making small changes like turning off the lights when they're not in a room, turning down the heat or up the air conditioning when they're not home, avoiding food waste and using public transportation. Voting also matters a lot, Canty said. Government leaders have been able to make successful progress on international environmental crises in the past, Canty said, pointing to the Montreal Protocol. "There is a roadmap for working together to fix environmental problems in ways that benefit everyone," Canty said. "Tackling the ozone hole required governments, scientists, and businesses to work together and the Montreal Protocol and its amendments have been very successful not only for ozone but for climate," Canty said, noting that the same chemicals that deplete the ozone, chlorofluorocarbons, are also very bad greenhouse gasses. "The ozone hole is slowly recovering and because of actions taken in the 80s we've avoided even worse planetary warming, and we still have air conditioning and hair spray which seemed to be the big panic at the time." If individuals and organizations don't commit to aggressively reducing their greenhouse gas emissions, however, then this battery of extreme weather is a harbinger of the future. "If we fail to act what we're seeing right now is just the tip of the proverbial — melting — iceberg," Mann told CNBC.
Environmental Science
Urban parks built on former waste incineration sites could be lead hotspots, study finds For much of the last century, many cities across the United States and Canada burned their trash and waste in municipal incinerators. Most of these facilities were closed by the early 1970s due to concerns about the pollution they added to the air, but a new Duke University study finds that their legacy of contamination could live on in urban soils. "We found that city parks and playgrounds built on the site of a former waste incinerator can still have greatly elevated levels of lead in their surface soils many decades after the incinerator was closed," said Daniel D. Richter, professor of soils at Duke's Nicholas School of the Environment, who co-led the research. Exposure to lead in soil has been linked to potential long-term health problems, particularly in children. These include possible damage to the brain and nervous system, slowed growth and development, and learning and behavioral problems. To conduct their study, Richter and his students collected and analyzed surface soil samples from three city parks in Durham, N.C. that are located on former incinerator sites closed in the early 1940s. Samples collected from a two-acre section of East Durham Park contained lead levels over 2000 parts per million, more than five times higher than the current U.S. Environmental Protection Agency (EPA) standard for safe soils in children's play areas. Samples collected from Walltown Park mostly contained low lead levels, "but about 10% were concerning and a few were very high," Richter noted. Samples collected from East End Park all contained levels of soil lead below the current EPA threshold for children's safety "and presented no cause for concern," he said. The sharp differences in lead levels between the three parks underscores the need for increased monitoring, he stressed. "Determining where contamination risks persist, and why contamination is decreasing at different rates in different locations, is essential for identifying hotspots and mitigating risks," Richter said. "Many cities should mobilize resources to do widespread sampling and monitoring, and create soil maps and, more specifically, soil lead maps." "That's where we really need to go," Richter said. "Not just in Durham but in hundreds of other cities where parks, as well as churches, schools and homes, may have been built on former waste incinerator and ash disposal sites." By analyzing historic surveys of municipal waste management, the Duke team found that about half of all cities surveyed in the U.S. and Canada incinerated solid waste between the 1930s and 1950s. "These incinerators burned all kinds of garbage and trash, including paint, piping, food cans and other products that contained lead back then," Richter said. The leftover ash, in which lead and other contaminants were concentrated, was sometimes covered with a too-thin layer of topsoil or even spread around parks, new construction sites or other urban spaces as a soil amendment. "Historical surveys indicate a lack of appreciation for the health and environmental hazards of city-waste incinerator ash. Back then, they didn't know what we do now," he said. New technology could help make sampling and monitoring more feasible at the thousands of sites nationwide that may be contaminated, he added. Using a portable X-ray fluorescence instrument, his lab is now able to do a preliminary analysis on a soil sample for multiple metals, including lead, in just 20 seconds. 
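As context for how quick field screening might feed into mapping, the sketch below (Python) flags individual soil readings against a play-area screening level. The 400 mg/kg threshold is an assumption consistent with the roughly fivefold exceedance described above for the 2,000 ppm samples; the article does not state the exact regulatory value, and the readings are invented placeholders rather than data from the Duke study.

# Hypothetical XRF lead readings (mg/kg, equivalent to ppm) from park soil samples.
readings = {"sample_A": 2150, "sample_B": 310, "sample_C": 780, "sample_D": 95}

SCREENING_LEVEL_PPM = 400  # ASSUMED play-area screening level, for illustration only

for sample_id, lead_ppm in readings.items():
    exceedance = lead_ppm / SCREENING_LEVEL_PPM
    status = "above" if lead_ppm > SCREENING_LEVEL_PPM else "below"
    print(f"{sample_id}: {lead_ppm} ppm ({exceedance:.1f}x threshold) - {status} screening level")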
Making use of historical records about waste incineration and ash disposal could also speed efforts to identify hotspots. In their paper, Richter and his students provide histories gleaned from archived public works records, old street maps and newspaper clippings showing where ash was burned and disposed of in six sample cities: Los Angeles; New York City; Baltimore; Spokane, Wash.; Jacksonville, Fla.; and Charleston, S.C. "This is something you could do for many cities to guide monitoring efforts," Richter said. "There's been a lot of interest in mitigating lead exposure in cities, but most until now has been focused on reducing risks within the home. Our study reminds us that risks exist in the outdoor environment, too," he said. Richter and his students published their findings Sept. 11 in Environmental Science & Technology Letters. His co-authors on the new paper were Enikoe Bihari, a 2023 Master of Environmental Management graduate of the Nicholas School who conducted much of the research as part of her Master's Project, and Garrett Grewal, a senior at Duke majoring in Earth and Climate Sciences. More information: Enikoe Bihari et al, Legacies of Pre-1960s Municipal Waste Incineration in the Pb of City Soils, Environmental Science & Technology Letters (2023). DOI: 10.1021/acs.estlett.3c00488 Journal information: Environmental Science & Technology Letters Provided by Duke University
Environmental Science
Nanomaterial boosts potency of coronavirus disinfectants The use of peroxide-based disinfectants has grown with the emergence of the COVID-19 pandemic. Yet, the extensive use of chemical disinfectants to kill viruses and other pathogens can also threaten human health and ecosystems. Now, a research team led by the George Washington University has engineered a new nanomaterial that can boost the potency of common disinfectants. The team showed that when the nanomaterial—a double-atom catalyst—is mixed with a peroxide-based disinfectant, the disinfectant is two-to-four times more effective in disabling a coronavirus strain compared to when the disinfectant is used alone. The ability to enhance disinfectants with nanomaterials engineered from earth-abundant elements like iron and carbon is more sustainable and cost-effective, say the researchers. "Peroxides are often used to kill pathogens but we have to use a much higher concentration of them than we really need," Danmeng Shuai, senior author and associate professor of civil and environmental engineering at GW, said. "With this nanomaterial, we can actually reduce the amount of peroxides we're using daily, which not only reduces costs but also offers a more sustainable method of disinfection while still achieving the best performance for killing environmental pathogens." Shuai and the team, which included David P. Durkin from the United States Naval Academy and Hanning Chen from the University of Texas at Austin, developed a Fe−Fe double-atom catalyst, which they mixed with a peroxide and coronavirus strain in two different mediums: artificial saliva and freshwater drawn from a local river to mimic contact surface cleaning and water disinfection, respectively. The researchers observed that the nanomaterial worked by shuttling electrons from the virus to the peroxide. As a result, the virus became oxidized, damaging the viral genome and proteins as well as the coronavirus lifecycle in the host cells. "Our work paves a new avenue of leveraging advanced materials for improving disinfection, sanitation, and hygiene practices," said Zhe Zhou, first author on the paper and a Ph.D. candidate at GW. "Our discovery also has broad engineering applications for advancing catalysis in pollution control, enabling effective and safe disinfection, controlling the environmental transmission of pathogens, and ultimately protecting public health." The nanomaterial could be scaled to deactivate environmental pathogens in diverse environments, the study's researchers said. Examples include water purification—potentially packing the nanomaterial in columns and allowing water to pass through, purifying the water in the process. It can also be scaled to use in spray form to disinfect contact surfaces, like countertops. The researchers said future studies should focus on optimizing the materials to further advance the disinfection potency to achieve eco-friendly and robust disinfection to further protect public health. The findings are published in the journal Environmental Science & Technology. More information: Zhe Zhou et al, Fe–Fe Double-Atom Catalysts for Murine Coronavirus Disinfection: Nonradical Activation of Peroxides and Mechanisms of Virus Inactivation, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.3c00163
Environmental Science
Warming and internal nutrient loading together interfere with long-term stability of lake restoration Urban lakes are ubiquitous worldwide and tend to be highly eutrophic, making increases in the frequency, duration and magnitude of harmful algal blooms a widespread threat to ecological and human health. For more than half a century, phosphate (P) precipitation has been one of the most effective treatments to mitigate eutrophication in these lakes. However, after a period of high effectiveness, re-eutrophication can occur, leading to the return of harmful algal blooms. While such abrupt ecological changes have been attributed to internal P loading, the role of lake warming and its potential synergistic effects with internal loading has thus far been largely unexplored. Researchers led by Dr. Kong Xiangzhen and Prof. Dr. Xue Bin from the Nanjing Institute of Geography and Limnology of the Chinese Academy of Sciences, together with their international collaborators, have addressed this question by quantifying the contributions of lake warming and its potential synergistic effects with internal P loading in an urban lake in central Germany that suffered abrupt re-eutrophication and cyanobacterial blooms in 2016 (30 years after the first P precipitation). Their results were published in Environmental Science & Technology on Feb. 20. In this study, a process-based lake ecosystem model (GOTM-WET) was developed using a high-frequency monitoring dataset covering eutrophic and oligotrophic conditions over 30 years. Model analyses indicated that, for the abrupt onset of cyanobacterial blooms, internal P release accounted for 68% of the biomass increase, while lake warming contributed 32%, comprising direct promotion of growth (18%) and synergistic effects via intensification of internal P loading (14%). The model further showed that the synergy was due to prolonged warming of the lake hypolimnion and oxygen depletion. "Our study exemplifies how process-based mechanistic modeling can help to tease apart important drivers of abrupt shifts and cyanobacterial blooms in lakes, especially in an era of rapid global change, including climate change and human activities," said Dr. Kong. This study unravels the substantial role of lake warming in promoting cyanobacterial blooms in re-eutrophicated lakes. The indirect effects of warming on cyanobacteria by promoting internal loading need more attention in future lake research and management. "Our results will have far-reaching implications for lake restoration and management, as the nutrient targets we have been using to reach or maintain a certain trophic state will not work in a much warmer future and will need to be adapted, i.e., greater nutrient reductions and restoration efforts will be required," said Dr. Kong. More information: Xiangzhen Kong et al, Synergistic Effects of Warming and Internal Nutrient Loading Interfere with the Long-Term Stability of Lake Restoration and Induce Sudden Re-eutrophication, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.2c07181
Environmental Science
In the arid and drought-stricken western Great Basin, sparse surface water means rural communities often rely on private groundwater wells. Unlike municipal water systems, well water quality in private wells is unregulated, and a new study shows that more than 49 thousand well users across the region may be at risk of exposure to unhealthy levels of arsenic in drinking water. Led by researchers at DRI and the University of Hawai'i Cancer Center and published February 16th in Environmental Science and Technology, the study used data from groundwater wells across the western Great Basin to build a model to predict the probability of elevated arsenic in groundwater, and the location and number of private well users at risk. According to the study, the Carson Desert basin (including the town of Fallon, Nevada), Carson Valley (Minden and Gardnerville, Nevada), and the Truckee Meadows (Reno), have the highest population of well users at risk. The new study builds on previous research showing that 22% of 174 domestic wells sampled in Northern Nevada had arsenic levels exceeding the EPA guideline. "What we are finding is that in our region, we have a high probability for elevated arsenic compared to most other regions in the country," said Daniel Saftner, M.S., a hydrogeologist at DRI and lead author of the study. "And we are seeing that geothermal and tectonic processes that are characteristic of the Great Basin contribute to the high concentrations of naturally occurring arsenic in the region's groundwater." The region's mountains are also primary sources of arsenic. "As the arsenic-rich volcanic and meta-sedimentary rocks that form the mountains erode, sediment is transported to the valleys below," says Steve Bacon, Ph.D., DRI geologist and study co-author. Water percolating through the valley floor then carries arsenic into the groundwater. Deeper, older groundwater and geothermal waters tend to have a higher arsenic concentration and can migrate upward along faults and mix with shallow groundwater. "We really wanted to better understand the unique geologic factors that contribute to high arsenic in this study," Saftner says. "It's important for us to think about the role of the environment as it pertains to human health -- where we live can influence what our long-term health looks like." To train and test the predictive model, the research team used data collected through the Healthy Nevada Project, including water samples from 163 domestic wells primarily located near Reno, Carson City, and Fallon. These data were supplemented with 749 groundwater samples compiled from the USGS National Water Information System. The model uses tectonic, geothermal, geologic, and hydrologic variables to predict the probability of elevated arsenic levels across the region. Although the U.S. EPA has set an arsenic concentration guideline of 10 µg/L for public drinking water, previous research has shown a range of health effects from long-term exposure to levels above 5 µg/L. Using this concentration as the benchmark, the model and map show that much of the region's groundwater -- particularly in western and central Nevada -- is predicted to have more than a 50% probability of elevated arsenic levels. "Community members can use our arsenic hazard map to see what the risk is at their location, which might motivate them to test their well water," says Monica Arienzo, Ph.D., associate research professor at DRI and study co-author. 
"Then, if they have high levels of arsenic or other contaminants, they can take steps to reduce their exposure, such as installing a water treatment system." The findings from this study are potentially useful for a range of different applications. "The results can be useful for water utilities or water managers who tap similar shallow aquifers for their water supply," says Saftner, "as well as irrigation wells that source water from these aquifers." The research team plans to use their model to take a closer look at the health impacts of prolonged arsenic exposure. "Through the Healthy Nevada Project, genetic data and health records are paired with environmental data to help determine whether there are associations between the levels of arsenic in a community's groundwater and specific health outcomes," stated Joe Grzymski, Ph.D., research professor at DRI and principal investigator of the project. Story Source: Journal Reference: Cite This Page:
Environmental Science
Abstract Human society is dependent on nature [1,2], but whether our ecological foundations are at risk remains unknown in the absence of systematic monitoring of species’ populations [3]. Knowledge of species fluctuations is particularly inadequate in the marine realm [4]. Here we assess the population trends of 1,057 common shallow reef species from multiple phyla at 1,636 sites around Australia over the past decade. Most populations decreased over this period, including many tropical fishes, temperate invertebrates (particularly echinoderms) and southwestern Australian macroalgae, whereas coral populations remained relatively stable. Population declines typically followed heatwave years, when local water temperatures were more than 0.5 °C above temperatures in 2008. Following heatwaves [5,6], species abundances generally tended to decline near warm range edges, and increase near cool range edges. More than 30% of shallow invertebrate species in cool latitudes exhibited high extinction risk, with rapidly declining populations trapped by deep ocean barriers, preventing poleward retreat as temperatures rise. Greater conservation effort is needed to safeguard temperate marine ecosystems, which are disproportionately threatened and include species with deep evolutionary roots. Fundamental among such efforts, and broader societal needs to efficiently adapt to interacting anthropogenic and natural pressures, is greatly expanded monitoring of species’ population trends [7,8]. Data availability Raw data from the Reef Life Survey and Australian Temperate Reef Collaboration programmes are accessible through a live data portal accessed via either the Reef Life Survey (https://www.reeflifesurvey.com/) or Australian Ocean Data Network (https://portal.aodn.org.au/) websites using the keywords ‘National Reef Monitoring Network’. Sea surface temperature data were obtained from Coral Reef Watch (available through https://coralreefwatch.noaa.gov/product/5km/), chlorophyll data were obtained from Bio-ORACLE (https://www.bio-oracle.org/), fish trait data were obtained in part from Fishbase (https://www.fishbase.org/), and invertebrate trait data from Sealifebase (https://www.sealifebase.ca/), as described in Methods. Code availability Analytical computations were undertaken in R version 4.2.0 (using libraries tidyverse, janitor, zoo, magick, cowplot, scales, patchwork, ggplot, rag, gt, gtable, grid, nlme, here and kableExtra) [57], as described in an R markdown script available at https://github.com/FreddieJH/continental_reef_trends. References Whitmee, S. et al. Safeguarding human health in the Anthropocene epoch: report of The Rockefeller Foundation–Lancet Commission on planetary health. Lancet 386, 1973–2028 (2015). Cardinale, B. J. et al. Biodiversity loss and its impact on humanity. Nature 486, 59–67 (2012). Ceballos, G., Ehrlich, P. R. & Dirzo, R. Biological annihilation via the ongoing sixth mass extinction signaled by vertebrate population losses and declines. Proc. Natl Acad. Sci. USA 114, E6089–E6096 (2017). Duffy, J. E. et al.
Toward a coordinated global observing system for seagrasses and marine macroalgae. Front. Mar. Sci. 6, https://doi.org/10.3389/fmars.2019.00317 (2019). Wernberg, T. et al. Climate-driven regime shift of a temperate marine ecosystem. Science 353, 169–172 (2016). Hughes, T. P., Kerry, J. T. & Simpson, T. Large-scale bleaching of corals on the Great Barrier Reef. Ecology 99, 501 (2017). Jetz, W. et al. Essential biodiversity variables for mapping and monitoring species populations. Nat. Ecol. Evol. 3, 539–551 (2019). Day, J. The need and practice of monitoring, evaluating and adapting marine planning and management—lessons from the Great Barrier Reef. Mar. Policy 32, 823–831 (2008). Loh, J. et al. The Living Planet Index: using species population time series to track trends in biodiversity. Phil. Trans. R. Soc. B 360, 289–295 (2005). Vogel, G. Where have all the insects gone? Science 356, 576–579 (2017). Vieilledent, G. et al. Combining global tree cover loss data with historical national forest cover maps to look at six decades of deforestation and forest fragmentation in Madagascar. Biol. Conserv. 222, 189–197 (2018). Dirzo, R. et al. Defaunation in the Anthropocene. Science 345, 401–406 (2014). Regan, E. C. et al. Global trends in the status of bird and mammal pollinators. Conserv. Lett. 8, 397–403 (2015). Bergstrom, D. M. et al. Combating ecosystem collapse from the tropics to the Antarctic. Glob. Change Biol. 27, 1692–1703 (2021). Watson, R. A. & Tidd, A. Mapping nearly a century and a half of global marine fishing: 1869–2015. Mar. Policy 93, 171–177 (2018). Species Survival Commission. 2006 IUCN Red List of Threatened Species (IUCN, 2006). Living Planet Report 2020—Bending The Curve of Biodiversity Loss (World Wildlife Fund, 2020). Edgar, G. J., Ward, T. J. & Stuart-Smith, R. D. Rapid declines across Australian fishery stocks indicate global sustainability targets will not be achieved without expanded network of ‘no-fishing’ reserves. Aquat. Conserv. 28, 1337–1350 (2018). Pitois, S. G., Lynam, C. P., Jansen, T., Halliday, N. & Edwards, M. Bottom-up effects of climate on fish populations: data from the continuous plankton recorder. Mar. Ecol. Progr. Ser. 456, 169–186 (2012). Stuart-Smith, R. D. et al. Assessing national biodiversity trends for rocky and coral reefs through the integration of citizen science and scientific monitoring programs. BioScience 67, 134–146 (2017). Knowlton, N. et al. in Life in the World’s Oceans: Diversity, Distribution and Abundance (ed. McIntyre, A. D.) 65–77 (Wiley–Blackwell, 2010). Edgar, G. J. & Stuart-Smith, R. D. Systematic global assessment of reef fish communities by the Reef Life Survey program. Sci. Data 1, 140007 (2014). Edgar, G. J. & Barrett, N. S. An assessment of population responses of common inshore fishes and invertebrates following declaration of five Australian marine protected areas. Environ. Conserv. 39, 271–281 (2012). Emslie, M. J. et al. Decades of monitoring have informed the stewardship and ecological understanding of Australia’s Great Barrier Reef. Biol. Conserv. 252, 108854 (2020). Edgar, G. J. et al. Establishing the ecological basis for conservation of shallow marine life using Reef Life Survey. Biol. Conserv. 252, 108855 (2020). Hughes, T. P. et al. Global warming and recurrent mass bleaching of corals. Nature 543, 373–377 (2017). Stuart-Smith, R. D., Brown, C. J., Ceccarelli, D. M. & Edgar, G. J. Ecosystem restructuring along the Great Barrier Reef following mass coral bleaching. Nature 560, 92–96 (2018). Hughes, T. P. et al. 
Emergent properties in the responses of tropical corals to recurrent climate extremes. Curr. Biol. 31, 5393–5399.e5393 (2021). Johnson, C. R. et al. Climate change cascades: shifts in oceanography, species’ ranges and subtidal marine community dynamics in eastern Tasmania. J. Exp. Mar. Biol. Ecol. 400, 17–32 (2011). IUCN Standards and Petitions Working Group. Guidelines for Using the IUCN Red List Categories and Criteria, Version 7.0. http://intranet.iucn.org/webfiles/doc/SSC/RedList/RedListGuidelines.pdf (2008). Fraser, K. M., Stuart-Smith, R. D., Ling, S. D. & Edgar, G. J. High biomass and productivity of epifaunal invertebrates living amongst dead coral. Mar. Biol. 168, 102 (2021). Gilmour, J. P. et al. The state of Western Australia’s coral reefs. Coral Reefs 38, 651–667 (2019). Long-Term Monitoring Program Annual Summary Report of Coral Reef Condition 2020/2021 (Australian Institute of Marine Science, 2021). Eakin, C. M. et al. Caribbean corals in crisis: record thermal stress, bleaching, and mortality in 2005. PLoS ONE 5, e13969 (2010). Edgar, G. J., Ward, T. J. & Stuart-Smith, R. D. Weaknesses in stock assessment modelling and management practices affect fisheries sustainability. Aquat. Conserv. 29, 2010–2016 (2019). Babcock, R. C. et al. Decadal trends in marine reserves reveal differential rates of change in direct and indirect effects. Proc. Natl Acad. Sci. USA 107, 18256–18261 (2010). Ling, S. D., Johnson, C. R., Frusher, S. D. & Ridgway, K. R. Overfishing reduces resilience of kelp beds to climate-driven catastrophic phase shift. Proc. Natl Acad. Sci. USA 106, 22341–22345 (2009). Stuart-Smith, R. D., Edgar, G. J. & Bates, A. E. Thermal limits to the geographic distributions of shallow-water marine species. Nat. Ecol. Evol. 1, 1846–1852 (2017). Chaudhary, C., Richardson, A. J., Schoeman, D. S. & Costello, M. J. Global warming is causing a more pronounced dip in marine species richness around the equator. Proc. Natl Acad. Sci. USA 118, e2015094118 (2021). Bennett, S. et al. The ‘Great Southern Reef’: social, ecological and economic value of Australia’s neglected kelp forests. Mar. Freshwater Res. 67, 47–56 (2016). Stuart-Smith, R. D. et al. Loss of native rocky reef biodiversity in Australian metropolitan embayments. Mar. Pollut. Bull. 95, 324–332 (2015). Ling, S. D. et al. Pollution signature for temperate reef biodiversity is short and simple. Mar. Pollut. Bull. 130, 159–169 (2018). Davies, T. E., Maxwell, S. M., Kaschner, K., Garilao, C. & Ban, N. C. Large marine protected areas represent biodiversity now and under climate change. Sci. Rep. 7, 9569 (2017). Grech, A., Edgar, G. J., Fairweather, P., Pressey, R. L. & Ward, T. J. in Austral Ark (eds Stow, A., Maclean, N. & Holwell, G. I.) 582–599 (Cambridge Univ. Press, 2014). Bates, A. E., Stuart-Smith, R. D., Barrett, N. S. & Edgar, G. J. Biological interactions both facilitate and resist climate-related functional change in temperate reef communities. Proc. R. Soc. B 284, 20170484 (2017). Verges, A. et al. Tropicalisation of temperate reefs: Implications for ecosystem functions and management actions. Funct. Ecol. 33, 1000–1013 (2019). Popova, E. et al. From global to regional and back again: common climate stressors of marine ecosystems relevant for adaptation across five ocean warming hotspots. Glob. Change Biol. 22, 2038–2053 (2016). Edgar, G., Jones, G., Kaly, U., Hammond, L. & Wilson, B. Endangered Species: Threats and Threatening Processes in the Marine Environment. 
Issues Paper (Australian Marine Science Association, 1991). Edgar, G. J. et al. El Niño, fisheries and animal grazers interact to magnify extinction risk for marine species in Galapagos. Glob. Change Biol. 16, 2876–2890 (2010). Koslow, J. A., Miller, E. F. & McGowan, J. A. Dramatic declines in coastal and oceanic fish communities off California. Mar. Ecol. Progr. Ser. 538, 221–227 (2015). Burrows, M. T. et al. Ocean community warming responses explained by thermal affinities and temperature gradients. Nat. Clim. Change 9, 959–963 (2019). Hamilton, S. L. et al. Disease-driven mass mortality event leads to widespread extirpation and variable recovery potential of a marine predator across the eastern Pacific. Proc. R. Soc. B 288, 20211195 (2021). Kummu, M. & Varis, O. The world by latitudes: a global analysis of human population, development level and environment across the north–south axis over the past half century. Appl. Geogr. 31, 495–507 (2011). Halpern, B. S. et al. A global map of human impact on marine ecosystems. Science 319, 948–951 (2008). Edgar, G. J. et al. Abundance and local-scale processes contribute to multi-phyla gradients in global marine diversity. Sci. Adv. 3, e1700419 (2017). Krumhansl, K. et al. Global patterns of kelp forest change over the past half-century. Proc. Natl Acad. Sci USA 113, 13785–13790 (2016). R Core Team. R: A language and environment for statistical computing., (R Foundation for Statistical Computing, 2021). Stuart-Smith, R. D., Barrett, N. S., Stevenson, D. G. & Edgar, G. J. Stability in temperate reef communities over a decadal time scale despite concurrent ocean warming. Glob. Change Biol. 16, 122–134 (2010). James, L. C., Marzloff, M. P., Barrett, N., Friedman, A. & Johnson, C. R. Changes in deep reef benthic community composition across a latitudinal and environmental gradient in temperate Eastern Australia. Mar. Ecol. Progr. Ser. 565, 35–52 (2017). Collen, B. et al. Monitoring change in vertebrate abundance: the Living Planet Index. Conserv. Biol. 23, 317–327 (2009). Skirving, W. et al. CoralTemp and the Coral Reef Watch Coral Bleaching Heat Stress Product Suite Version 3.1. Remote Sens. 12, 3856 (2020). Tyberghein, L. et al. Bio-ORACLE: a global environmental dataset for marine species distribution modelling. Global Ecol. Biogeogr. 21, 272–281 (2012). Edgar, G. J. Australian Marine Life: The Plants and Animals of Temperate Waters 2nd edn (Reed New Holland, 2008). Edgar, G. J. Tropical Marine Life of Australia: Plants and Animals of the Central Pacific (New Holland, 2019). Froese, R. & Pauly, D. FishBase 2000: Concepts, Designs and Data Sources (ICLARM, 2000). Pinheiro, J., Bates, D., DebRoy, S. & Sarkar, D. nlme: linear and nonlinear mixed effects models (version 3.1-155). http://CRAN.R-project.org/package=nlme (2015). R Core Team. R: A Language and Environment for Statistical Computing (version 4.2.0). http://www.R-project.org/ (R Foundation for Statistical Computing, 2013). Becker, R. A., Wilks, A. R. & Brownrigg, R. mapdata: Extra map databases. R package version 2.3.0 (2018). 
Acknowledgements Support for field surveys and analyses was provided by the Australian Research Council; the Australian Institute of Marine Science; the Institute for Marine and Antarctic Studies; Parks Australia; Department of Natural Resources and Environment Tasmania; New South Wales Department of Primary Industries; Parks Victoria; South Australia Department of Environment, Water and Natural Resources; Western Australia Department of Biodiversity, Conservation and Attractions; The Ian Potter Foundation; Minderoo Foundation; and the Marine Biodiversity Hub, a collaborative partnership supported through the Australian Government’s National Environmental Science Programme. We thank the many colleagues and Reef Life Survey (RLS) divers who participated in data collection. RLS and ATRC data management is supported by Australia’s Integrated Marine Observing System (IMOS)—IMOS is enabled by the National Collaborative Research Infrastructure Strategy (NCRIS). Ethics declarations Competing interests The authors declare no competing interests. Peer review information Nature thanks Sergio Floeter and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Extended data figure captions: Sites visited on single occasions are shown in blue, and sites visited in at least two years in red; most individual sites overlap. Number of annual surveys undertaken at sites from 2008 to 2021. Extended Data Fig. 3: Recent change in mean abundance around the Australian continent for different species groupings. Major taxonomic and biogeographic groupings were assessed using data with different site sampling frequency. (a) Full dataset, as shown in Fig. 1, that includes sites surveyed in at least two separate years. (b) Reduced dataset that includes sites surveyed in at least three years. (c) Reduced dataset that includes sites sampled in at least four years. Data relate to the 10-year period from 2011 to 2021. The number of species considered per grouping and 95% confidence intervals are shown. Images by GJE. Extended Data Fig. 4: Mean annual population trends for reef species with different biogeographic affinity in different regions. Species trends relative to 2008 were categorised within three biogeographic groupings (tropical >23 °C; warm-temperate >17.5 °C, <23 °C; cool temperate <17.5 °C) for six regions around Australia. (a) Northwest: Northern Territory, Western Australian coast north of 27° S. (b) Southwest: western coast south of 27° S. (c) South: southern coast east to 148° E. (d) Northeast: Queensland, Coral Sea. (e) Southeast: New South Wales; Victoria west to 148° E. (f) Tasmania. In contrast to data presented in Fig. 5, relative abundance for each species and site was calculated using Population Trend Assessment Method 2, as the log ratio relative to the site mean across years, then averaged for sites within 1° x 1° grid cells. Shading indicates ±1 SE (assuming a normal distribution across species estimates). Cite this article: Edgar, G.J., Stuart-Smith, R.D., Heather, F.J. et al. Continent-wide declines in shallow reef life over a decade of ocean warming. Nature (2023). https://doi.org/10.1038/s41586-023-05833-y
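The Extended Data Fig. 4 caption above describes the trend metric as a log ratio of each species' abundance relative to its site mean across years, averaged within 1° x 1° grid cells. A rough re-expression of that calculation in Python/pandas is sketched below; the original analysis was carried out in R, and the column names and toy records here are illustrative assumptions, not the Reef Life Survey data schema.

import numpy as np
import pandas as pd

# Toy survey records: one abundance value per species, site and year (illustrative only).
surveys = pd.DataFrame({
    "species":   ["A", "A", "A", "B", "B", "B"],
    "site":      ["s1", "s1", "s1", "s1", "s1", "s1"],
    "lat_bin":   [-42, -42, -42, -42, -42, -42],
    "lon_bin":   [148, 148, 148, 148, 148, 148],
    "year":      [2008, 2015, 2021, 2008, 2015, 2021],
    "abundance": [10, 6, 3, 4, 5, 8],
})

# Log ratio of abundance relative to the site mean across years, per species and site.
site_mean = surveys.groupby(["species", "site"])["abundance"].transform("mean")
surveys["log_ratio"] = np.log(surveys["abundance"] / site_mean)

# Average the log ratios across sites within each 1-degree grid cell, per species and year.
trend = (surveys
         .groupby(["species", "lat_bin", "lon_bin", "year"])["log_ratio"]
         .mean()
         .reset_index())
print(trend)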
Environmental Science
Large chunks of the Navajo Nation in the Southwest lack access to clean drinkable water, a trend that has been rising in many parts of the U.S. in recent years. A research team led by engineers with The University of Texas at Austin aims to change that. The team has developed a new water filtration solution for members of the Navajo Nation, lining clay pots with pine tree resin collected from the Navajo Nation and incorporating tiny, silver-based particles that can be used to purify water to make it drinkable. "Making water filtration technology cheap doesn't solve all the problems, and making it effective doesn't solve everything either," said Navid Saleh, a professor in the Fariborz Maseeh Department of Civil, Architectural and Environmental Engineering and one of the leaders on the project. "You have to think about the people you are making it for." And that's what the researchers did. They worked closely with a third-generation potter from Arizona -- Deanna Tso, who is also a co-author on the paper -- to create a device that is simple for the users. All they have to do is pour water through the clay pots, and the coated pottery removes bacteria from water and generates clean, drinkable water. The Navajo Nation has a history of mistrust of outsiders, the researchers say, and that makes it less likely that people there would adopt a new technology made entirely by others. Using pottery, working with the community, and relying on local materials were important to the effectiveness of this project. The research appears in a new paper in the journal Environmental Science & Technology. "Navajo pottery is at the heart of this innovation because we hoped it would bridge a trust gap," said Lewis Stetson Rowles III, now a faculty member at Georgia Southern University's Department of Civil Engineering and Construction after earning a Ph.D. from UT in 2021. "Pottery is sacred there, and using their materials and their techniques could help them get more comfortable with embracing new solutions." Using silver particles for water filtration is not the main innovation. Others have used this technology in the past. The key is controlling the release of nanoparticles, which can reduce the usable life of the filters. And the silver particles mix at high volume with some of the chemicals, such as chloride and sulfide, in the untreated water, leading to a "poison layer" that can reduce the disinfection efficacy of the silver particles on the clay lining. The researchers used materials abundant in the environment of the community, including pine tree resin, to mitigate the uncontrolled release of silver particles during the water purifying process. The materials and construction process for the pots cost less than $10, making for a potentially low-cost solution. "This is just the beginning of trying to solve a local problem for a specific group of people," Saleh said. "But the technical breakthrough we've made can be used all over the world to help other communities." This research dates back to close to 10 years. The original idea for this project was sketched out on a Subway napkin and refined over many trips to those communities, experiments, and iterations. Rowles himself has pottery skills, and a desire to work on this issue of a lack of drinking water in native communities. The next step for the researchers is to grow the technology and find other materials and techniques to help communities use the materials abundant in their regions to help create fresh, drinkable water. 
The researchers are not seeking to commercialize the research, but they are eager to share it with potential partners. Other team members on the project include Desmond Lawler, a professor emeritus, who was Rowles' co-adviser; civil, architectural and environmental engineering professor Mary Jo Kirisits; and Andrei Dolocan, a senior research scientist at the Texas Materials Institute.
Environmental Science
Green spaces can save lives, according to urban big data Against the backdrop of global climate change, extreme heat events are becoming hotter, longer, and more frequent. Such sustained extreme heat has severely impacted people's health all over the world. It is generally acknowledged that some urban landscape features, such as green spaces, can offer respite from high temperatures. However, few studies have shown how changing the landscape of a city affects its residents' ability to cope with extreme heat and the effect on mortality. In two papers, researchers from China, South Africa, and the UK found that green spaces, such as trees lining streets, can alleviate extreme heat's negative impacts on human health. One of the two papers, "Effect Modifications of Overhead-View and Eye-Level Urban Greenery on Heat-Mortality Associations: Small-Area Analyses Using Case Time Series Design and Different Greenery Measurements," has been published in Environmental Health Perspectives. The other paper, "Effects of the urban landscape on heat wave-mortality associations in Hong Kong: comparison of different heat wave definitions," has been published in the Frontiers of Environmental Science and Engineering. They also observed a higher risk of heat wave-associated mortality when buildings are more densely packed together and when the land surface temperature is higher at night. The research findings provide scientific evidence for policymakers and urban designers to form and implement strategies and health interventions for extreme heat conditions, such as incorporating different types of greenery into cities. Health-affecting landscapes The researchers chose to study Hong Kong due to its hot and humid summers as well as its high building and population density. They analyzed mortality, meteorological, and land use data collected between 2005 and 2018. They found that green spaces have a remarkable ability to protect people from heat-related illnesses, especially under longer and more intense heat waves, says Dr. Jinglu Song, the first and corresponding author of the two papers. Dr. Song is an assistant professor at Xi'an Jiaotong-Liverpool University's Department of Urban Planning and Design. "Vegetation tends to release more water vapor as the surrounding temperatures rise, thus leading to a better cooling effect of the air around it. Trees, in particular, can also provide shade for residents and pedestrians. "In contrast, areas with a high density of buildings are hotter as the concrete structures absorb more solar energy and block air from passing between them, thus trapping heat." Dr. Song continues, "During prolonged heat events, buildings may not be able to completely cool off at night, which increases nighttime temperatures. "Higher nighttime land surface temperatures can contribute to a more intensive urban heat island effect, a phenomenon where urban areas are hotter than nearby countryside. This can lead to temperatures staying higher for longer," she says. Age and gender differences The researchers also found that, during heat waves, older adults (over 65) and men are more likely to be affected by changes made to the urban landscape. Dr. Song explains, "Older adults tend to engage in fewer heat-adaptive behaviors like turning on fans or air conditioners, taking a shower, or leaving the house. "Elderly people's daily activities are often within or close to their residential areas. Therefore, they're more susceptible to the surrounding urban landscape. 
"Also, due to different gender distributions in industrial sectors, men may work outdoors for longer and are more likely than women to be affected by heat waves in certain situations." Measuring urban greenery Urban greenery is usually measured using satellite imaging, however, this often misses aspects people see on the ground. Previous studies may, therefore, have underestimated the health benefits of eye-level urban greenery against extreme heat events. To overcome these problems, the researchers assessed urban greenery by using data from satellite imagery and Google Street View images. Compared with the assessment of overhead-view urban greenery using satellite images, the eye-level greenery measurement is more accurate in reflecting what people see and access when they walk in the city. Dr. Song says, "It's difficult to use the satellite images to detect urban greenery like vertical green walls or shrubs and lawns covered by trees. This is why we used panoramic streetscape images from Google Street View to have a more precise understanding of how street-level greenery affects heat mortality." "Green spaces and foliage along walking streets can improve a pedestrian's thermal comfort in hot weather. It also reduces people's exposure to unhealthy factors like air pollution and traffic noise. "The design of our cities will become increasingly important in mitigating the risk of heat-associated death in the future as extreme heat events are projected to become more frequent, longer and more intensified." The studies were conducted by researchers from Xi'an Jiaotong-Liverpool University (XJTLU), China; London School of Hygiene and Tropical Medicine, UK; University of Liverpool, UK; City University of Hong Kong, China; North West University, South Africa; and Zhejiang University, China. More information: Jinglu Song et al, Effects of the urban landscape on heatwave-mortality associations in Hong Kong: comparison of different heatwave definitions, Frontiers of Environmental Science & Engineering (2023). DOI: 10.1007/s11783-024-1771-z Jinglu Song et al, Effect Modifications of Overhead-View and Eye-Level Urban Greenery on Heat–Mortality Associations: Small-Area Analyses Using Case Time Series Design and Different Greenery Measurements, Environmental Health Perspectives (2023). DOI: 10.1289/EHP12589 Journal information: Environmental Health Perspectives Provided by Xi'an jiaotong-Liverpool University
Environmental Science
Particles from tyre wear were the most prevalent microplastic in urban stormwater, a new Griffith-led study has found. Published in Environmental Science & Technology, the study showed that in stormwater runoff during rain, approximately 19 out of every 20 microplastics collected were tyre wear particles, with anywhere from 2 to 59 particles per litre of water. “Pollution of our waterways by microplastics is an emerging environmental concern due to their persistence and accumulation in aquatic organisms and ecosystems,” said lead author Dr Shima Ziajahromi, a research fellow at the Australian Rivers Institute. “Stormwater runoff, which contains a mixture of sediment, chemical, organic and physical pollutants, is a critical pathway for microplastics to be washed off urban environments during rain and into local aquatic habitats. “But to date, our knowledge of the amount of microplastics in urban stormwater, particularly tyre wear particles, is limited, as are the potential strategies we can use to minimise this source.” Tyre rubber contains up to 2,500 chemicals, and the contaminants that leach from tyres are considered more toxic to bacteria and microalgae than other plastic polymers. “Due to the analytical challenges in measuring this source of microplastics in stormwater, research to date often lacks information about the actual number of tyre wear particles in water samples,” said Dr Ziajahromi. Quantitative information of this type is crucial to improve our understanding of the amount of tyre wear particles in stormwater, to assess the risk to the environment, and to develop management strategies. “Our study quantified and characterised microplastics and tyre wear particles in both stormwater runoff and sediment of stormwater drainage systems in Queensland,” said co-author Professor Fred Leusch, who leads the Australian Rivers Institute’s Toxicology Research Program. “We also assessed the effectiveness of a stormwater treatment device to capture and remove these contaminants from stormwater, and evaluated the role of a constructed stormwater wetland in capturing microplastics in the sediment and removing them from stormwater runoff. “The device is a bag made of 0.2 millimetre mesh which can be retrofitted to stormwater drains. Although originally designed to capture gross pollutants, sediment, litter and oil and grease, it significantly reduced microplastics from raw runoff, with up to 88% less microplastics in treated water which had passed through the device.” Sediment samples collected from the inlet and outlet of a constructed stormwater wetland contained between 1,450 and 4,740 particles in every kilogram of sediment, with more microplastics in the sediment at the inlet than the outlet, indicating the wetland’s ability to remove them from stormwater. “Microplastics that enter constructed wetlands for stormwater drainage systems settle in the sediment and form a biofilm, leading to their accumulation over time, removing them from stormwater runoff,” said Dr Ziajahromi. “Urban stormwater runoff typically requires treatment for the removal of suspended solids and nutrients such as nitrogen and phosphorus in many jurisdictions in Australia, with some also requiring the removal of gross pollutants.
However, regulations are lagging behind when it comes to microplastics and tyre wear particles.” “Our findings show that both constructed wetlands and the stormwater capture device are strategies that could potentially be used to prevent or at least decrease the amount of microplastics and tyre wear particles being transported from stormwater into our waterways.”
Environmental Science
Former federal officials slam WHO for misleading guidance on ‘forever chemicals’ The World Health Organization’s (WHO) draft drinking water guidelines for “forever chemicals” disregard best available science and require extensive revisions, two former federal officials argued in a new position paper. The WHO draft, which focuses on the two most well-known types of per- and polyfluoroalkyl substances (PFAS), suggests guidance levels that are 25 times higher than those recently proposed by the Environmental Protection Agency (EPA), according to the paper, published on Thursday in Environmental Science & Technology. The co-authors, Elizabeth “Betsy” Southerland and Linda Birnbaum, skewered the WHO working group for expressing uncertainty as to whether the two compounds, PFOA and PFOS, are linked to adverse health outcomes. Such a conclusion “represents a striking and inappropriate disregard of the best available science,” wrote the authors, who respectively served as director of Science and Technology in the EPA Office of Water and director of the National Institute of Environmental Health Sciences. PFOA and PFOS are just two among thousands of “forever chemicals,” many of which are connected to thyroid disease, kidney cancer and testicular cancer, among other illnesses. Known for their ability to linger in the human body and the environment, PFAS are present in a variety of household products and in the foam used to fight jet fuel fires. While WHO’s draft guidance suggests a limit of 100 parts per trillion of PFOA and PFOS in drinking water, the EPA recently proposed setting that bar at 4 parts per trillion. One part per trillion is approximately equivalent to one droplet in an Olympic-size swimming pool. Other countries have already set stricter limits; for example, Denmark established 2 parts per trillion as the standard for four different types of PFAS, the authors noted. Southerland and Birnbaum also mentioned a February announcement from the European Chemicals Agency, which is evaluating the possible restriction of about 10,000 types of PFAS in manufacturing processes. In addition to describing WHO’s suggested guidance levels as “arbitrary,” the authors also slammed the agency for “casting doubt on the effectiveness” of certain PFAS filtration technologies. WHO’s working group described the removal effectiveness of ion exchange filtration systems with uncertainty, while neglecting evidence from the EPA documenting otherwise, according to the paper. “WHO’s PFAS draft guidance values misrepresent the effectiveness of affordable, readily available treatment technology and do not take into account the overwhelming global scientific evidence of serious health effects in epidemiological studies,” Southerland and Birnbaum stated. Crediting the European Chemicals Agency for considering restrictions on PFAS, the authors said that “it is stunning that the WHO maintains no health-based guidance values can be developed.” “To support the work of public health agencies worldwide providing people with safe drinking water, the WHO guidance levels need to be extensively revised,” they concluded. 
In response to the position paper, a spokesperson for WHO said that “although the process of developing the guideline values is still ongoing, WHO’s advice to its member states in relation to PFOS and PFOA in drinking-water is guided by key principles laid down in the draft background document.” Those principles include striving to achieve PFAS levels “that are as low as reasonably practical,” while minimizing contamination of water sources, stopping non-essential uses of PFAS and balancing risks from PFAS exposure against risks associated with inadequate supplies of drinking water, according to the agency. The spokesperson emphasized that the initial guidelines document constitutes “a draft and the website has been updated, as this is an ongoing process.”
Environmental Science
If you find yourself feeling hopeless whenever you think or read about climate change, don't worry: There's a scientific explanation. It's called climate anxiety, and it's a real mental health condition that can take time to address, according to Portland, Oregon-based environmental psychologist Thomas Doherty. At the Aspen Ideas Festival in Aspen, Colorado, on Monday, Doherty spoke about the "learning curve" it takes to combat the anxiety, or even despair, stemming from climate change. Doherty, who specializes in the intersection of psychology and environmental science, said he often tells clients to try and take a step back from those feelings of hopelessness, which can mean "pulling off of the media, going outside, doing stress reduction, all of these kinds of things." He also noted that part of coping means taking the time to accept that as a single person, you can only do so much. "I think the key in coping is making sure that we don't get stuck on certain feelings, but really growing all of the feelings, which is a process and it takes practice," he said. Both the United Nations and the American Psychological Association (APA) have found that humans are increasingly at risk of climate change-induced mental health issues. According to the APA, climate anxiety can manifest in people who respond to the news of climate change developments with "negative emotions including fear, anger, feelings of powerlessness, or exhaustion." Those feelings aren't uncommon: In 2021, a global study found that 45% of people between the ages of 16 and 25 said climate anxiety was affecting their daily lives. The Climate Psychology Alliance even offers a directory of "climate-aware" therapists. Doherty argued that those negative emotions aren't inherently bad, because "we should be able to feel all of our emotions" in a healthy way. Sometimes, he said, it's helpful to talk through your feelings with other people — whether you're feeling upset about the environment or charged up about your ability to help. "Sometimes, we're going to be ... feeling good and feeling inspired," he said. "One of the biggest dangers is being alone in the process, because when we're alone, there's no one to help us in the down cycle." Doherty also noted that climate anxiety can be heightened by the sheer volume of negative news on the subject. That's where people like Alaina Wood — a sustainability scientist, and one of Doherty's fellow Aspen panelists — come in. Wood is popular on TikTok, where her 300,000-plus followers tune in to hear her list off the latest positive climate developments or outlooks — like scientists recently discovering an enzyme that breaks down environment-damaging plastics in under 24 hours. At Aspen, Wood said she tries to focus on positives to help viewers address their own climate anxiety, and that it'll take optimism to mobilize enough people to make a real difference against climate change. She too struggles with climate anxiety, she said, and usually follows a simple process to cope. "I log off my phone," Wood said. "I take time to go out in nature, whether that's in my backyard or in the mountains, and just unwind. And take a second to remind myself why I care, why I'm feeling that way."
Environmental Science
Children and teenagers can carry out valuable wildlife research—here's how The environment is in crisis. Young people are calling for environmental action and requesting more education about the environment and the climate emergency. They are also looking at what they can do to tackle climate change. Together with colleagues, I have found that children can make a valuable contribution to research about the environment through citizen science projects—where members of the public help scientists with research. In one project, we looked at young people's involvement with iNaturalist, a popular nature app. The young participants, aged between five and 19, used the app independently or during and after attending events organized by the Natural History Museum of London, the Natural History Museum of Los Angeles County and the California Academy of Sciences in San Francisco. The app lets people upload photos of organisms and add information, such as location, date and time. If this information is added, then the photo is considered "verifiable". The verifiable photos that are considered of good quality (as agreed by the iNaturalist community) are labeled as "research grade". The research-grade contributions are shared with the Global Biodiversity Information Facility, which allows open access to data about all types of life on Earth. This data is frequently used by biodiversity researchers. Making a contribution We found that the young people using the app contributed to quality wildlife monitoring at the same rate as adults, with the support of iNaturalist's online community. Their observations were of research grade and therefore potentially valuable for wildlife research. What's more, wildlife can benefit from young volunteers' contributions. We found that young people tend to observe and identify different species than adults. These tend to be smaller species, such as insects, mushrooms and spiders. Although this may be because of young people's personal preferences, it may also depend on their different skills, opportunities and available instruments—for example, whether they have high-resolution cameras. Although people usually take part in citizen science projects because of their broad environmental concerns and concerns for others, they also develop scientific knowledge and skills, such as about the scientific process, and topic-specific knowledge—for example, learning about and collecting data on endangered species—as a result. We contacted teachers who engage their pupils in citizen science projects. We wanted to understand whether citizen science can also help young people engage in environmental science issues. Teachers confirmed the learning benefits for young people, adding that young people also maintain an ongoing engagement with science once they are involved. A secondary school science teacher said, "I think it is just a different way to do science, to help the students to be more engaged in science. Talk to them about real problems they can see in their everyday life, so it is not about the idea, it is just really a different way to teach science in the classroom." 
Another science teacher pointed out the ownership that the young people had over their scientific learning through citizen science: "And then they'll [the students] be the ones who archive that, they'll be the ones that then decide amongst each other which ones to look at in detail and try and work out the sequencing relationship between those samples… so it's all about them making the decisions amongst themselves in that larger [scientific] collaboration." Learning by doing Other research I carried out with colleagues confirmed these findings. This study looked at young people's participation in Zooniverse, a citizen science website. The website invites volunteers to participate in scientific projects by completing tasks. These may include classifying pictures of animals, transcribing historical wildlife data or annotating animals in camera trap images. The young scientists reported that they had gained environmental science learning, such as understanding more about their local wildlife population numbers. They felt they were experts, and were recognized as experts by others. They went on to use their science knowledge elsewhere: writing a blog to share their knowledge or going on to study life sciences at university. However, young people who had an existing interest and experience with science activities were more likely to learn about the environment on Zooniverse. Also, young participants who took part in reflection and discussion activities were more likely to show higher levels of environmental learning compared with those who just completed the tasks. Teachers said that they could better help young people successfully participate in environmental research if they have access to the right resources. This includes guidelines and support on how to implement citizen science activities in their classrooms. This support can be in the form of professional development programs, or tools, such as the nQuire platform, which guides and enables participation in different stages of scientific research. Young people can help to advance wildlife research and in doing so, help the environment. But to do so, they need to be empowered with the proper knowledge and skills through well-designed educational programs, and receive the right support from the research community. Provided by The Conversation
Environmental Science
COVID-19 shutdown highlights air quality policy challenges In a study conducted in NYC during the COVID-19 pandemic, researchers found that air quality policies focusing on reducing pollutant emissions from the transportation sector have made significant strides. However, the study also revealed some concerning trends of human activity that largely undermine these efforts and demonstrate the need for more comprehensive guidelines. The research, led by Drew Gentner and colleagues at Columbia University and Stony Brook University, continuously monitored major volatile organic compounds (VOCs) in Manhattan from January to April in both 2020 and 2021. The data showed that the pandemic-induced restrictions resulted in a substantial reduction in human activity by 60%–90%. As a result, concentrations of many VOCs significantly decreased, leading to a temporary ~28% reduction in chemical reactivity. The findings are published in the journal Environmental Science & Technology. Despite these positive effects, the researchers also discovered that the unusually warmer temperatures in the spring of 2021 led to higher emissions of VOCs. This increase in temperature-dependent emissions largely canceled out the gains achieved during the pandemic, ultimately negating some of the benefits of the previous year's efforts. The research highlights the limitations of relying solely on transportation-focused policies to address air pollution issues. While such measures can have a positive impact, they are not sufficient to combat the broader challenges posed by a warming climate. More information: Cong Cao et al, Policy-Related Gains in Urban Air Quality May Be Offset by Increased Emissions in a Warming Climate, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.2c05904 Journal information: Environmental Science & Technology Provided by Yale University
Environmental Science
Poor UK households more vulnerable to climate shocks, says new research Poorer households in the UK are more vulnerable to climate change because temperature shocks are associated with deepening wealth inequality, according to new research published today in the journal Environmental Science and Pollution Research. Researchers from Anglia Ruskin University (ARU) and institutions in South Africa and Germany examined wealth inequality data against climate data from the period 2006-2018 to ascertain the impact of both temperature changes and climate shocks, such as extreme heat, on wealth inequality in the UK. The study used wealth inequality data from the Office for National Statistics, while monthly temperature data was collected from the Met Office, using year-on-year temperature growth and its volatility shocks as measures of climate risk. The study found that both temperature growth and extreme temperature events have positive and statistically significant effects on all measures of wealth inequality in the longer term. This was particularly pronounced when comparing the 10% wealthiest households with the 10% poorest. It also found that climate risk shocks harm the poorest the most relative to the richest households, exacerbating long-term wealth inequality. Researchers believe this could be because of several factors, including the impact on health of poorer people due to increased air pollution and the effect on food supply chains. The study also found that, following climate shocks, wealth inequality increases significantly between the 20% and 10% wealthiest households and households of median wealth, while inequality between the average household and the poorest is reduced. Lead author Dr. Xin Sheng, Associate Professor in the Faculty of Business and Law at Anglia Ruskin University (ARU), said, "Even within higher-income countries, poorer people can also be more vulnerable to the impact of climate change. For example, the UK recorded the highest number of heatwave deaths in 2020, and also has among the highest levels of income inequality in relation to other developed countries in Europe." "The findings highlight the disproportionate increased burden of climate change on households that are already experiencing poverty, particularly households in high-climate risk areas. For example, climate change can pose a serious health threat through food insecurity and increased toxic air pollution. Extreme weather can affect crop production, which can push up food and fuel costs." "As such, measures to mitigate the adverse effects of climate change need to be tailored so as not to overburden the poor." More information: Xin Sheng et al, Climate shocks and wealth inequality in the UK: evidence from monthly data, Environmental Science and Pollution Research (2023). DOI: 10.1007/s11356-023-27342-1 Journal information: Environmental Science and Pollution Research Provided by Anglia Ruskin University
Environmental Science
Water in California's Central Valley contains enough manganese to cause cognitive disabilities and motor control issues in children, and Parkinson's-like symptoms in adults. A naturally occurring metal, manganese is found in water supplies throughout the world. It is regulated as a primary contaminant in many Southeast Asian countries where the climate causes it to leach into groundwater. However, in the U.S. it is regulated only as a secondary contaminant, meaning no maximum level is enforced. A new UC Riverside-led study shows that, among Central Valley communities, the highest concentrations of manganese are in private, untreated well water systems. However, the researchers also found it in public water systems at higher concentrations than what studies have shown can have adverse health effects. The study, published in the journal Environmental Science & Technology, not only measured levels of manganese in Central Valley water supplies, but also mapped the highest concentration areas according to annual income levels. Overall, the research team estimates nearly half of all domestic well water users in the Central Valley live in disadvantaged communities, as defined by annual income. Within this population, nearly 89% have a high likelihood of accessing water that is highly contaminated with manganese. "It is a relatively small number of people, compared to the total population of the state, who are getting the tainted water. But for them, the health risks are high," said Samantha Ying, UCR soil scientist and principal study investigator. "These people are particularly concentrated in disadvantaged communities, so if they wanted to monitor and treat the water, they would have a hard time doing so," Ying said. Point-of-use treatment options range from oxidation and precipitation filters to water softeners, chlorination, and reverse osmosis systems. But devices for monitoring water quality can cost up to $400 annually, and treatments for manganese-tainted water are just as expensive. "It is possible to purchase filters for manganese, but a lot of people cannot afford them. We are hoping people in these communities can be subsidized to buy treatment options," Ying said. Previously, the research team found that manganese-contaminated groundwater tends to occur in relatively shallow depths, compared to arsenic. They wondered if digging deeper wells would avoid the manganese contamination. Unfortunately, that strategy is unlikely to be effective. "Using existing groundwater model predictions of manganese concentrations at deeper depths did not change the number of wells likely to be contaminated," Ying said. The conditions that cause arsenic and manganese to leach are similar, so they tend to show up in groundwater in tandem. Arsenic has long been regulated as a primary contaminant in the U.S. "Wells are labeled unsafe if they contain arsenic, but not if they contain manganese," Ying said. "Thus, the number of wells considered safe may be greatly overestimated." Furthermore, the researchers used a benchmark of 300 parts per billion of manganese to assess water quality. This is a level of manganese contamination that some studies have associated with neurological development issues, particularly for fetuses and infants during early growth periods. It is likely though that adverse effects can occur at lower levels. "New studies from Canada, where manganese is now a primary contaminant, show there may be effects at 100 parts per billion," Ying said. "We were being conservative at 300." 
This study focused on the Central Valley in part because the conditions that cause manganese to move from aquifer materials into water are so prevalent there. It is likely that drinking water from wells in other parts of the state is similarly tainted. Over 1.3 million Californians rely on unmonitored private wells. "The population being exposed is much bigger than we might think. There are a lot of communities statewide drinking from private wells," Ying said.
Environmental Science
Significant greenhouse gas (GHG) emissions will result from constructing, maintaining and rehabilitating the world road network using current practices. In this article, we will demonstrate the potential to significantly reduce road construction industry GHG emissions by pivoting to in-situ stabilisation / cold-recycling using low-carbon binders and Renolith 2.0 nanopolymer admixture. Road construction has a large carbon footprint Roads play a key role in movements of goods and people but require large amounts of materials that emit greenhouse gases when produced. We estimate road material stock to be 254 Gt. If we were to build these roads anew, raw material production would emit 8.4 GtCO2-eq. Material Stock and Embodied Greenhouse Gas Emissions of Global and Urban Road Pavement, Environmental Science & Technology 2022 56 (24), 18050-18059, DOI: 10.1021/acs.est.2c05255 The road network is growing… It is expected that the world will need to add nearly 25 million paved road lane‐kilometres (km) and 335 000 rail track kilometres (track‐km), or a 60% increase over 2010 combined road and rail network length by 2050. In addition, it is expected that between 45 000 square kilometres (km²) and 77 000 km² of new parking spaces will be added to accommodate passenger vehicle stock growth. Dulac, J., 2013. Global land transport infrastructure requirements. Paris: International Energy Agency, 20, p.2014. Roads require maintenance and refurbishment Asian countries tend to adopt a policy promoting short design life (about 10 years) to save on construction costs and because of the uncertainty connected with predicting long-term traffic volumes. However, initial cost savings under this strategy are often offset by mid-term and overall life-cycle costs required for maintenance and rehabilitation, which may result in increased GHG emissions. Though vehicle overloading is a major issue in Asia, it has rarely been taken into account at the design stage. This has commonly resulted in premature end of pavement life. Overloaded vehicles adversely and significantly affect GHG emissions, not only because they decrease road serviceability life, but also because of resulting increases in maintenance costs, vehicle operating costs, and road safety risks. Transport – Greenhouse gas emissions mitigation in road construction and rehabilitation: A toolkit for developing countries (English). Washington, D.C.: World Bank Group. p.2012. (Hereafter: GHG emissions mitigation toolkit) A life cycle assessment (LCA) is a methodology for detailing resource flows and associated environmental impacts. LCA is often used to assess the cradle-to-grave climate impact of a product or service, such as transport infrastructure. Various LCA software tools exist to estimate the GHG emissions from road projects. LCA requires project-specific data to yield meaningful results, but in general there are several levers for minimising total GHG emissions from road construction. For our model, we will use a simple method to estimate the GHG emissions for an unbound granular pavement vs a bound pavement incorporating Renolith admixture. GHG emissions during construction are estimated using a simplified approach based on the model and data from the GHG emissions mitigation toolkit. Only materials and transport emissions are considered. 
For expressways and national roads, GHG emissions from the fabrication and extraction of construction materials are the main contributor, at about 90 percent of total emissions; they are less important for provincial and rural roads, at about 80 percent. Transport of materials represents about 30 percent of the GHG emissions of a road project. Of that amount, about 50 percent is related to local (less than 25 km) transport. Extraction and material transport are therefore the main activities that must be considered to significantly improve the GHG impact of a road construction project. To build one kilometre of two-lane highway requires about 14,000 tonnes (or 400 truckloads) of construction aggregates. A typical two-lane bitumen road with an aggregate base can require up to 25 000 tonnes of material per kilometre. SBEnrc, “Reducing the environmental impact of road construction,” Sustainable Built Environment National Research Centre (SBEnrc), Perth WA, 2013 For our model, we will assume a requirement for 7000 tonnes of aggregate per lane km (excluding asphalt layer) and an emissions intensity of 8 kg CO₂e / tonne. Transport of aggregate materials is assumed to be 30 percent of total GHG emissions. In the Brenner Autobahn rehabilitation project, cement binder was applied at 25 kg/m² (88 tonnes per lane km) – approximately 4% wt of pavement at 30 cm depth. The project used cement CEM 32.5, which is similar to GP cement per AS 3972, with an estimated emissions intensity of 670 kg CO₂e / tonne. The transport component was assumed to be via heavy truck (diesel) with an emissions intensity of 1.58 kg CO₂e / veh.km. Cement has a high embodied energy and carbon footprint. Accordingly, there is much interest in substituting lower-carbon binders for cement. Austroads AGPR4L-09, Guide to Pavement Technology Part 4L: Stabilising Binders, Ed1.1 Table 5.2 lists the typical range of commercial cementitious binder pozzolan blends available for pavement construction in Australia. These blends contain recycled supplementary cementitious materials (SCMs) (slag and/or fly ash), with a correspondingly lower emissions intensity. Renolith is compatible with all blends listed in Austroads AGPR4L-09. Therefore, a third design approach is considered, which assumes the same parameters as the Brenner case study but substitutes a lower-emissions binder for cement. The binder blend is assumed to have an emissions intensity of 335 kg CO₂e / tonne, which is half that of ordinary portland cement (OPC). Note that Austroads AGPR4L-09 lists dozens of blends with recycled SCM content ranging from 10% to 80%. In general, the higher the recycled SCM content, the lower the emissions of the blend. However, the performance of blends with very high SCM content may diverge from low to mid-range blends. The chart below shows the estimated emissions from the materials and transport component of construction for the three designs. A significant reduction in GHG emissions is achieved using a mid-range cement/SCM blend binder in combination with Renolith. Potentially (subject to testing), emissions could be reduced even further via use of a very low emissions cementitious binder, such as a high-blend cement with reduced clinker content. Considerations (italicised) in the table below are derived from the GHG emissions mitigation toolkit. The associated advantages of a Renolith pavement are listed. 
Cold recycling with cementitious binder and Renolith produces a stiff bound pavement layer at a much lower cost than conventional unbound granular construction methods. The improved economics of construction can reduce the pressure on the project to design for short life. The fatigue resistance of cementitiously-bound layers is very sensitive to the thickness of the course and the stiffness of the layer. As an approximation, a 10% reduction in either thickness or stiffness of a cementitiously-bound layer could lead to about a 90% reduction in fatigue life. AustStab TN05, “AustStab Technical Note No.5, Cement Stabilisation Practice,” AustStab, Sydney, 2012. Consider the inverse of this heuristic: an 11% increase in thickness, at less than 11% additional construction cost, can yield a massive 900% increase in fatigue life. The whole-of-life emissions advantages of Renolith pavements are clearly substantial, but difficult to quantify. The Brenner Autobahn case study suggests that a long pavement life can be achieved despite difficult environmental conditions (lateral water afflux, frost) and high traffic loading (>2,000,000 heavy goods vehicles per annum). By contrast, the 2022 floods in Australia highlighted the susceptibility of conventional pavement designs to water damage, which caused $3.8B in damage to roads. As the pace and intensity of extreme weather events increase, water-damaged roads will need repairs and reconstruction, with associated GHG emissions from this activity. This perpetual problem could be substantially mitigated with Renolith, since the bound composite is essentially impermeable. There is potential to significantly reduce road construction industry GHG emissions by pivoting to in-situ stabilisation / cold-recycling using low-carbon binders and Renolith 2.0 nanopolymer admixture as the primary pavement construction method. This potential arises from attractive economics, a low carbon footprint at construction, plus high pavement design life and durability.
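As a rough, self-contained illustration of the three-design comparison described above, the short Python sketch below recomputes per-lane-km materials-and-transport emissions from the figures quoted in this article. It is a back-of-envelope check, not the toolkit's model or an LCA tool: the assumption that the stabilised designs reuse in-situ material and import only the binder, and the truck payload and haul distance used for binder delivery, are placeholders introduced here and are not stated in the source.

# Back-of-envelope emissions comparison per lane-km for the three pavement designs.
# Figures marked "article" are quoted above; "placeholder" values are illustrative only.

AGG_TONNES = 7000          # article: aggregate per lane-km, unbound granular design
AGG_EF = 8.0               # article: kg CO2e per tonne of aggregate
TRANSPORT_SHARE = 0.30     # article: transport assumed ~30% of total (granular design)

BINDER_TONNES = 88         # article: binder per lane-km (Brenner case study, 25 kg/m2)
CEMENT_EF = 670.0          # article: kg CO2e per tonne, GP cement (CEM 32.5)
BLEND_EF = 335.0           # article: kg CO2e per tonne, mid-range cement/SCM blend
TRUCK_EF = 1.58            # article: kg CO2e per vehicle-km, heavy diesel truck
TRUCK_PAYLOAD_T = 30       # placeholder: assumed tonnes of binder per truckload
HAUL_KM = 100              # placeholder: assumed round-trip haul per load

def granular() -> float:
    # Gross up material emissions so transport makes up ~30% of the total.
    materials = AGG_TONNES * AGG_EF
    return materials / (1.0 - TRANSPORT_SHARE)

def stabilised(binder_ef: float) -> float:
    # Assumes existing pavement material is recycled in place, so only binder is hauled in.
    materials = BINDER_TONNES * binder_ef
    transport = (BINDER_TONNES / TRUCK_PAYLOAD_T) * HAUL_KM * TRUCK_EF
    return materials + transport

for label, kg in [
    ("Unbound granular", granular()),
    ("Cold recycling, GP cement + Renolith", stabilised(CEMENT_EF)),
    ("Cold recycling, cement/SCM blend + Renolith", stabilised(BLEND_EF)),
]:
    print(f"{label:45s} ~{kg / 1000:5.1f} t CO2e per lane-km")

Under these assumptions the sketch prints roughly 80 t CO2e per lane-km for the unbound granular design, about 59 t for the GP-cement design and about 30 t for the cement/SCM blend, which is consistent with the significant reduction described for the blended-binder option.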
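The fatigue heuristic quoted above can also be sanity-checked numerically. If fatigue life is treated as a power function of layer thickness (an assumption made here purely for the arithmetic, not a statement of the AustStab or Austroads design equations), the exponent implied by "10% thinner gives about 90% shorter life" also reproduces the quoted 900% gain for an 11% thicker layer:

# Numerical check of the thickness-fatigue heuristic, assuming life ~ thickness**k.
import math

k = math.log(0.1) / math.log(0.9)   # exponent implied by the 10% / 90% rule (~21.9)
gain = (1 / 0.9) ** k               # life multiplier for a layer ~11% thicker
print(f"implied exponent k ~ {k:.1f}; life multiplier for an 11% thicker layer ~ {gain:.1f}x")
# prints roughly: implied exponent k ~ 21.9; life multiplier ... ~ 10.0x (i.e. +900%)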
Environmental Science
Modeling study shows plastic can drift far away from its starting point as it sinks into the sea Discarded or drifting in the ocean, plastic debris can accumulate on the water's surface, forming floating islands of garbage. Although it's harder to spot, researchers suspect a significant amount also sinks. In a new study in Environmental Science & Technology, one team used computer modeling to study how far bits of lightweight plastic travel when falling into the Mediterranean Sea. Their results suggest these particles can drift farther underwater than previously thought. From old shopping bags to water bottles, plastic pollution is besieging the oceans. Not only is this debris unsightly, but animals can become trapped in it or mistakenly eat it. And if it remains in the water, plastic waste can release organic pollutants. The problem is most visible on the surface, where currents can aggregate this debris into massive, so-called garbage patches. However, plastic waste also collects much deeper. Even material that weighs less than water can sink as algae and other organisms glom onto it, and through other processes. Bits of this light plastic, which typically measure 5 millimeters or less, have turned up at least half a mile below the surface. Researchers don't know much about what happens when plastic sinks, but they generally assume it falls straight down from the surface. However, Alberto Baudena and his colleagues suspected this light plastic might not follow such a direct route. To test this assumption, they used an advanced computer model developed to track plastic at sea and incorporated extensive data already collected on floating plastic pollution in the Mediterranean Sea. They then simulated nearly 7.7 million bits of plastic distributed across the sea and tracked their virtual paths to depths as great as about half a mile. Their results suggested that the slower the pieces sank, the farther currents carried them from their points of origin, with the slowest traveling an average of roughly 175 miles laterally. While observations of the distribution of plastic underwater are limited, the team found their simulations agree with those available in the Mediterranean. Their simulations also suggested that currents may push plastic toward coastal areas and that only about 20% of pollution near coasts originates from the nearest country. These particles' long journeys mean this plastic has greater potential to interact with, and harm, marine life, according to the researchers. More information: Alberto Baudena et al, Low-Density Plastic Debris Dispersion beneath the Mediterranean Sea Surface, Environmental Science & Technology (2023). DOI: 10.1021/acs.est.2c08873 Journal information: Environmental Science & Technology Provided by American Chemical Society
Environmental Science
Toxic diets: Canadian orcas face high risks of pollution-related health effects Killer whales, also called orcas, are known for their intelligence and striking presence. They are also enduring a silent but persistent threat beneath the surface of our oceans. My research investigates killer whales and their diets in the North Atlantic. Previous studies have focused on killer whales in the Pacific Ocean. But until now, no data existed for our killer whales in the North Atlantic, including those in Eastern Canada and the Canadian Arctic. With other international researchers, I recently published a study in Environmental Science & Technology that reveals a troubling reality: these apex predators are carrying high levels of persistent organic pollutants (POPs) in their blubber. The accumulation of these synthetic contaminants is also creating health risks for the killer whales. Forever chemicals POPs are also known as "forever chemicals" due to their remarkable stability and long-lasting nature. This group includes well-known compounds like polychlorinated biphenyls (PCBs), chlorinated pesticides like dichlorodiphenyltrichloroethane (DDT) and brominated flame retardants. In the last century, these chemicals were mass produced and used in a wide range of applications, such as industrial processes or agriculture. But research conducted in Sweden in the late 1960s revealed that these chemicals accumulate in living organisms and persist in the environment. The chemicals bind to fats and increase in concentration as they move up the food web, impacting dolphins and whales the most. These animals, being top predators, accumulate the largest concentrations and struggle to eliminate these chemicals. This buildup of contaminants through their diets—known as biomagnification—is especially concerning for marine mammals, as they need ample fat for warmth and energy. A gradient of contamination Our study, focusing on 160 killer whales, reveals a concerning pattern of PCB contamination across the North Atlantic. The concentrations vary significantly across the North Atlantic, ranging from a staggering 100 mg/kg in the Western North Atlantic, to around 50 mg/kg in the mid-North Atlantic. Intriguingly, killer whales in the Eastern North Atlantic carry lower PCB levels at roughly 10 mg/kg in Norway. For context, PCB-related immune effects start at 10 mg/kg, while reproductive failure was observed at 41 mg/kg in marine mammals. Killer whales in Eastern Canada and the Canadian Arctic have PCB levels exceeding twice the threshold linked to reproductive problems in marine mammals. You are what you eat Diet plays a pivotal role in this pattern of contamination. Killer whales that primarily feed on fish tend to have lower contaminant levels. On the other hand, those with diets focused on marine mammals, particularly seals and toothed whales, show higher levels of contaminants. Killer whales with mixed diets—containing both fish and marine mammals—tend to display elevated contaminant levels, particularly in Iceland. Our research investigates the potential impact of diet preferences on killer whale health. Risk assessments suggest that killer whales in the Western North Atlantic, and specific areas of the Eastern North Atlantic where they have mixed diets, face higher risks, directly linked to what they eat. Among the emerging contaminants, hexabromocyclododecane (HBCDD), a flame retardant, is of particular concern. 
Concentrations of HBCDD in North Atlantic killer whales are among the highest measured in any marine mammals, surpassing levels found in their North Pacific counterparts. Disappearing sea ice This reveals the fascinating complexity of killer whale ecology and underscores how their dietary choices significantly impact their exposure to environmental pollutants. It also raises some concern for "Arctic-invading" killer whales that progressively move north due to climate change. Killer whales' large dorsal fin has traditionally prevented them from navigating dense sea ice. But the melting of sea ice has allowed killer whales to access a new habitat with new prey species. There, researchers believe that they will hunt more and more marine mammals, like ringed seals, narwhals and belugas. These dietary shifts, influenced by our changing environment, may result in heightened health risks for apex predators. Maternal transfer means females are less contaminated The study also spotlights a sex difference in contaminant concentrations. Male killer whales appear to be more contaminated than their female counterparts, thanks to the transfer of contaminants from adult females to their offspring during gestation and lactation. Killer whale mothers use their own energy to produce fatty milk for their calves, helping them grow quickly and stay healthy. This nutritious milk comes from the mother's blubber, where contaminants are stored. As she feeds her young ones, she may pass on as much as 70 percent of these stored contaminants. Urgent action In response to these findings, urgent action is needed to protect North Atlantic killer whales and their ecosystems. The 2001 United Nations treaty's objective to phase out and destroy PCBs by 2028 is slipping out of reach. Substantial quantities of PCB-contaminated waste are stored in deteriorating warehouses, risking contaminants ending up in the environment, and further affecting our ecosystems. To compound the issue, as one chemical gets banned, another often emerges, with enough variations to avoid previous regulations, perpetuating a harmful cycle. To effectively tackle the issue of contaminant accumulation in killer whales, the following actions are necessary: - Urgent steps are needed for the proper disposal of PCB-contaminated waste, with an emphasis on international collaboration to support nations lacking the infrastructure for waste management. - It is crucial to prevent the release of potentially more harmful contaminants into the environment by improving toxicity testing of chemicals before they enter the market. - Collaboration among ecotoxicologists, conservation biologists, policymakers and other stakeholders is essential. Effective strategies to mitigate pollution's adverse effects can only be developed through collective efforts. - Targeted conservation efforts should be directed toward populations at higher risk, such as killer whales in the Eastern Canadian Arctic, and Eastern Canada. Chemical pollution has been identified as one of the nine global threats to wildlife, as well as human health in modern times. It is time to give our planet—and killer whales—the relief they need by reducing existing contaminants through concrete actions. Journal information: Environmental Science & Technology Provided by The Conversation
Environmental Science
A super tanker anchored off Yemen's coast and containing more than a million barrels of oil is "likely to sink or explode at any moment", unleashing an environmental and humanitarian disaster, the United Nations humanitarian coordinator for Yemen has told Sky News. The FSO Safer was all but abandoned in 2015 as Yemen descended into civil war and now the ship is starting to fall apart. Humanitarian coordinator David Gressly said: "We don't want the Red Sea to become the Black Sea, that's what's going to happen. "It's an ancient vessel, a 1976 super tanker from that era, and therefore is not only old but unmaintained and likely to sink or explode at any moment. "Those who know the vessel, including the captain that used to command the vessel, tell me that it's a certainty. "It's not a question of 'if', it is only a question of 'when', so it is important that we act as quickly as we can or it will eventually spill one million barrels of oil into the Red Sea. "We really have no way out except to solve the problem." According to recent modelling published in the scientific journal Nature Sustainability, an oil spill would take two to three weeks to spread all the way up to Saudi Arabia, across to Eritrea and down to Djibouti. Within days it would close Yemen's key Red Sea ports of Hodeidah and Salif, abruptly ending food aid relied upon by nearly six million people. Most fuel imports would stop too, which matters because eight million people in Yemen rely on fuel-powered pumps or trucks to get their fresh water. Further up the coast of Yemen an estimated two million people rely on desalination plants for their water, but these plants would also be contaminated by the oil spill and have to close. The environmental effects would be profound, destroying or damaging healthy coral reefs and protected coastal mangrove forests. The modelling predicts that within three weeks an unabated oil spill could kill almost all of Yemen's Red Sea fishing stock, upending the lives of millions of people living in coastal communities who rely on the ocean for their food and livelihoods. Dr Hisham Nagi, professor of environmental science at Yemen's Sana'a University, told Sky News: "The oil tanker is unfortunately located near a very, very healthy coral reef and clean habitat, and it has a lot of species of marine organisms. "Biodiversity is high in that area, so if the oil spill finds its way to the water column, so many marine sensitive habitats are going to be damaged, damaged severely because of that." The United Nations is so desperate to stop the oil spilling that it has just crowdfunded the purchase of a rescue tanker to go on a salvage operation. But despite the potential $20bn (£16bn) cost of a clean-up, the UN is still $34m (£28m) short of the $130m (£106m) in funding it needs to complete the job. The US, UK, German and Dutch governments have all contributed alongside generous private donors, but it is not enough. Mr Gressly said: "There are many complexities but for most member states, the difficulty - and it's ironic - is there is plenty of money available in different member state budgets for a response to an emergency. "I know if there was an oil spill, there would be tens of millions of dollars pouring in to solve this spill. "But nobody seems to have budget lines for avoiding a catastrophe."
Environmental Science
Vinyl chloride entered the spotlight after the Feb. 3 train derailment in East Palestine, Ohio. But the chemical has been around for decades and is everywhere — from buildings and vehicle upholstery to children's toys and kitchen supplies — and factories have been emitting the EPA-designated toxic chemical into the air for years. The train that derailed had the manmade and volatile compound on board, prompting temporary evacuations amid concerns it could quickly impact people in the area. Then when officials decided to burn it, there were also concerns it could release phosgene, a gas that can be highly lethal and was used as a chemical weapon in WWI. But the derailment isn't the first time vinyl chloride has alarmed experts. They've been concerned about its potential impacts for decades. On Jan. 2, the U.S. Department of Health and Human Services published a draft toxicological profile for the substance. In it, experts say that the volatile compound, "used almost exclusively by the plastics industry," has "leached into groundwater from spills, landfills, and industrial sources," and that people who live around plastic manufacturing facilities "may be exposed to vinyl chloride by inhalation of contaminated air." Ohio Gov. Mike DeWine said Friday that a medical clinic is being set up for residents in East Palestine, with "national experts on the impacts of chemical exposure," though officials have repeatedly said air and water testing shows conditions are safe. "This disaster is really a wakeup call," Jimena Díaz Leiva, the science director for nonprofit Center for Environmental Health, told CBS News. "...There needs to be a lot more regulatory oversight and action to address not just the safety and the actual transport around these chemicals, but also just stemming our production of all these chemicals." Díaz Leiva also said its risk has been underestimated — both in terms of its potential toxins and the greenhouse gas emissions involved in its production. And in the U.S., there are dozens of places where such exposure is possible. The base for "poison plastic" Vinyl chloride is the "essential building block of PVC plastic," Díaz Leiva said. "It's an incredibly dirty process that emits a lot of chemicals and uses a lot of chemicals in the manufacturing process, resulting in a lot of worker exposures and also exposures of people in frontline and fenceline communities," Díaz Leiva, who got her Ph.D. in environmental science, policy and management, said. "...PVC is called the poison plastic." The CEH published a report on polyvinyl chloride (PVC), a type of plastic used in pipes, buildings, packaging film, flooring and more, in 2018, saying, "the bottom line is there is no way to safely manufacture, use, or dispose of PVC products." The problem begins at vinyl chloride's origins. It's generated from ethane, which is obtained through fracking natural gas, a process that has grown significantly since 2013 and that emits the greenhouse gas methane — a major driver of climate change. PVC, according to a 2020 study, has a "higher potential in global warming than other plastics" due to its high energy consumption and CO2 emissions. The U.S. Energy Information Administration said ethane production hit a monthly record last year of more than 2.4 million barrels per day. The agency expects production will hit 2.7 million barrels per day this year, as the global PVC market is expected to become a $56.1 billion industry within the next 3 years. A 2022 study also found that U.S. PVC production emitted roughly 18 million metric tons of CO2 in 2020. 
According to the EPA's Toxics Release Inventory (TRI), which "tracks the management of certain toxic chemicals that may pose a threat to human health and the environment," there are 38 TRI facilities in 15 states — mostly around the Gulf of Mexico and the eastern U.S. — that use vinyl chloride, emitting about half a million pounds of the substance every year. In 2021, there were 428,523 pounds of the substance released, according to the EPA. As of 2021, vinyl chloride ranks as one of the most released chemicals in the U.S. Out of 531 chemicals reported to the agency, the substance ranks 117th, with a rank of one representing the highest releases. Essentially all of those emissions came from the chemical industry in 2021 and were released into the air, and just five facilities made up more than half of those releases. The top emitter, Formosa Plastics Corp. Texas, sits along a bay leading into the Gulf of Mexico. They released more than 68,000 pounds of vinyl chloride into the air that year. These numbers, however, may be lower than the true figures because not all facilities using the chemical compound are required to report to the EPA. "An underestimated risk" The emissions are known to have contributed to health issues in nearby communities. Mossville, Louisiana, a tiny town just west of Lake Charles that was founded by people who were formerly enslaved, has been historically plagued by manufacturing pollution. The area is surrounded by more than a dozen industrial facilities, including at least one working with vinyl chloride that has a history of violations and scores far above national and industry levels for factors contributing to health issues. In 2021, the site was fined more than $447,000 for violations, including failure to ensure performance, management safety, mechanical integrity and record keeping, among other things. The area is part of what's known as "." "It's a predominantly Black and Brown community. And a lot of the plastics manufacturing companies that are around there, these are the ones that are producing the same precursors that are getting us to PVC plastic and other types of plastics," Díaz Leiva said. Dr. Juliane Beier, assistant professor of medicine at Pittsburgh Liver Research Center and an expert who contributed to the DHHS report, told CBS News those most at-risk are occupational workers. But those in areas near PVC-producing factories could also face exposure. How much vinyl chloride people can be subjected to before suffering health effects is still being researched, and different agencies have set different limits and recommendations. The Occupational Safety and Health Administration, for example, says that workers should not be exposed to more than 1 ppm of vinyl chloride over an 8-hour period, or more than 5 ppm averaged over any period less than 15 minutes. The Agency for Toxic Substances and Disease Registry, however, sets its minimal risk levels — the estimate of how much someone can be exposed to without a noticeable health impact — much lower. Those who have been exposed for 14 days or less have an MRL of 0.5 ppm for inhalation, while those who have been exposed for 15 to 364 days have an MRL of 0.02 ppm. Once in the air outside, vinyl chloride dissipates within a few days, so emissions from PVC production don't necessarily pose a long-term or widespread impact. However, the ATSDR says areas near vinyl chloride manufacturing and processing plants, as well as waste sites and landfills, have seen a wide range of vinyl chloride concentrations. 
It's usually a range from "trace amounts to over 1 ppm," the agency says, but levels have gotten as high as 44 ppm around landfills. Beier is currently researching exposure limits and the impact on livers, and told CBS News that at 0.08 ppm — which is less than the max threshold considered "safe" by OSHA standards — vinyl chloride could still impact health. The concentration at which it can impact health is also far lower than when it's immediately detectable. The substance's odor threshold — the concentration when most people can smell it — is 3,000 ppm in the air, according to the ATSDR. "We have shown experimentally — this is not in humans — that these lower concentrations will enhance liver disease that is either pre-existing or caused by other factors," she said. "And so that is one of my concerns ... are there residents that have underlying liver disease?" When asked if she thought there should be more concern about the hazards of vinyl chloride, Beier issued a swift response: "Yes." "We need to raise awareness that low levels of vinyl chloride that are currently considered safe may enhance underlying disease, this may be liver disease, but maybe also other disease," she said. "...But this is, I think, an underestimated risk." "The whole vinyl chloride story is absolutely, absolutely under-studied and definitely needs more investigation," Beier said.
Environmental Science
[1/3] FILE PHOTO: Noah Verin, the owner of Sydney farm Urban Green, holds a tray of herbs and micro-greens that he grows with his team in a basement carpark in the central business district, in Sydney, Australia, October 27, 2022. REUTERS/Stefica Nicol Bikes SYDNEY, Nov 4 (Reuters) - A former chef turned farmer has begun to supply Sydney restaurants with sustainable herbs and microgreens grown in a carpark beneath the city's harbourside business district. Noah Verin set up his Urban Green business in Sydney's Barangaroo district in early 2020 with around 40 different plant species growing side by side. Now, he is riding an industry push to make sustainability a top menu ingredient. "I always knew that when people heard the story of the fact there's a farm in a basement in Barangaroo growing food... I knew it would leave an impact," Verin, who also holds an environmental science degree, told Reuters. While vertical farms have been seen as a potential answer to the food crisis, Verin said now the conversation has shifted to how those same farms can also be sustainable. "There's no point in setting up a farm to help solve these problems if we are not also creating sustainable farms," he said, ahead of the 2022 United Nations Climate Change Conference, COP27, to be held from Nov. 6 in Sharm El Sheikh, Egypt. Verin is pushing to make his business fully sustainable and aims to make Urban Green carbon neutral by 2026. So far, he has halved his power usage from LED lights, while the fibre he grows plants in - coconut coir - is a byproduct from the coconut industry. He is shifting towards e-bike delivery and fully biodegradable plant pots so the business can be plastic free. "Noah's product comes in still alive, still in its pot and also he doesn't use a lot of plastics or any throw away products like that so it's all very sustainable which I like," head chef Logan Campbell of Sydney restaurant Botswana Butchery told Reuters. Verin aims to one day open car park farms for products such as chilis and strawberries and more car park micro green and herb farms. "We want to derive a minimum 50 percent of our deliveries within a one kilometre radius of the farm because that's a major advantage... we are surrounded by hundreds and hundreds of food service restaurants," he said. Reporting by Stefica Nicol Bikes; Editing by Melanie Burton and Tomasz Janowski
Environmental Science
A nature reserve in the Peak District, the site of an historic mass trespass that paved the way for countryside access rights, has been extended by more than 500 acres. Kinder Scout, a National Nature Reserve (NNR) at the highest point in the Peak District, is now 26% larger after Natural England added an extra 226 hectares (558 acres) to its boundary. The National Trust, which manages the site, said the extension recognises the "scientific research this area provides to help tackle the climate and nature emergencies", which includes testing how peatland can soak up climate-heating carbon dioxide, and how nature can buffer flooding. Natural England's chief operating officer, Oliver Harmar, said NNRs were established to protect important habitats, species and geology, "to provide 'outdoor laboratories' for environmental science and opportunities for people to enjoy nature". Image: Bare peat on the control site of the Kinder National Nature Reserve, used for scientific research. Pic: National Trust The site, now the equivalent size of 1,000 international rugby pitches, covers the highest point in the Peak District, 636m (2,087ft). In 1932 a group of young workers staged a mass trespass in protest of the fact that much of the moorland was out of bounds for ordinary walkers. Many were arrested and some sentenced to prison, but the following public uproar triggered the creation of open access to moorland and of National Parks. The protest "[paved] the way for millions of visitors to be able to escape city living and pollution to enjoy some of our most inspiring landscapes and connect with nature", said Craig Best, general manager for the Peak District at the National Trust. When the trust started caring for Kinder Scout in 1982, the mountain was a "barren moonscape of bare peat, degraded by human activity over the centuries due to pollution, historical land management practices, high visitor numbers and climate change", said Mr Best. Today the nature reserve is being "transformed into a plateau of healthy peat bogs rich in vegetation", providing habitats for mountain hares and upland birds, he added. Covering the bare peat with rich moorland vegetation and adding means to keep the moors wetter has reduced the erosion of peat by 98% within 18 months, according to monitoring data. Researchers compared restored, healthy peatland with an unrestored, degraded "control" plot. The restoration has increased the amount of carbon dioxide stored in the peat and improved water quality as it filters through the vegetation before reaching streams and reservoirs, the National Trust said. Image: Before and after photos from 2010 and 2021 show how the area has changed under conservation. Pic: Prof. Tim Allott, Manchester University Professor Tim Allott, from Manchester University, said without the control area they "would not have been able to properly assess the impact of the restoration work in slowing the flow of water on hillsides and reducing flood risk downstream". It also provides a "museum of the past damage on Kinder Scout", he said. "By simply standing within this small remaining 'island' of bare peatland, you get a dramatic sense of the scale of transformation of this iconic landscape by looking across the newly restored, vibrant, and diverse habitat which surrounds it."
Environmental Science
Concentrations of potentially harmful chemical contaminants on the International Space Station (ISS) could exceed those found in dust on floors in homes across the United States and Western Europe. That was the conclusion reached in a first-of-its-kind new study that looked at dust collected by the air filtration system on the orbiting space station. One day, this investigation's results could help engineers design and build spacecraft for long-term human jaunts to space. "Our findings have implications for future space stations and habitats, where it may be possible to exclude many contaminant sources by careful material choices in the early stages of design and construction," Stuart Harrad, a professor at the University of Birmingham in the United Kingdom and co-author of the study, said in a statement. The team behind the research included other scientists from the University of Birmingham as well as some affiliated with NASA's Glenn Research Center. Together, these experts identified that levels of organic contaminants in ISS dust were noticeably higher than median levels of such contaminants in house dust across the U.S. and Western Europe. As for what types of contaminants were found, there was a pretty wide range. For example, the team identified polybrominated diphenyl ethers (PBDEs), which are commonly used as a flame retardant in electrical equipment. The researchers think this chemical's presence could be the result of using inorganic flame retardants on the space station, such as ammonium dihydrogen phosphate, to make fabrics and webbing flame retardant. However, Harrad pointed out that PBDE concentrations in ISS dust samples technically fall within the range of concentrations in U.S. house dust. "While concentrations of organic contaminants discovered in dust from the ISS often exceeded median values found in homes and other indoor environments across the US and western Europe, levels of these compounds were generally within the range found on Earth," Harrad said. Other flame retardants called novel brominated flame retardants (BFRs) and organophosphate esters (OPEs) were present in ISS dust samples too. These are used in electrical and electronic equipment, building insulation, furniture fabrics and foams. Some OPEs are currently under consideration for limitation by the European Chemicals Agency because they could potentially be toxic to human health at high levels. The team also detected polycyclic aromatic hydrocarbons (PAHs), which are present in hydrocarbon fuels and released when these fuels are burned, polychlorinated biphenyls (PCBs), which are used in building and window sealants, and perfluoroalkyl substances (PFAS), which are used for stain proofing in fabrics. PFAS have been limited, and even banned, in some regions because of their potential negative effects on human health — certain PFAS are actually considered human carcinogens. Further, some contaminants found on the ISS are classed as "persistent organic pollutants (POPs)," which are compounds that can accumulate in living tissues. POPs are also known to pose health risks to humans. Many of these chemicals detected could be coming from commercially available items, the team suggested, including MP3 players, tablet computers, medical devices and clothing the ISS crew brought from Earth. Air circulating through the ISS gets changed by the space station's filtration system between eight and 10 times an hour.
But while scientists know this process removes carbon dioxide and gaseous contaminants, it isn't as clear how efficient it is at removing other chemicals. Scientists also aren't sure how high levels of ionizing radiation in space, which causes accelerated aging of materials and breakdown of plastics into airborne microplastics, can affect the abundances of PBDEs, HBCDD, NBFRs, OPEs, PAH, PFAS, and PCBs in space-based dust relative to abundances in Earth dust — particularly in indoor environments. In the microgravity environment of the ISS, the flow of contaminant particles is primarily driven by ventilation systems. Eventually, these particles either find their way to surfaces where they are deposited or to air intakes where they collect as debris on filters. That debris must be vacuumed to ensure filters keep operating at maximum efficiency. (Other debris that finds its way to the filters comes from spacecraft that carry crew and cargo between the ISS and Earth.) But importantly, what this means is that used vacuum bags on the ISS are filled with previously airborne contaminant particles alongside other materials, such as hair and lint. And it was from these vacuum bags that the team obtained the dust samples analyzed on Earth, ultimately offering a whole new understanding of which chemical contaminants are accumulating on the ISS. The team's research was published this month in the journal Environmental Science & Technology Letters.
Environmental Science
Q&A: New communication strategies to tackle climate-related water issues Access to safe drinking water is a pressing global issue, with approximately 2 billion people currently lacking consistent access to this fundamental resource—a sobering number that is projected to soar to 5 billion by 2050. The United Nations has made global water safety—ensuring universal access to drinking water that is clean, uncontaminated and properly treated—a key priority within the Sustainable Development Goals for the millennium. In their efforts to raise awareness about this environmental threat, researchers at USC made a surprising discovery: While investigating the relationship between water safety concerns, climate change and severe weather, they found that—around the world—people's worries about water safety were more strongly connected to their concerns about severe weather than climate change. Findings from the study, published in the journal Environmental Science & Technology, challenge conventional approaches to environmental communication and suggest that capturing people's attention and driving meaningful action may be better achieved by framing conversations around the immediate impact of extreme weather rather than relying on messages about climate change. USC News caught up with study authors Wändi Bruine de Bruin, Provost Professor of public policy, psychology and behavioral science at the USC Price School of Public Policy and the Department of Psychology at the USC Dornsife College of Letters, Arts and Sciences, and Joshua Inwald, a USC psychology doctoral student and first author of the study. Why does extreme weather resonate with people more than climate change? Bruine de Bruin: It's easier to see how severe weather threatens water safety. People can witness these events happening and understand how they might impact them personally, whereas climate change is a more abstract concept that is challenging to observe directly. Inwald: Right. When it comes to the public's understanding of science and acceptance of climate change, we have observed that while most people now recognize climate change as a threat, they are more inclined to acknowledge the local impacts of severe weather. They readily notice changes in rainfall patterns, increased heat waves or less intense winters in their own lifetimes and local areas. What surprised you about the study findings? Inwald: Our results were remarkably consistent across different regions of the world. By analyzing countries based on their economic development and the state of their water infrastructure, we consistently observed that worries about severe weather had a stronger predictive power for water safety concerns than individual concerns about climate change. These patterns held true across all groups of people, regardless of the statistical techniques we used during our analysis. What makes this study unique? Bruine de Bruin: Most studies on public concerns about environmental risks tend to be conducted in the United States, Europe and other high-income countries. Ours draws from the Lloyd's Register Foundation's World Risk Poll. This comprehensive data set asked participants from diverse populations to report on their concerns about water safety, climate change and severe weather, among other environmental issues. Inwald: This is especially significant in the context of water safety, which is a major concern in regions outside of the U.S. and Europe. 
While pockets of developed nations face significant water challenges, the most immediate and pressing impacts are felt by people in developing countries. Up until now, we haven't had robust data for these important populations to show how this issue is playing out on the global stage. What are the impacts of this research? Bruine de Bruin: Our findings have broad implications that extend beyond water safety. They can provide valuable insights for nonprofits, policymakers and other groups working to raise public awareness and promote behaviors that mitigate different environmental threats. Inwald: It's also important to consider that messages about water safety and other risk communications are often disseminated from the top down, usually from organizations led by individuals from high-income Western countries. This can lead to cultural misunderstandings that make it harder to communicate with diverse, global audiences. These organizations need to adopt language that aligns with how people actually perceive climate change, weather and the quality of their water supplies. Our approach also offers a distinct advantage in the U.S., where climate change is a highly polarized issue. When climate change or the climate crisis is mentioned, it often triggers political divisions, causing a significant portion of the audience to disengage. By focusing on severe weather instead, we can bypass these political concerns and potentially reach a broader audience. Journal information: Environmental Science & Technology Provided by University of Southern California
Environmental Science
Water harvesting in Death Valley: Conquering the arid wilderness Korea is regarded as a "water-stressed nation." Although the country receives approximately 1,300 mm of precipitation annually, rainfall is concentrated in particular periods and regions, giving rise to challenges stemming from water scarcity. The lack of drinking water extends beyond mere inconvenience, posing life-threatening implications for certain individuals. In March 2023, the United Nations Children's Fund (UNICEF) released a report highlighting the plight of roughly 190 million children in Africa who suffer from an absence of safe water, resulting in the tragic daily loss of 1,000 children under the age of five. Nations across the globe are employing diverse approaches in an endeavor to mitigate this issue. However, seawater desalination is energy intensive, relies predominantly on fossil fuels, and causes environmental pollution such as the discharge of concentrated brine into the sea. Harvesting atmospheric water also presents challenges, particularly in regions where humidity is less than 70%, as it requires a substantial amount of energy to condense the vapor, rendering it an ineffective solution. Recently, a joint team of researchers led by Professor Woochul Song from the Division of Environmental Science & Engineering at Pohang University of Science and Technology (POSTECH) and Omar M. Yaghi, Professor of Chemistry at UC Berkeley, accomplished successful atmospheric water harvesting using ambient sunlight in the Death Valley desert. The achievement signifies a promising breakthrough in tackling water scarcity as it harnesses an effectively unlimited resource without polluting the environment. The research findings were published in the international journal Nature Water. Metal-organic frameworks (MOFs) are porous materials characterized by minuscule holes measuring 1–2 nm. The high surface area of MOFs allows them to function as a sorbent, capturing atmospheric water vapor. The research team devised a water harvesting device, referred to as a water harvester, based on the MOF, leveraging its capability to draw in water from the atmosphere during the night while condensing the captured water into drinkable liquid using ambient sunlight throughout the day. The team's water harvester takes the form of a cylindrical structure, unlike conventional rectangular designs. This configuration ensures that the device's surface area aligns with the trajectory of the sun, maximizing the utilization of ambient sunlight from sunrise to sunset. The research team tested the harvester through water collection experiments conducted in two different locations: Berkeley in June 2022 and the Death Valley desert in August 2022. The Death Valley desert is one of the world's hottest and most arid regions. With persistently elevated temperatures reaching 40°C even at midnight, soaring to a scorching 57°C during the day, and a relative humidity below 7%, the area experiences exceptionally dry conditions. During the experiments, the device harvested up to 285 g and 210 g of water per kilogram of MOF in Berkeley and the Death Valley desert, respectively. This represents a twofold increase in water production compared to previous harvesters. Notably, the research team's harvester successfully extracted water without generating any carbon footprint, even in extremely dry conditions including a peak temperature of 60°C and an average nighttime humidity of 14%.
The team's unique condenser and MOF-sorbent system powered the water harvesting process solely with ambient sunlight, rendering additional energy sources or power supplies unnecessary. The research holds substantial significance because the experiments were conducted in genuinely extreme environments, effectively showcasing the practicality of the technology. The ability to harvest water from atmospheric water vapor has universal applicability, offering contributions toward environmental and public health objectives and thereby serving as a pivotal technology for human security. Professor Woochul Song of POSTECH explained, "We have substantiated the technology's potential to address the escalating challenges of water scarcity, further compounded by environmental issues." He added, "This technology embodies sustainability as it provides a reliable water resource regardless of geographic or weather conditions in the world." More information: Woochul Song et al, MOF water harvester produces water from Death Valley desert air in ambient sunlight, Nature Water (2023). DOI: 10.1038/s44221-023-00103-7 Journal information: Nature Water Provided by Pohang University of Science and Technology
Environmental Science
Detecting coral biodiversity in seawater samples Researchers from the Okinawa Institute of Science and Technology (OIST) have developed a method to measure coral biodiversity by extracting the environmental DNA (or eDNA) from a liter of surface seawater collected from above a reef. The method has been confirmed to work through observations made by scientific divers in the same areas of ocean. The research, conducted in collaboration with the Okinawa Prefecture Environmental Science Center and the University of Tokyo, was published in the Proceedings of the Royal Society B: Biological Sciences. It has paved the way for large-scale comprehensive surveys of reef-building coral to take place and removes the reliance on direct observations made through scientific scuba diving or snorkeling. "Beautiful coral reefs in subtropical and tropical seas account for only 0.2% of the entire ocean," said co-author Prof. Nori Satoh, Principal Investigator of OIST's Marine Genomics Unit. "However, they are the most biodiverse areas of the oceans, home to about 30% of all marine life. Reef-building corals play a key role in creating coral reefs, but recent global warming and other factors have caused bleaching, and many coral reefs are in danger of disappearing." To conserve and protect the coral reefs, it's important to first know which coral exists on the reef and how the make-up of a reef is changing over time. Previously, the only way to effectively survey a reef was through divers and snorkelers directly observing the coral and recording the species and the changes over time. This was time consuming, expensive, and labor intensive. But researchers are now utilizing the DNA that living creatures release into the environment through skin, waste products, and mucus. By extracting this eDNA from the seawater and analyzing it, a clear picture of the organisms that inhabit that part of the ocean can be obtained, without ever having to enter the water. Reef-building, or hard, corals are vital parts of coral reefs. It is estimated that there are approximately 1,300 species of reef-building corals in 236 genera worldwide. These corals release mucus into the surrounding seawater, which contains a portion of their DNA. In 2021, researchers from OIST and the University of Tokyo succeeded in developing tools that amplify and identify the DNA of 45 genera of reef-building coral. Now, the researchers have tested whether these tools are effective and accurate by conducting a large-scale survey of the ocean surrounding Okinawa using both the eDNA method and scientific divers. This involved direct visual observation by two divers to identify dominant coral genera and the collection of two or three one-liter bottles of surface seawater at each site. Seawater was filtered as soon as possible to fix the environmental DNA trapped in the filters, and the filters were brought back to the OIST laboratory for analysis. Over a four-month period, from early September to late December 2021, 62 sites around the main Okinawa Island were surveyed and two to four dominant coral genera at each reef were recorded. "We found that the eDNA analysis matched that of the direct scientific observations with more than 91% accuracy," said OIST Research Scientist Dr. Koki Nishitsuji, first author of the paper. "In fact, 41 out of the 62 sites were identical. The eDNA method indicated the presence of five dominant coral genera at all 62 sites surveyed.
What's more, the results of the environmental DNA method suggest the presence of corals never before recorded along the coast of Okinawa." The eDNA method requires complex sequencing information, and due to this, only 45 of the estimated 236 genera can currently be detected. With more information, the effectiveness of the eDNA method will increase. And, although further research is needed, the eDNA method may be able to indicate the presence of corals that are difficult to detect by direct observation. More information: Koki Nishitsuji et al, An environmental DNA metabarcoding survey reveals generic-level occurrence of scleractinian corals at reef slopes of Okinawa Island, Proceedings of the Royal Society B: Biological Sciences (2023). DOI: 10.1098/rspb.2023.0026 Journal information: Proceedings of the Royal Society B Provided by Okinawa Institute of Science and Technology
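As a toy illustration of the kind of site-by-site comparison described above, the sketch below checks how many diver-recorded dominant genera an eDNA sample also detects. The genus names and detections are invented for illustration; they are not data from the OIST survey, which compared dominant genera at 62 sites.

```python
# Toy comparison of diver-recorded dominant coral genera with eDNA detections at one site.
# Genus names and detections here are hypothetical, not data from the study.
diver_dominant = {"Acropora", "Porites", "Montipora"}
edna_detected = {"Acropora", "Porites", "Pocillopora", "Montipora"}

# Fraction of diver-recorded dominant genera that the eDNA method also detected.
agreement = len(diver_dominant & edna_detected) / len(diver_dominant)
print(f"eDNA recovered {agreement:.0%} of the diver-recorded dominant genera")  # 100%
```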
Environmental Science
Scientists map changes in soot particles emitted from wildfires Not many people would voluntarily fly through plumes of smoke emitted from wildfires. But atmospheric scientists from the U.S. Department of Energy's Brookhaven National Laboratory do, over and over, tracing flight paths that might make ordinary air passengers sick. Their goal? Measure properties of the soot particles emitted by wildfires so they can learn how these plumes affect Earth's climate. Correctly modeling the impact of these particles is important, the scientists say, because wildfires are increasing in both intensity and frequency, in part due to droughts brought about by increasing global temperatures and changing hydrological cycles. "We need a better understanding of the particles emitted by these fires, including how they evolve, so we can improve our predictions of their impacts on climate, climate change, and human health," said Arthur Sedlacek, one of the intrepid smoke samplers. Sedlacek and others at Brookhaven and collaborating institutions recently published a study in the journal Environmental Science & Technology that suggests the global climate models aren't getting the full picture. These models base estimates of how fires impact climate on the optical properties of soot particles sampled in the immediate vicinity of a fire. As the new data show, that approach fails to account for changes in soot particles over time. Those changes, the scientists say, can dramatically influence how much sunlight the particles scatter or absorb, how they interact with water, and how likely they are to form clouds—all of which are important to how they ultimately affect Earth's climate. "Based on these results, we should re-evaluate the use of near-source observations and laboratory experiments as the sole sources for determining how these particles are represented at the regional and global scale," Sedlacek said. Crisscrossing smoke plumes To gather the new data on how soot particles evolve, the scientists installed sensitive instruments designed to analyze aerosol particles on airplanes operated by the DOE and NASA. Through two aircraft-based atmospheric science campaigns led around the world by these two agencies, they flew more than 60 research flights through wildfire plumes, back and forth at increasing distances from fires. By crisscrossing each plume repeatedly, they picked up young particles relatively close to fires, as well as particles that had evolved for hours. Other flights sampled plumes far from their sources, with ages estimated to be over 10 days. Their main quarry was black carbon, or soot, the primary light-absorbing substance emitted by fires and the dominant particulate climate-warming agent. "Black carbon is a great tracer to study wildfire plumes because it is inert, meaning unreactive chemically," said Brookhaven scientist Ernie Lewis, a co-author on the study. "It readily absorbs light (which is why it is black), and thus can be easily detected by our instruments." "In addition, its only sources of removal from the atmosphere are gravity—which has little effect on these particles because of their small size—and raining out after they form cloud drops," he said. For cloud drops to form around soot, black carbon must first take on a coating of other substances. And that's where the story of simple black carbon gets a lot more complex. Within the first hour of forming, a black carbon particle begins to accumulate a coating of organic material. 
This coating comes mostly from volatile organic compounds that were vaporized from the burning vegetation, which then encapsulate the black carbon "cores." Most climate models assume that all soot particles look like these uniformly coated black carbon cores. But the data collected for this study show that the thickness of the coating material remains relatively constant for only one to two days. Then the level of coating begins decreasing slowly until only about 30 percent of it remains by about day 10 of the particle's lifecycle. "This slow loss of coating material is not captured in today's climate models," Sedlacek said. The data show that the particles spend a greater portion of their existence with thinning coatings—and closer to the final, 30-percent-coated stage of evolution—than any other portion of their lifecycle, which can last a couple of weeks. Absorbing and scattering light These changes have dramatic impacts on the particles' optical properties. For example, as the thickness of a coating on a black carbon particle decreases, the amount of light scattered by the particle decreases more rapidly than the amount of light absorbed. Since light scattering by particles generally has a net cooling effect on Earth's climate and light absorption has a net warming effect, this change in the balance alters the effect of these particles on climate. In addition, for plumes at low elevations where clouds form, particles that have a thicker coating more readily form cloud drops. That means these particles can be removed from the plume and fall out of the atmosphere if the cloud drop becomes a raindrop. The black carbon particles remaining in the plume are therefore the ones with thinner coatings. "Thinner coatings make these remaining plume particles relatively more light-absorbing and less scattering than the mix of particles that was in the plume before clouds formed," Lewis explained. "And once again, more absorption may trap more heat and warm Earth's climate." Tools of the trade The main instrument used to characterize black carbon particles is the Single Particle Soot Photometer (SP2). This instrument sends a stream of particles one-by-one through a laser beam and picks up flashes of light when the particles vaporize. It can glean information about thousands of particles per second, including their sizes and the thickness of their coatings. Here's how it works: If a particle does not contain black carbon, it only scatters light from the laser, and the amount of light scattered can be used to determine the size of the particle. Particles that contain black carbon aren't as simple. They scatter some light, but they also absorb light (remember, that's what makes the black carbon part of the particles black). In doing so, they rapidly heat up, triggering a series of events that occur over just a few millionths of a second. First, the coatings evaporate, making the particles smaller, resulting in less scattered light. Next, with their coatings gone and unable to dissipate heat, the black carbon particles really heat up—to nearly 7,000 degrees Fahrenheit! That makes them vaporize and emit a flash of light. The amount of light picked up by the SP2 is directly related to how much black carbon was in the particle. Subtracting that amount from the particle size determined by the initial laser scattering gives the scientists the thickness of the coating. 
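As a rough illustration of the subtraction step described above: the example numbers and the spherical, concentric-core geometry are assumptions for clarity, not part of the SP2's actual calibration, which relies on instrument-specific Mie-theory lookup tables.

```python
# Toy version of the SP2 coating-thickness logic described above, assuming
# spherical particles with a concentric black-carbon core. Real SP2 retrievals
# use instrument calibrations rather than this bare geometric subtraction.

def coating_thickness_nm(coated_diameter_nm, bc_core_diameter_nm):
    """Coating thickness is half the difference between the coated-particle
    diameter (inferred from laser light scattering) and the black-carbon core
    diameter (inferred from the incandescence flash)."""
    return (coated_diameter_nm - bc_core_diameter_nm) / 2.0

# Hypothetical particle: 300 nm coated diameter with a 180 nm black-carbon core.
print(coating_thickness_nm(300.0, 180.0))  # -> 60.0 nm of coating
```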
This wealth of data about both the black carbon particles in the plume and the other particles can then be used to calculate how the plume as a whole interacts with sunlight and affects climate. And by making measurements of plumes at various ages, scientists can obtain a better understanding of how these interactions change over the lifetimes of the plumes. The observations made from these field campaigns are also helpful in that they draw attention to new research questions—such as how the particles in the smoke plume form cloud drops. "Understanding the complex and intricate interactions between aerosols and clouds requires a multipronged approach where field measurements inform models, models find discrepancies necessitating targeted laboratory experiments, and where results from laboratory studies motivate new field campaigns," Sedlacek said. More information: Arthur J. Sedlacek et al, Using the Black Carbon Particle Mixing State to Characterize the Lifecycle of Biomass Burning Aerosols, Environmental Science & Technology (2022). DOI: 10.1021/acs.est.2c03851 Journal information: Environmental Science & Technology Provided by Brookhaven National Laboratory
Environmental Science
Examining the bio-impact of toxic chemical cocktails in the environment Purdue University scientists are unraveling the complicated toxicity of a mixture of what are often called "forever chemicals" found in many consumer products. In outdoor experiments under controlled conditions, the team found that tadpoles exposed to a common mixture of these compounds, called perfluoroalkyl and poly-fluoroalkyl substances (PFAS), suffered reduced growth as they transformed into juvenile frogs. Size at this life stage is related to the survival and reproductive success of amphibians generally, said Purdue research faculty member Tyler Hoskins. "There are over 5,000 of these chemicals out there that we know of, and that list continues to grow as our analytical capabilities grow," Hoskins said. A common source of these chemicals is the fire-retardant aqueous film-forming foams (AFFF) that have been used for more than 50 years to douse fuel fires at airports and military sites. But PFAS are widespread environmental contaminants that are also found in fast-food packages, nonstick coatings on cookware, cosmetics, biosolid-derived fertilizers and a broad range of manufacturing processes. "We were trying to mimic what aquatic organisms would experience if they were near a site where AFFF had been historically used. Water bodies at airports and defense sites are the areas where you would expect surface water to end up with AFFF," he said. Hoskins and nine co-authors published their results in a paper highlighted on the cover of the journal Environmental Science & Technology. "PFAS are perhaps the most persistent class of chemicals we have created since we started producing chemicals," said co-author Maria Sepúlveda, professor in the Department of Forestry and Natural Resources. "Studies that look at PFAS mixtures are very critical right now, and there aren't very many because they are hard to do." Existing studies tend to examine PFAS at the cellular level in the laboratory rather than in whole animals. A big challenge for scientists is how to sort the toxicity of various PFAS mixtures. In the field, Hoskins noted, animals become exposed to dozens of these chemicals at the same time. But when scientists run laboratory tests or outdoor studies under controlled conditions that simulate real-world environments, they often focus on exposures to three or fewer compounds. "It's rare to look at what's actually in the environment," Hoskins said. The team designed the study to examine the relative role of one PFAS in particular—perfluorooctane sulfonate (PFOS)—as part of a mixture with four other PFAS. The company that manufactured PFOS voluntarily phased it out in 2002. "But it's still the most commonly detected PFAS in the environment and in animals. It's a really important one to study," Hoskins said. PFOS accumulates in biological tissue more than most PFAS chemicals and also ranks among the most toxic. The researchers had hoped to resolve whether PFOS would stand out as the most toxic chemical of the mixture. Their results suggested, however, that PFOS was no more toxic than the other four PFAS in the mixture. The U.S. banned another class of long-lasting toxic chemicals, polychlorinated biphenyls (PCBs), in 1979. There are close to 250 types of PCBs, compared to thousands for PFAS. Scientists discovered long ago that all PCBs act in the same way. "With PFAS, that's not the case," Sepúlveda said. "There's nothing unique that you can say, "Oh, that's PFAS exposure." 
There are so many different mechanisms going on that it's hard to study them because you don't know how they act." In studies conducted over the last five years at the whole-animal level, Sepúlveda's team has seen that each chemical in the tested mixtures usually has additive effects. The pervasive nature of PFAS complicates the studies. "If I took a blood sample from you, you're going to have a PFAS profile in your blood," Sepúlveda said. "It's composed of several chemicals, and not all of them have the same toxicity. You need to know how those might interact when they're together to impact toxicity." The chemicals are commonly found in laboratory equipment, too. This includes glassware, plasticware and even the rabbit chow they feed the frogs. "We found it in the tanks that leached. And worst of all, it's in the rainwater," Sepúlveda said. "How do you control for that? We can't keep the tanks sealed. It's a problem right now." The research team has follow-up studies in progress to further study mixture toxicity in insects and in aquatic community dynamics. "Aquatic organisms exist in a community, and they interact with one another," Hoskins said. "When one member of the community gets perturbed, that can have ripple effects for other members of the community. The community-level effects of PFAS have not received much research attention." The team chose amphibians for this study because they have an aquatic life stage and can breed near sites affected by AFFF. "Just like every other animal, they serve important roles in the ecosystem," Hoskins noted. They eat a lot of insects, including mosquitoes. And they serve as prey, in turn, for other animals such as herons, turtles and snakes. "Chemicals serve a lot of important purposes for us. But if we're going to put large amounts of chemicals into the environment, it's our responsibility to understand what they're doing to our health and wildlife health," Hoskins said. "That's what we're trying to do here." More information: Tyler D. Hoskins et al, Chronic Exposure to a PFAS Mixture Resembling AFFF-Impacted Surface Water Decreases Body Size in Northern Leopard Frogs (Rana pipiens), Environmental Science & Technology (2023). DOI: 10.1021/acs.est.3c01118 Journal information: Environmental Science & Technology Provided by Purdue University
Environmental Science
Analyzing the fate and environmental risks of 40 pharmaceutically active compounds in the Pearl River basin Pharmaceutically active compounds (PhACs) have attracted massive attention due to the large quantities consumed, their continuous discharge, and their ecological risk to aquatic organisms in receiving aquatic environments. The Pearl River basin serves as an important source of drinking water for many cities in southern China. Prof. Bin Yang of South China Normal University and colleagues comprehensively investigated the occurrence, fate, and environmental risk of 40 PhACs in surface waters and sediments of the Beijiang River, Xijiang River, and Maozhou River in the Pearl River basin, South China. Major classes of PhACs in this study included NSAIDs, antidepressants, anxiolytics, and antihypertension drugs. In addition, the spatial and seasonal variations in the concentrations of these PhACs among and within the different rivers, along with the environmental risk, were evaluated. Their analysis was published in the journal Frontiers of Environmental Science & Engineering on April 15, 2023. Pharmaceutically active compounds are widely used in human and veterinary medicine to prevent, diagnose, and treat diseases. Major groups of PhACs include nonsteroidal anti-inflammatory drugs (NSAIDs), antidepressants, anxiolytics, and hypertension medicines. Most of these drugs have been regarded as emerging organic contaminants in aquatic ecosystems. PhACs have been frequently detected in surface waters throughout the world at concentrations ranging from ng/L to μg/L over the past decades. The primary sources of PhACs in aquatic environments are effluents from wastewater treatment plants (WWTPs), wastewaters from the livestock and poultry industries, and sewage containing discarded expired or unused drugs. Of the PhACs, NSAIDs are among the most frequently used pharmaceuticals globally and are listed among the top 10 persistent pollutants. Widely detected NSAIDs in aquatic environments include ibuprofen, diclofenac, naproxen, ketoprofen, and salicylic acid. Meanwhile, psychotropic pharmaceuticals (PPhs) are widely consumed in developed countries. Antidepressants and anxiolytics are the most widely detected PPhs in aquatic environments. Numerous studies have shown that many PhACs are toxic to aquatic organisms. Therefore, the environmental risk of PhACs cannot simply be ignored. Previous research on the pollution of PhACs in the Pearl River basin has been conducted with one or two sampling campaigns along one main stream with recognized sources of PhAC contamination. This approach could not provide a complete understanding of variations in the frequency of occurrence and the environmental risk of PhACs. Moreover, data on PhACs for multiple rivers in the Pearl River basin from one sampling campaign in each of two seasons in a year are relatively scarce. The work of Professor Bin Yang's team fills this gap. In this study, the research team found the average concentrations of detected PhACs in surface waters and sediments ranged from 0.17 to 19.1 ng/L and 0.10 to 10.4 ng/g, respectively. Meanwhile, PhAC concentrations in surface waters and sediments varied greatly among and within the Beijiang River, Xijiang River, and Maozhou River.
The largest annual flux of PhACs in the Xijiang River and Beijiang River was more than 11,000 kg per annum, whereas the Maozhou River carried only 25.7 kg/a. In addition, the estimated emissions of PhACs in the Beijiang River, Xijiang River, and Maozhou River ranged from 0.28 to 4.22 kg/a, 0.12 to 6.72 kg/a, and 6.66 to 91.0 kg/a, respectively, and the back-estimated usage ranged from 12.0 to 293 kg/a, 6.79 to 944 kg/a, and 368 to 17,459 kg/a. Moreover, the emissions of PhACs showed a close relationship with the gross domestic product (GDP) of each city along the Pearl River. The environmental risk assessment suggested that diazepam and ibuprofen posed a moderate risk in this region. This study has revealed that PhACs are ubiquitous in the surface waters and sediments of the three major rivers of the Pearl River basin. The concentrations of commonly detected PhACs are comparable to those detected in many other river basins in China, and lower than those in other countries. Meanwhile, there is a distinctive spatial and seasonal variation in PhAC occurrence profiles within and among the three river basins. This study provides important data on the fate and environmental assessment of PhACs in the Pearl River basin. The authors point out that multi-year monitoring of PhAC pollution is necessary for a complete understanding of the risk of PhACs in the Pearl River basin. More information: Haojun Lei et al, Occurrence, spatial and seasonal variation, and environmental risk of pharmaceutically active compounds in the Pearl River basin, South China, Frontiers of Environmental Science & Engineering (2022). DOI: 10.1007/s11783-023-1646-8 Provided by Higher Education Press
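The article mentions an environmental risk assessment in which diazepam and ibuprofen came out as moderate risks. Below is a minimal sketch of the conventional risk-quotient screening typically used for such assessments; the approach (RQ equals the measured concentration divided by a predicted no-effect concentration, with 0.1 and 1 as common thresholds) is a standard convention rather than a detail reported for this study, and the PNEC value in the example is a hypothetical placeholder.

```python
# Illustrative risk-quotient (RQ) screening for a pharmaceutically active compound:
# RQ = measured environmental concentration (MEC) / predicted no-effect concentration (PNEC).
# Thresholds of 0.1 and 1 for "moderate" and "high" risk are a common convention;
# the PNEC below is a hypothetical placeholder, not a value from the Pearl River study.

def risk_quotient(mec_ng_per_l, pnec_ng_per_l):
    return mec_ng_per_l / pnec_ng_per_l

def risk_category(rq):
    if rq >= 1:
        return "high"
    if rq >= 0.1:
        return "moderate"
    return "low"

# Hypothetical example: a compound measured at 19.1 ng/L with an assumed PNEC of 100 ng/L.
rq = risk_quotient(19.1, 100.0)
print(round(rq, 3), risk_category(rq))  # 0.191 moderate
```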
Environmental Science
University of Florida researchers found high-quality human DNA in nearly all environments, suggesting potential applications in fields like medicine, archaeology, and criminal forensics. However, this raises significant ethical concerns around consent and privacy, underscoring the need for guidelines to govern the collection and use of environmental DNA (eDNA). On the beach. In the ocean. Traveling along riverways. In muggy Florida and chilly Ireland. Even floating through the air. We cough, spit, shed and flush our DNA into all of these places and countless more. Signs of human life can be found nearly everywhere, short of isolated islands and remote mountaintops, according to a new University of Florida study. That ubiquity is both a scientific boon and an ethical dilemma, say the UF researchers who sequenced this widespread DNA. The DNA was of such high quality that the scientists could identify mutations associated with disease and determine the genetic ancestry of nearby populations. They could even match genetic information to individual participants who had volunteered to have their errant DNA recovered. David Duffy, the UF professor of wildlife disease genomics who led the project, says that ethically handled environmental DNA samples could benefit fields from medicine and environmental science to archaeology and criminal forensics. For example, researchers could track cancer mutations from wastewater or spot undiscovered archaeological sites by checking for hidden human DNA. Or detectives could identify suspects from the DNA floating in the air of a crime scene. But this level of personal information must be handled extremely carefully. Now, scientists and regulators must grapple with the ethical dilemmas inherent in accidentally — or intentionally — sweeping up human genetic information, not from blood samples but from a scoop of sand, a vial of water or a person’s breath. Published on May 15 in the journal Nature Ecology and Evolution, the paper by Duffy’s group outlines the relative ease of collecting human DNA nearly everywhere they looked. “We’ve been consistently surprised throughout this project at how much human DNA we find and the quality of that DNA,” Duffy said. “In most cases the quality is almost equivalent to if you took a sample from a person.” Because of the ability to potentially identify individuals, the researchers say that ethical guardrails are necessary for this kind of research. The study was conducted with approval from the institutional review board of UF, which ensures that ethical guidelines are adhered to during research studies. “It’s standard in science to make these sequences publicly available. But that also means if you don’t screen out human information, anyone can come along and harvest this information,” Duffy said. “That raises issues around consent. Do you need to get consent to take those samples? Or institute some controls to remove human information?” Duffy’s team at UF’s Whitney Laboratory for Marine Bioscience and Sea Turtle Hospital has successfully used environmental DNA, or eDNA, to study endangered sea turtles and the viral cancers they are susceptible to. They’ve plucked useful DNA out of turtle tracks in the sand, greatly accelerating their research program. The scientists knew that human eDNA would end up in their turtle samples and probably many other places they looked. With modern genetic sequencing technology, it’s now straightforward to sequence the DNA of every organism in an environmental sample. 
The questions were how much human DNA there would be and whether it was intact enough to harbor useful information. The team found quality human DNA in the ocean and rivers surrounding the Whitney Lab, both near town and far from human settlement, as well as in sand from isolated beaches. In a test facilitated by the National Park Service, the researchers traveled to part of a remote island never visited by people. It was free of human DNA, as expected. But they were able to retrieve DNA from voluntary participants’ footprints in the sand and could sequence parts of their genomes, with permission from the anonymous participants. Duffy also tested the technique in his native Ireland. Tracing along a river that winds through town on its way to the ocean, Duffy found human DNA everywhere but the remote mountain stream where the river starts, far from civilization. The scientists also collected room air samples from a veterinary hospital. They recovered DNA matching the staff, the animal patient, and common animal viruses. Now that it’s clear human eDNA can be readily sampled, Duffy says it’s time for policymakers and scientific communities to take issues around consent and privacy seriously and balance them against the possible benefits of studying this errant DNA. “Any time we make a technological advance, there are beneficial things that the technology can be used for and concerning things that the technology can be used for. It’s no different here,” Duffy said. “These are issues we are trying to raise early so policymakers and society have time to develop regulations.” Reference: “Inadvertent human genomic bycatch and intentional capture raise beneficial applications and ethical concerns with environmental DNA” by Liam Whitmore, Mark McCauley, Jessica A. Farrell, Maximilian R. Stammnitz, Samantha A. Koda, Narges Mashkour, Victoria Summers, Todd Osborne, Jenny Whilde and David J. Duffy, 15 May 2023, Nature Ecology & Evolution. DOI: 10.1038/s41559-023-02056-2
Environmental Science
The implementation of “clean heating” policies has dramatically improved air quality in northern China — likely preventing about 23,000 premature deaths last year alone, a new study has found. Concentrations of fine particulate matter (PM 2.5) from heating activities dropped by 41.3 percent from 2015 to 2021 in Beijing and 27 other surrounding cities, according to the study, published on Wednesday in Environmental Science & Technology. This pollution plunge in so-called “2+26 cities” was nearly four times bigger than that of other northern Chinese cities, which use lower levels of clean fuels, the researchers found. The 2+26 cohort includes Beijing, Tianjin and 26 cities in the Hebei, Shanxi, Shandong and Henan provinces. Northern municipalities outside the 2+26 group reduced their pollution levels by only 12.9 percent during the same 2015-2021 window, according to the study. “We showed that heating in northern China was a major source of air pollution,” corresponding author Zongbo Shi, a professor of atmospheric biochemistry at the University of Birmingham, said in a statement. “However, clean heating policies have caused the annual PM 2.5 in mainland China to reduce significantly between 2015 and 2021, with significant public health benefits,” Shi added. Rural areas of China have historically burned biomass in addition to using central heating — with both biomass and coal burning leading to severe haze episodes during the winter, according to the study. But in 2013, China introduced its Air Pollution Prevention and Control Action Plan, which accelerated the use of centralized and district heating and encouraged a switch to clean fuels, the authors noted. Four years later, the Chinese central government issued a Clean Winter Heating Plan for Northern China, aimed at increasing the region’s share of clean heating to 50 percent by 2019 and 70 percent by 2021, compared to a 2016 baseline scenario. Within the urban areas of the 2+26 cities, the share of clean heating was supposed to surpass 90 percent by 2019 and reach 100 percent by 2021, according to the 2017 plan. That plan also defined “clean heating” as heating supplies that run on certain lower-emission fuels, a 2021 study in the Journal of Cleaner Production explained. Some such fuels include natural gas, electricity, geothermal, biomass, solar energy, industrial waste heat, nuclear energy and so-called “clean coal” — coal whose carbon emissions are captured during the burning process. The researchers estimated that clean heating policies helped avoid 23,556 premature deaths in 2021. In total, they found that winter heating from 2015-2021 accounted for about 2.8 percent and 1.6 percent of premature deaths in northern and mainland China, respectively. Meanwhile, they stressed that winter heating can increase death rates associated with exposure to ozone — creating secondary pollution risks across northern China. The potential to improve public health conditions by abating heating-related particulate pollution therefore remains significant, the authors noted. “Clean heating policies in northern China not only reduced air pollution but also greenhouse gas emissions, contributing to China’s push for carbon neutrality,” co-author Robert Elliott, an applied economist at the University of Birmingham, said in a statement. “Decarbonizing heating should remain a key part of China’s carbon neutrality strategy that not only reduces air pollution but also provide significant public health benefits,” Elliott added.
Environmental Science
Scientists fear methane erupting from the burst Nord Stream pipelines into the Baltic Sea could be one of the worst natural gas leaks ever and pose significant climate risks.Neither of the two breached Nord Stream pipelines, which run between Russia and Germany, was operational, but both contained natural gas. This mostly consists of methane – a greenhouse gas that is the biggest cause of climate heating after carbon dioxide.The extent of the leaks is still unclear but rough estimates by scientists, based on the volume of gas reportedly in one of the pipelines, vary between 100,000 and 350,000 tonnes of methane.Jasmin Cooper, a research associate at Imperial College London’s department of chemical engineering, said a “lot of uncertainty” surrounded the leak.“We know there are three explosions but we don’t know if there are three holes in the sides of the pipe or how big the breaks are,” said Cooper. “It’s difficult to know how much is reaching the surface. But it is potentially hundreds of thousands of tonnes of methane: quite a big volume being pumped into the atmosphere.”Nord Stream 2, which was intended to increase the flow of gas from Russia to Germany, reportedly contained 300m cubic metres of gas when Berlin halted the certification process shortly before Russia invaded Ukraine.That volume alone would translate to 200,000 tonnes of methane, Cooper said. If it all escaped, it would exceed the 100,000 tonnes of methane vented by the Aliso Canyon blowout, the biggest gas leak in US history, which happened in California in 2015. Aliso had the warming equivalent of half a million cars. “It has the potential to be one of the biggest gas leaks,” said Cooper. “The climate risks from the methane leak are quite large. Methane is a potent greenhouse gas, 30 times stronger than CO2 over 100 years and more than 80 times stronger over 20 years.”Prof Grant Allen, an expert in Earth and environmental science at Manchester University, said it was unlikely that natural processes, which convert small amounts of methane into carbon dioxide, would be able to absorb much of the leak.Allen said: “This is a colossal amount of gas, in really large bubbles. If you have small sources of gas, nature will help out by digesting the gas. In the Deepwater Horizon spill, there was a lot of attenuation of methane by bacteria.“My scientific experience is telling me that – with a big blow-up like this – methane will not have time to be attenuated by nature. So a significant proportion will be vented as methane gas.”Unlike an oil spill, gas will not have as polluting an effect on the marine environment, Allen said. “But in terms of greenhouse gases, it’s a reckless and unnecessary emission to the atmosphere.”Germany’s environment agency said there were no containment mechanisms on the pipeline, so the entire contents were likely to escape. The Danish Energy Agency said on Wednesday that the pipelines contained 778m cubic metres of natural gas in total – the equivalent of 32% of Danish annual CO2 emissions.This is almost twice the volume initially estimated by scientists. This would significantly bump up estimates of methane leaked to the atmosphere, from 200,000 to more than 400,000 tonnes. 
More than half the gas had left the pipes and the remainder is expected to be gone by Sunday, the agency said. Jean-Francois Gauthier, vice-president of measurements at the commercial methane-measuring satellite firm GHGSat, said evaluating the total gas volume emitted was "challenging". "There is little information on the size of the breach and whether it is still going on," Gauthier said. "If it's a significant enough breach, it would empty itself." "It's safe to say that we're talking about hundreds of thousands of tonnes of methane. In terms of leaks, it's certainly a very serious one. The catastrophic instantaneous nature of this one – I've certainly never seen anything like that before." In terms of the climate impact, 250,000 tonnes of methane was equivalent to the impact of 1.3m cars driven on the road for a year, Gauthier said.
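As a rough cross-check of the volume-to-mass figures quoted above, here is a back-of-envelope sketch. Treating the pipeline gas as pure methane at a density of roughly 0.7 kg per cubic metre near standard conditions is a simplifying assumption, not a reported value; the warming factors are the ones cited in the article.

```python
# Back-of-envelope conversion from leaked gas volume to methane mass and CO2-equivalent.
# Assumptions: pipeline gas treated as pure methane, density ~0.7 kg/m3 at near-standard
# conditions, and the warming factors quoted in the article (about 30x CO2 over
# 100 years, more than 80x over 20 years).
METHANE_DENSITY_KG_PER_M3 = 0.7
GWP_100_YEARS = 30
GWP_20_YEARS = 80

def leak_impact(volume_m3):
    mass_tonnes = volume_m3 * METHANE_DENSITY_KG_PER_M3 / 1_000
    return mass_tonnes, mass_tonnes * GWP_100_YEARS, mass_tonnes * GWP_20_YEARS

# Nord Stream 2 alone reportedly held about 300 million cubic metres of gas.
mass, co2e_100, co2e_20 = leak_impact(300e6)
print(f"~{mass:,.0f} t CH4, ~{co2e_100/1e6:.1f} Mt CO2e (100 yr), ~{co2e_20/1e6:.1f} Mt CO2e (20 yr)")
# ~210,000 t CH4, ~6.3 Mt CO2e (100 yr), ~16.8 Mt CO2e (20 yr) -- consistent with
# the roughly 200,000-tonne estimate quoted for Nord Stream 2 above.
```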
Environmental Science
The U.S. Southwest has been in a drought since 2000 — in fact, it's been the region's driest period in 1,200 years. Many researchers have labeled this exceptionally dry period a "megadrought." At the same time, an "exceptionally strong" El Niño event is now 95% likely to last through at least February 2024, National Oceanic and Atmospheric Administration (NOAA) scientists predict. Given that this ocean-warming event typically brings wet weather to the Southwest, could an end to the megadrought finally be in sight? Unfortunately, one strong El Niño on its own is probably not enough to end the megadrought, experts told Live Science. And even if the wetter conditions do end the 22-year-long drought, the region is likely transitioning to a permanently drier baseline. That means the region needs to figure out long-term strategies to make sure there's enough water to go around. While there are various definitions of a megadrought, it's generally considered a drought that lasts longer than two decades, is more severe than other droughts the area has seen, or some combination of the two. "No matter how you slice it, we're in dry conditions in the [US] Southwest and are not projected to come out of them, at least in the long term, anytime soon," Samantha Stevenson, a professor at the Bren School of Environmental Science and Management at the University of California, Santa Barbara, told Live Science. El Niño, meanwhile, occurs when ocean temperatures in the tropical Pacific are warmer than usual. "It has global consequences," said Erika Wise, a professor in the Department of Geography at the University of North Carolina at Chapel Hill. "Some places have floods, some places have droughts, some places are warm, some places are cool. The Southwest has one of the more reliable responses to El Niño, even though it's pretty far away, which is that it tends to be wetter in El Niño years," she told Live Science. Possible end to drought? All that rain could be enough to pull the region out of drought, at least for a while, Stevenson said. But any rain El Niño brings will have to make up for the extreme heat of this summer in much of the Southwest, with temperatures in Phoenix, for example, topping 110 degrees Fahrenheit (43 degrees Celsius) for 31 days in a row. "The thing with drought is that it's not just [about] rain; it's also evaporation," Wise said. Hot weather increases the evaporation of all the rainfall from this year's wet winter. Nor is it guaranteed that El Niño will bring rain. El Niño loads the dice favoring wet conditions, but there's "certainly no guarantee that the wet conditions will play out," said Park Williams, a professor in the Department of Geography at UCLA. Williams said he does think that, at some point, the West will have a sequence of wet years that breaks the current megadrought. And this year is a possible contender. "After the wet conditions of 2023 and given a developing El Niño for 2024 we could certainly be headed in that direction now," Williams said. However, similar predictions have been made before but didn't come to pass. After wet years in 2017 and 2019, researchers thought the megadrought would end. Then, severe drought returned in summer 2020 and stayed until last winter, he said. A drier future In the long term, climate change is shifting the U.S. Southwest to a drier baseline, Stevenson and her colleagues have shown.
"You can get these extreme wet conditions temporarily, but that background warming and drying due to climate change is so powerful that that's going to win out in the end," she said. In fact, a 2022 study by Williams and colleagues showed that the current megadrought only became a megadrought due to human-driven climate change. With the shifting baseline demonstrated by Stevenson's work, it's clear that no matter what, the region needs to use less water in the future than it has over the past century. In the 1920s, the water in the Colorado River was divided among the states in which it runs, for example, and more water was allocated in total than is available today. Ultimately, the question that really matters is not whether the megadrought will persist but whether there will be enough water to go around. "A lot of that has more to do with the types of water infrastructure, and choices that we make in terms of conservation, than the absolute amount of water in the soil," Stevenson said. Even if the megadrought does officially end, climate change will require adjustments to water use to account for the amount available.
Environmental Science
Two major ridesharing companies have promised all-electric fleets by 2030 in an effort to reduce their carbon footprint. To understand additional impacts of this transition, researchers reporting in ACS' Environmental Science & Technology conducted life-cycle comparisons of battery-powered electric vehicle fleets to a gas-powered one, using real-world rideshare data. They found up to a 45% reduction in greenhouse gas emissions from full electrification; however, traffic problems and air pollution could increase. Ridesharing apps are an increasingly popular way to travel around urban areas, especially for people without their own vehicles. But the cars and SUVs used in these situations drive more miles each year than a typical personal vehicle, contributing a higher proportion of greenhouse gases to the environment. Previously, researchers calculated that rideshare companies' carbon footprints could significantly decrease by fully electrifying their fleets. However, few studies have used real-world rideshare trip data in their estimates, or included additional assessments of air pollution and traffic impacts from the switch. So, Aniruddh Mohan and colleagues wanted to develop a method that evaluated the life-cycle costs and benefits for two battery-powered ridesourcing fleets and a gasoline-powered one. The researchers collected real-world rideshare trip data for Chicago and used it to simulate rides provided by three fleets: gasoline-powered, and electric-powered with either 40 kWh or 60 kWh battery packs. Then they comprehensively estimated the use-phase and life-cycle impacts of the trips made in the simulations. Combining these data, they assigned a monetary value to each trip, based on the assumed damage done by carbon emissions, negative health impacts and traffic-related issues. The analysis indicated that electrified fleets had 40-45% lower greenhouse gas costs per trip compared to the gasoline-powered version. But the battery-powered electric vehicles were responsible for slightly higher air pollution from increased demand at local power plants for recharging purposes, as well as more ground-level particulates from tire and brake dust. They also were involved in more traffic problems, including crashes, congestion and noise, than the internal combustion option. In the simulations, battery-powered vehicles, particularly the 40 kWh ones, needed more frequent and longer trips without passengers to get to recharging stations. Overall, a conversion to battery-powered electric rideshare fleets could reduce the costs to society by 3-11% per trip, depending on the cost assigned to greenhouse gas emissions, the researchers say. They conclude that these results are specific to Chicago, and cities with different power grids and street layouts could have different assessments from full electrification.
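The monetization step the study describes, converting each simulated trip's climate, health, and traffic impacts into a single dollar figure, can be illustrated with a small sketch. The per-unit damage prices and per-trip quantities below are hypothetical placeholders rather than numbers from the paper; only the overall structure (summing damage category times unit cost and comparing fleets) follows the approach described above.

```python
# Hedged sketch of a per-trip social-cost comparison in the spirit of the
# study described above. All unit damage costs and per-trip quantities are
# hypothetical placeholders, NOT values from the paper.

from dataclasses import dataclass

@dataclass
class TripImpacts:
    ghg_kg_co2e: float        # life-cycle greenhouse gases per trip
    pm25_grams: float         # particulates (tailpipe, power plant, tire/brake wear)
    deadhead_km: float        # empty travel, e.g. repositioning or driving to chargers

# Assumed damage prices (illustrative only).
COST_PER_KG_CO2E = 0.19      # $/kg, roughly a $190/tonne cost of carbon
COST_PER_G_PM25 = 0.35       # $/g, stand-in for monetized health damages
COST_PER_KM_TRAFFIC = 0.06   # $/km, stand-in for congestion, crash and noise costs

def social_cost(trip: TripImpacts) -> float:
    """Sum monetized damages for one trip."""
    return (trip.ghg_kg_co2e * COST_PER_KG_CO2E
            + trip.pm25_grams * COST_PER_G_PM25
            + trip.deadhead_km * COST_PER_KM_TRAFFIC)

# Hypothetical average trips for a gasoline fleet and a small-battery EV fleet.
gasoline_trip = TripImpacts(ghg_kg_co2e=3.2, pm25_grams=0.9, deadhead_km=2.0)
ev_40kwh_trip = TripImpacts(ghg_kg_co2e=1.8, pm25_grams=1.0, deadhead_km=3.5)

for name, trip in [("gasoline", gasoline_trip), ("EV 40 kWh", ev_40kwh_trip)]:
    print(f"{name}: ${social_cost(trip):.2f} per trip")
```

The qualitative pattern reported in the article, lower climate costs but somewhat higher pollution and traffic costs for the electric fleet, depends heavily on which unit prices are assumed, which is one reason the authors report a range (3-11%) rather than a single figure.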
Environmental Science
Some lunar regolith is better for living off the land on the moon Between now and the mid-2030s, multiple space agencies hope to send crewed missions to the moon. These plans all involve establishing bases around the moon's southern polar region, including the Artemis Base Camp and the International Lunar Research Station (ILRS). These facilities will enable a "sustained program of lunar exploration and development," according to the NASA Artemis Program mission statement. In all cases, plans for building facilities on the surface call for a process known as In-Situ Resource Utilization (ISRU), where local resources are used as building materials. This presents a bit of a problem since not all lunar soil (regolith) is well-suited for construction. Much like engineering and construction projects here on Earth, builders need to know what type of soil they are building on and if it can be used to make concrete. In a study, geologist Kevin M. Cannon proposed a lunar soil classification scheme for space resource utilization. This could have significant implications for future missions to the moon, where it would help inform the construction of bases, habitats, and other facilities based on soil type and location. Dr. Cannon is an assistant professor at the Department of Geology and Geological Engineering and Space Resources Program at the Colorado School of Mines, an engineering university in Golden, Colorado. His research is focused on the role geologic processes play in the formation and evolution of planetary materials on the surfaces of different bodies in the solar system. The paper that describes his proposed scheme, "A lunar soil classification system for space resource utilization," appeared in the journal Planetary and Space Science. For space agencies, ISRU comes down to harvesting local resources to create building materials and provide for the crews' basic needs (water, air, fuel, etc.). This reduces the amount of prefabricated components or materials that need to be launched into space, dramatically reducing costs in the process. For years, NASA, the ESA, and other space agencies have investigated how lunar regolith could be used as feedstock for 3D printers. Combined with a bonding agent or sintered to produce a molten ceramic, this regolith could then be "printed" onto inflatable modules to create various facilities. Thanks to soil analysis and sample return missions conducted by robotic missions and the Apollo astronauts, scientists have learned a great deal about the composition of soil on the moon. In particular, they learned that (like soils here on Earth) the composition varies from one location to another. "In any construction project on Earth, you'd want to know what kind of ground you're building on," explained Cannon via email. "The same will almost certainly be true on the moon, and it helps to have a consistent scheme for one person to describe a soil to another." Here on Earth, soil classification schemes are used for everything from construction and civil engineering projects to environmental science. Soil types can also influence key considerations, like how deep a foundation is needed depending on what is being built or what the slope should be on a highway embankment. However, these schemes do not apply to the lunar environment: "Soil classifications on Earth are largely based on how 'plastic' the soil is due to clays and water, and often you get sorting that happens where for example a soil is mostly coarse gravel or mostly fine silt.
On the moon there are no clays, the soils are completely dry, and almost everywhere you have an even blend of different particle sizes. So the systems we use terrestrially really just don't transfer." In addition, there are also the formation mechanisms, which are significantly different on the moon. Whereas soil on Earth results from "weathering" (erosion) by water and wind, lunar regolith was created by a combination of volcanic activity and a long history of impacts by asteroids, meteoroids, and micrometeoroids. The result of this is a lunar surface covered in a thick layer of rock fragments, glass beads (impact and volcanic), aluminum-bearing minerals (plagioclase), and agglutinate—aggregates of mineral grains, rock fragments, and glass welded together by impact-generated melt, a material found only on the moon. To develop his classification scheme, Cannon used sample data from the Apollo soil samples and developed proof of concept maps for the entire moon. To determine which soil types were well-suited for construction purposes, his scheme considered two key characteristics of lunar regolith: bulk iron content and grain size. In terms of composition, lunar regolith consists of (in order of its mass fraction) elemental oxygen, silicon, iron, calcium, aluminum, magnesium, and other trace elements. Additionally, grain sizes typically range from 40–800 micrometers—0.04 to 0.8 mm—with most falling between 45–100 micrometers. The resulting classification scheme is simple, elegant, and can be applied anywhere on the moon. "The lunar system is meant to be straightforward and involves measuring the chemistry of the soil and the average particle size. These two metrics give you nine different soil classes to start," said Cannon. "Then the system can be extended with 'tags' that add extra info. Lunar soils can be classified with measurements on the ground by rovers or astronauts, or with satellite data from orbit, for example." Luckily, this classification scheme has applications that go beyond base construction. Identifying soils based on their composition is an excellent way to scout resources needed for specific tasks. This includes soils rich in water ice, which could be used for everything from drinking water and irrigation to oxygen gas and propellant—liquid hydrogen and oxygen (LH2 and LOX). It also includes other mineral elements that are needed to manufacture infrastructure, vehicles, and assorted components. As Cannon summarized: "There's implications for all sorts of things we'd do with soils, like extracting metals and oxygen from them, but specifically for construction, we actually think a lot of structures will end up being made of the soil itself using 3D printing. Some soil types are going to melt at lower temperatures and have more even distributions of grain sizes, which would make for better feedstock in that case. Or, for any type of construction, how compressible the soil is will change from one soil class to another." More information: Kevin M. Cannon, "A lunar soil classification system for space resource utilization," Planetary and Space Science (2023). DOI: 10.1016/j.pss.2023.105780. Provided by Universe Today
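The two-metric scheme described above lends itself to a simple lookup. The sketch below shows one way such a classifier could be structured; the bin boundaries for iron content and mean grain size, and the tag names, are invented for illustration and are not the thresholds defined in Cannon's paper.

```python
# Illustrative sketch of a two-metric soil classifier in the spirit of the
# scheme described above: bulk iron content and mean grain size each fall
# into one of three bins, giving nine base classes, optionally extended
# with tags. The numeric bin edges here are hypothetical, not Cannon's.

def bin_iron(feo_wt_pct: float) -> str:
    """Bin bulk iron content (wt% FeO). Edges are assumptions."""
    if feo_wt_pct < 8:
        return "low-Fe"       # e.g. highlands-like, plagioclase-rich soils
    if feo_wt_pct < 16:
        return "mid-Fe"
    return "high-Fe"          # e.g. mare-like, basaltic soils

def bin_grain(mean_grain_um: float) -> str:
    """Bin mean grain size in micrometers. Edges are assumptions."""
    if mean_grain_um < 60:
        return "fine"
    if mean_grain_um < 100:
        return "medium"
    return "coarse"

def classify(feo_wt_pct: float, mean_grain_um: float, tags: list[str] | None = None) -> str:
    """Return one of nine base classes, plus optional descriptive tags."""
    base = f"{bin_iron(feo_wt_pct)}/{bin_grain(mean_grain_um)}"
    return base if not tags else f"{base} [{', '.join(tags)}]"

# Example: a mare-like soil measured by a rover, flagged with hypothetical tags.
print(classify(15.3, 72.0, tags=["ice-free", "surveyed-from-orbit"]))
# -> "mid-Fe/medium [ice-free, surveyed-from-orbit]"
```

The point of such a lookup is that a rover measurement, an astronaut sample, or an orbital estimate can all be reduced to the same short label before deciding whether a site is suitable feedstock for sintering or 3D printing.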
Environmental Science
SAN DIEGO — Relentless storms from a series of atmospheric rivers have saturated the steep mountains and bald hillsides scarred from wildfires along much of California’s long coastline, causing hundreds of landslides this month. So far the debris has mostly blocked roads and highways and has not harmed communities as in 2018 when mudslides roared through Montecito, killing 23 people and wiping out 130 homes. But more rain is in the forecast, increasing the threat. Experts say California has learned important lessons from the Montecito tragedy, and has more tools to pinpoint the hot spots and more basins and nets are in place to capture the falling debris before it hits homes. The recent storms are putting those efforts to the test as climate change produces more severe weather. Mudflow and damage to homes in Montecito, Calif., on Jan. 10, 2018. Matt Udkow / Santa Barbara County Fire Department via AP file. Why is California prone to mudslides? California has relatively young mountains from a geology standpoint, meaning much of its steep terrain is still in motion and covered in loose rocks and soil that can be sloughed off easily, especially when the ground is wet, according to geologists. Almost all of the state has received rainfall totals of 400% to 600% above average since Christmas, with some areas receiving as much as 30 inches of precipitation, causing massive flooding. The severe weather has killed at least 19 people since late December. Since New Year’s Eve, the California Department of Conservation’s landslide mapping team has documented more than 300 landslides. The state’s prolonged drought has made matters worse. Dan Shugar, an associate professor of geoscience at the University of Calgary, said drought can have a counterintuitive effect when combined with the incredible rainfall California has seen in recent days. “You’d think if the ground is dry it should be able to absorb a lot of water, but when ground becomes too dry, the permeability of the ground actually decreases,” he said. As water runs off the hardened soil, moving downward and picking up energy, it can begin carrying soil and debris away, he said. Added to that, wildfires have left some hillsides with little to no vegetation to hold the soil in place. What are the most vulnerable areas? The most vulnerable areas are hillsides that have burned in the past two to three years with communities below them, said Jeremy Lancaster, who leads the California Department of Conservation’s geological and landslide mapping team. That includes areas that recently burned in Napa, Mariposa, and Monterey counties, he said. In 2018, the deadly mudslides in Montecito occurred about a month after one of the largest fires in California’s history tore through the same area, charring 280,000 acres. Montecito is sandwiched between the Santa Ynez mountains and the Pacific coast. On the fifth anniversary of that tragedy, the entire community was ordered to evacuate on Jan. 9 as rains pummeled the area and debris blocked roads. Lancaster warned that the threat of landslides will linger long after the rains have subsided as the water seeps 50 to 100 feet into the soil, dislodging things. “They can occur weeks later, if not months,” he said. What can be done to protect communities? Lancaster said California has dramatically increased its efforts to identify hotspots since the Montecito mudslides.
His department continually updates its map so local communities are aware and can make decisions, including whether to evacuate an entire community. The state is also working on a system to better pinpoint how much rain might trigger a landslide. Marten Geertsema, who studies natural hazards and terrain analysis at the University of Northern British Columbia, said agencies use a variety of tools to gauge the likelihood of landslides in a given area, including terrain maps and lidar — pulsed light from lasers to penetrate foliage to see the ground. Then they can watch for early warnings, such as changes over time in photos taken from the air, or from satellites, or in data from GPS monitoring stations, tilt meters and other on-site instrumentation. What is the most effective defense against mudslides? One of the best ways to manage landslides is with debris basins — pits carved out of the landscape to catch material flowing downhill. But basins, which can require a lot of land, can also disrupt the natural ecosystem and lead to beaches needing to be replenished by collecting sediment that flows out of the canyons, according to experts. And they are costly, said Douglas Jerolmack, a professor of environmental science and mechanical engineering at the University of Pennsylvania. And if old debris isn’t removed, they can be overwhelmed by new landslides or mudslides. Some might also not be big enough to deal with future slides worsened by climate change, Jerolmack said. After the 2018 mudslides hit Montecito, the Los Angeles Times reported that debris basins above the community were undersized and hadn’t been sufficiently emptied. The tragedy galvanized the community, which raised millions to address the problem, said Patrick McElroy, a retired Santa Barbara fire chief who founded the nonprofit organization The Project for Resilient Communities. The organization hired an engineering company to map the canyons and installed debris nets. He said the recent storms put them to the test: One net measuring 25 feet tall filled nearly to capacity. McElroy said he’s still haunted by memories from 2018 but feels better, knowing that the community might be safer now. “I’m not over it yet. But to wake up, you know, the other day and see no injuries and no fatalities. I just can’t tell you how impressed I am,” he said of the nets. The best solution for the Montecito and Santa Barbara area is to have both nets and debris basins, according to Larry Gurrola, the engineering geologist hired by the organization. But nothing is cheap. Santa Barbara County spent $20 million on a new basin after 2018, while McElroy’s organization spent close to $2 million on installing the nets, a figure that included liability insurance and other fees. They have a five-year permit for the nets, which will be removed if it is not renewed. Gurrola said the alternative is more costly. With the recent storms, more than half of California’s 58 counties have been declared disaster areas and repairing the damage may cost more than $1 billion. “Most importantly these things protect the community and save lives,” he said.
Environmental Science
Supposedly eco-friendly paper cups may be just as toxic to the environment and human body as plastic ones, a study suggests. Scientists in Sweden found that the thin film of plastic coated onto the surfaces of disposable paper cups to keep their contents from seeping into the paper emits toxic substances. In experiments involving insects, the cups caused birth defects and other developmental damage, both when the paper cup was left to biodegrade in water and when it was left in sediment or dirt. The news comes just weeks after similar results from researchers in Belgium, who found that paper straws tested high for concentrations of toxic synthetic 'forever chemicals' (PFAS). Now, the Swedish researchers are calling for 'transparency requirements within the plastics industry,' hoping to force 'clear reporting of what chemicals all products contain, much like in the pharmaceutical industry.' Because paper easily absorbs water, other liquids, oils and fats from all kinds of food and beverages, manufacturers treat the paper used in food packaging with resistant plastic surface coatings. Today those plastics are frequently a type of renewable, biodegradable bioplastic called polylactide or PLA, which is made from corn, cassava or sugarcane instead of the fossil fuels used to make traditional plastics. Although PLA breaks down faster than these more common petroleum-based plastics, the researchers at Sweden's University of Gothenburg were able to show that chemicals embedded in the plastic could harm the larvae of harlequin flies. The species, a type of non-biting midge technically known as Chironomus riparius, has long been a standard candidate for preliminary toxicology and developmental genetic studies of this kind. 'We left paper cups and plastic cups in wet sediment and water for a few weeks and followed how the leached chemicals affected the larvae,' eco-toxicologist Bethanie Carney Almroth, a professor of environmental science at the University of Gothenburg, said in a statement. 'All of the mugs negatively affected the growth of mosquito larvae,' Almroth said. In addition to the PLA-coated paper cups, the researchers also tested plastic cups made of polypropylene (PP) and black polystyrene lids for the sake of comparison. For some of their PP cases, the researchers found developmental damage in harlequin larvae — but for many of the plastic-coated paper cups the larvae did not even reach a state of maturity where developmental genetic damage could be found. 'The low number of individuals reaching the 4th instar [phase],' the scientists wrote in their study for the journal Environmental Pollution, 'in itself is important.' Some of the chemicals found in these PLA and PP plastics include ultraviolet (UV) light stabilizers, flame retardants, plasticizers, and the detergent-like substance nonylphenol, which the US Environmental Protection Agency considers toxic. As with the paper straws, PFAS was again a concerning presence, as were, the researchers said, 'hundreds of different chemicals that can migrate into foodstuffs.' These chemicals, they report, could include endocrine disrupters that can affect sexual reproduction, growth and other bodily functions, as well as chemicals that can accumulate in the body, cause cancer and have other toxic effects. 'Some chemicals in plastics are known to be toxic,' Almroth stressed in her statement, 'others we lack knowledge about.' Paper packaging also presents 'a potential health hazard,' given its own plastic content, she noted, 'and it's becoming more common.' 
'When disposable products arrived on the market after the Second World War,' Almroth said, 'large campaigns were conducted to teach people to throw the products away, it was unnatural to us!' She and her co-authors hope that similar public awareness training can return society to the older, less wasteful habits. 'We need to shift back and move away from disposable life styles,' Almroth said. 'It is better if you bring your own mug when buying take away coffee.' 'Or by all means, take a few minutes, sit down and drink your coffee from a porcelain mug,' she said.
Environmental Science
Everywhere scientists have searched for microplastics, they have discovered their presence—in our food, water, air, and even some human body parts. However, the examination of our innermost organs, which are not directly exposed to the environment, remains limited. A new pilot study involving individuals who underwent heart surgery reveals that microplastics are present in many heart tissues. The study, recently published in the journal Environmental Science & Technology, also reports evidence suggesting that microplastics were unexpectedly introduced during the procedures. Understanding Microplastics Microplastics are plastic fragments less than 5 millimeters wide, or about the size of a pencil eraser. Research has shown that they can enter the human body through mouths, noses, and other body cavities with connections to the outside world. Yet many organs and tissues are fully enclosed inside a person’s body, and scientists lack information on their potential exposure to, and effects from, microplastics. So, Kun Hua, Xiubin Yang, and colleagues wanted to investigate whether these particles have entered people’s cardiovascular systems through indirect and direct exposures. Research Findings In a pilot experiment, the researchers collected heart tissue samples from 15 people during cardiac surgeries, as well as pre- and post-operation blood specimens from half of the participants. Then the team analyzed the samples with laser direct infrared imaging and identified particles 20 to 500 micrometers wide made from eight types of plastic, including polyethylene terephthalate, polyvinyl chloride, and poly(methyl methacrylate). This technique detected tens to thousands of individual microplastic pieces in most tissue samples, though the amounts and materials varied between participants. All of the blood samples also contained plastic particles, but after surgery their average size decreased, and the particles came from more diverse types of plastics. Although the study had a small number of participants, the researchers say they have provided preliminary evidence that various microplastics can accumulate and persist in the heart and its innermost tissues. They add that the findings show how invasive medical procedures are an overlooked route of microplastic exposure, providing direct access to the bloodstream and internal tissues. More studies are needed to fully understand the effects of microplastics on a person’s cardiovascular system and their prognosis after heart surgery, the researchers conclude. Reference: “Detection of Various Microplastics in Patients Undergoing Cardiac Surgery” by Yunxiao Yang, Enzehua Xie, Zhiyong Du, Zhan Peng, Zhongyi Han, Linyi Li, Rui Zhao, Yanwen Qin, Mianqi Xue, Fengwang Li, Kun Hua and Xiubin Yang, 13 July 2023, Environmental Science & Technology. DOI: 10.1021/acs.est.2c07179 The authors acknowledge funding from the National Natural Science Foundation of China and the Beijing Natural Science Foundation.
Environmental Science
This March 2022 photo provided by the Conservancy of Southwest Florida shows biologists Ian Easterling, left, and Ian Bartoszek with a 14-foot female Burmese python captured in mangrove habitat of southwestern Florida while tracking a male scout snake. (Conservancy of Southwest Florida via AP) NAPLES, Fla. (AP) — A team of biologists recently hauled in the heaviest Burmese python ever captured in Florida, officials said. The female python weighed in at 215 pounds (98 kilograms), was nearly 18 feet long (5 meters) and had 122 developing eggs, the Conservancy of Southwest Florida said in a news release. The team used radio transmitters implanted in male “scout” snakes to study python movements, breeding behaviors and habitat use, said Ian Bartoszek, wildlife biologist and environmental science project manager for the conservancy’s program. “How do you find the needle in the haystack? You could use a magnet, and in a similar way our male scout snakes are attracted to the biggest females around,” Bartoszek said. The team used a scout snake named Dionysus — or Dion for short — in an area of the western Everglades. “We knew he was there for a reason, and the team found him with the largest female we have seen to date.” Biologist Ian Easterling and intern Kyle Findley helped capture the female snake and haul it through the woods to the field truck. A necropsy also found hoof cores in the snake’s digestive system, meaning that an adult white-tailed deer was its last meal. National Geographic documented the discovery, highlighting the continued impact of the invasive pythons, which are known for rapid reproduction and depletion of surrounding native wildlife. Bartoszek said removal of female pythons plays a critical role in disrupting the breeding cycle. “This is the wildlife issue of our time for southern Florida,” he said. Since the conservancy’s python program began in 2013, they’ve removed over 1,000 pythons from approximately 100 square miles (25,900 hectares) in southwest Florida. Over that stretch, necropsies have found dozens of white-tailed deer inside Burmese pythons. Data researchers at the University of Florida have documented 24 species of mammals, 47 species of birds and 2 reptile species from pythons’ stomachs. Prior to the recent discovery, the largest female removed through the conservancy’s program weighed 185 pounds (84 kilograms) and was the heaviest python captured at the time in Florida, officials said. The state’s python removal program runs for two weeks in August. Participants compete for prizes, including $2,500 for capturing the most pythons. Last year’s challenge involved more than 600 people from 25 states.
Environmental Science
Father is cooking on a gas stove holding a child in his arms. (Getty Images) Gas stoves are responsible for 12.7% of U.S. childhood asthma cases, a new study in the peer-reviewed International Journal of Environmental Research and Public Health finds. That proportion is much higher in states such as Illinois (21.1%), California (20.1%) and New York (18.8%), where gas stoves are more prevalent. “When the gas stove is turned on, and when it’s burning at that hot temperature, it releases a number of air pollutants,” Brady Seals, a coauthor of the study and the carbon-free buildings manager at the energy policy think tank RMI, told Yahoo News. “So these are things like particulate matter, carbon monoxide and nitrogen dioxide, along with others. So, for example, nitrogen dioxide is a known respiratory irritant. And the EPA, in 2016, said that short-term exposure to NO2 causes respiratory effects like asthma attacks.” The study was based on a meta-analysis from 2013 on the correlation between gas stoves and childhood asthma, which found that living in a home with a gas stove corresponds to a 42% higher chance of current childhood asthma. Combining that with data on the prevalence of gas stoves, which are present in 35% of U.S. homes, the researchers estimated how many more childhood asthma cases exist because of their presence. As a result, the researchers found that 650,000 American children have asthma because of gas stoves in their home. A girl is using an asthma inhaler indoors. (Getty Images) The new study follows other research showing gas stoves are harmful to indoor air quality. In 2020, UCLA public health researchers commissioned by the Sierra Club found that 90% of homes have unhealthy levels of nitrogen dioxide pollution after cooking with gas for one hour. A 2020 study by RMI found homes with gas stoves have 50% to over 400% higher nitrogen dioxide concentrations than homes with electric stoves. When burned, gas also emits harmful substances such as carbon monoxide, nitrogen oxide, particulate matter, nitrogen dioxide, and formaldehyde, which is a known carcinogen. In addition to the indoor air pollution at issue in this study, home gas use also contributes to outdoor air pollution, another primary driver of asthma. The same toxins that harm children’s lungs indoors contribute to the formation of ground-level ozone, also known as smog, which is toxic. In 2019, the Institute for Health Metrics and Evaluation estimated that ozone is responsible for 11% of deaths from chronic respiratory disease. And natural gas is mostly methane, which is also an ingredient in smog formation. Methane is also a very powerful greenhouse gas, accounting for 11% of planet-warming emissions, according to the Environmental Protection Agency. Global warming also worsens air pollution, as hotter weather contributes to smog formation. A pot on a gas stove. (Getty Images) Increasingly, cities looking to reduce their greenhouse gas emissions are banning the installation of gas appliances in new construction. Liberal bastions such as Berkeley, San Francisco, Seattle and New York City have adopted such measures. Researchers are also discovering that gas stoves and ovens may pollute indoor air when they’re not even in use. A January 2021 study in the journal Environmental Science & Technology found that gas stoves and ovens frequently leak, and it estimated that in the U.S. 
their leaked methane emissions are equivalent to the carbon emissions of half a million cars. The UCLA study estimated that in California alone, if all residential gas appliances were transitioned to clean-energy electric appliances, the reduction of particulate pollution and nitrogen oxides would result in 354 fewer annual deaths and an even greater reduction in bronchitis. The researchers in the newest study recommend two approaches to reducing indoor pollution from gas stoves: either improving ventilation or replacing them with clean alternatives such as electric stoves. They lean heavily towards the latter. Electric ceramic stove inside the kitchen. (Getty Images) “Notably, ventilation is associated with the reduction, but not elimination, of childhood asthma risk,” they write. Even many stoves with range hoods, they note, do not vent outdoors — which defeats the purpose — and people often forget to turn their vent on. Last month, eight senators and 12 members of the House of Representatives, all Democrats, signed a letter to the Consumer Product Safety Commission to take action to protect consumers from gas stove pollution. The letter did not call for banning gas stoves, instead asking for regulation to require ventilation and performance standards to limit leakage. CPSC Commissioner Richard Trumka, Jr. said in a subsequent webinar that a ban on gas stoves would be a “real possibility.”
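The headline 12.7% figure is a population attributable fraction, and the two numbers quoted earlier in the article (a 42% higher chance of asthma in homes with gas stoves, and a 35% household prevalence of gas stoves) are enough to reproduce it approximately. The short calculation below uses the standard Levin formula; the study's state-level figures apply the same logic with state-specific prevalence, and its exact adjustments may differ slightly.

```python
# Approximate reproduction of the population attributable fraction (PAF)
# reported in the asthma study, using Levin's formula:
#   PAF = p * (RR - 1) / (1 + p * (RR - 1))
# where p is exposure prevalence and RR the relative risk (here the
# 42% higher chance of childhood asthma is treated as an approximate RR).

def attributable_fraction(prevalence: float, relative_risk: float) -> float:
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

national = attributable_fraction(prevalence=0.35, relative_risk=1.42)
print(f"National PAF: {national:.1%}")   # ~12.8%, close to the reported 12.7%

# States with more gas stoves get a larger share, e.g. a ~60% prevalence
# (an illustrative figure, not the study's exact number for any state):
print(f"High-prevalence state: {attributable_fraction(0.60, 1.42):.1%}")  # ~20%
```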
Environmental Science
This story originally appeared in Hakai and is part of the Climate Desk collaboration. In the stark sheep pastures of Ireland’s Maumturk Mountains, just a few gnarled trees cling to the hillside, twisted and bent by decades facing down the raw Atlantic winds. Now this unlikely landscape, where native woodlands have been gone for almost 1,000 years, is at the heart of a bold plan to bring back long-forgotten rainforests. The nonprofit Hometree, started by a group of surfers, wants to buy 800 hectares of land across eight sites in western Ireland to reforest over the next four years. Then it wants to encourage neighboring farmers to pledge 800 hectares more—yielding a total area just shy of five times that of New York’s Central Park. The hope is that bringing native trees back to parts of this landscape will inspire a much wider reforestation. But the surfers face steep challenges on a coastline where conservation and rewilding can provoke suspicion and anxiety. For Hometree cofounder and former pro surfer Matt Smith, bringing native forests back to this coast is a logical response to the climate crisis and biodiversity loss. “We’re going for it as if it was an emergency,” he says. “We’re just responding with what I believe is a realistic response based on the meaning of the word emergency.” Forests of oak, birch, and pine once covered the mist-shrouded hills of Ireland’s west coast. But beginning about 6,000 years ago, waves of settlers cut, burned, and sent their flocks to graze on the woods, transforming this once lush, temperate rainforest into a terrain of treeless bogs, heaths, and pastures. Rainforest fragments cling to the landscape where deer and sheep can’t reach—in high mountain cliffs and deep-sided valleys. These woods boast a huge diversity of moisture-loving ferns, mosses, and lichens. Hometree bought its first site last year, a 113-hectare former sheep farm along the salmon-rich Bealnabrack River in the Maumturk Mountains. It’s largely denuded of trees: “There’s probably about 10 or 12 native trees on the whole 280 acres,” says Hometree’s project lead, Ray Ó Foghlú. A former teacher who returned to college to study environmental science, Ó Foghlú met Smith through the surfing scene. Their group funded the $700,000 purchase of the site through a mix of small donations and a loan from a philanthropist. Pockets of western Ireland’s thick woodlands persist, especially in harder-to-reach places like in deep valleys. Photograph: Ray Ó Foghlú Hometree’s restoration effort, says Ó Foghlú, will be based on the advice of ecologists and foresters. But the tentative vision is to return pine forests to the upper slopes of the valley, willow and alder to the floodplain, and oak and birch to the valley sides. The group will also work to regenerate blanket bogs and protect species-rich grasslands. But reforestation won’t be easy. Parts of these hills are so bereft of native trees that woods can’t easily regenerate. Centuries of heavy rainfall have also leached nutrients from the soil. James Rainey, an ecologist at Trees for Life, a nonprofit working on rewilding Scottish forests, says that while tree growth in such environments can be slow at first, woods will regenerate if you reduce the grazing pressure, make sure vegetation doesn’t catch fire too often, and ensure there is a local source of tree seeds. “If you have those criteria, you’re going to get recovery,” he says. It’s about more than trees though. 
Rainey says specialist wildflowers and lichens may need to be reintroduced to piece the ecosystem back together. He also stresses the need to protect peatlands and other sensitive habitats in these hills. But he believes restoring temperate rainforests is vital. Globally, he says, they can only thrive in narrow climatic bands like the west coast of North America, the south of Chile, and the west of Ireland and Scotland. “We’ve got a massive challenge ahead to restore this ecosystem,” he says. Yet Hometree’s wider vision could prove a tough sell in a land where native woodlands have been gone for so long. “Three to 5,000 years in most parts of Ireland,” says Ó Foghlú. “They’re not really part of our culture. We’ve become pastoralists, and that’s a reality we have to deal with.” There is also little financial incentive for farmers to restore forests. Many farmers, says Ó Foghlú, see giving land over to reforestation as permanently taking it out of economic productivity. Farmers don’t see how their children will be able to earn a living from native woodlands in the future either, says Ó Foghlú. “And as of right now, you’re left with nothing to say to them.” Many of Ireland’s hill farmers are also skeptical of conservation measures in general. In the 1990s, large tracts of Ireland’s western mountains were designated as conservation zones. Farmers say this was done with little input from landowners, that it restricted farming activity and devalued land, and that the payments offered in return have decreased over time. “The designations and the way they’ve been applied in Ireland has been fairly toxic,” says Vincent Roddy, president of the Irish Natura and Hill Farmers Association. “It’s probably done more to undermine the objectives as regards biodiversity than anything else. … And this is why farmers are very suspicious of anything new.” But Hometree hopes to make forest restoration worthwhile for farmers. The organization is looking to raise at least $13 million from government, corporate, and philanthropic sources to fund its 1,600-hectare ambitions. It says it will put $2.7 million of this into a fund to reward farmers for protecting and restoring woodlands, run community events and school programs, and pay for visitor facilities. Ó Foghlú imagines a future where farming and forests thrive side by side. He envisages healthy corridors of woodland providing shelter and forage to grazing animals on hill farms, safeguarding water quality, and connecting larger wooded areas. “I do think you can sell the concept of these woodlands on the idea that there are benefits to the farming systems, too,” he says. If Hometree can demonstrate this along the Bealnabrack River and at its other project sites, the twin goals of thriving forests and thriving farms in these mountains might not seem so far apart.
Environmental Science
Picture a raft of sea ice in the Arctic Ocean, and you’re probably imagining a pristine marriage of white and blue. But during summertime, below the surface, something much greener and goopier lurks. A type of algae, Melosira arctica, grows in large, dangling masses and curtains that cling to the underside of Arctic sea ice, mostly obscured from a bird’s eye view. The algae, made up of long strings and clumps of single-celled organisms called diatoms, is an essential player in the polar ecosystem. It’s food for zooplankton, which in turn nourish everything from fish to birds to seals to whales—either directly or through an indirect, upwards cascade along the Pac-Man-esque chain of life. In the deep ocean, benthic critters also rely on making meals out of blobs of sunken algae. By one assessment, M. arctica accounted for about 45% of Arctic primary production in 2012. In short: the algae supports the entire food web. But in the hidden, slimy world of under-ice scum, something else is abundant: microplastics. Researchers have documented alarmingly high concentrations of teeny tiny plastic particles inside samples of M. arctica, according to a new study published Friday in the journal Environmental Science & Technology. The work adds to the growing body of evidence that microplastics are truly everywhere: in freshly fallen Antarctic snow, the air, baby poop, our blood—everywhere. All 12 samples of algae the scientists collected from ice floes contained microplastics. In total, they counted about 400 individual plastic bits in the algae they examined. Extrapolating that to a concentration by volume, the researchers estimate that every cubic meter of M. arctica contains 31,000 microplastic particles—greater than 10 times the concentration they detected in the surrounding sea water. It could be bad news for the algae, the organisms that rely on it, and even the climate. Though microplastics are seemingly ubiquitous, the findings were still doubly surprising to Melanie Bergmann, the lead study author and a biologist at the Alfred Wegener Institute in Bremerhaven, Germany. In an email, she told Gizmodo she hadn’t expected to document such high levels of microplastics in M. arctica, nor for those concentrations to be so much higher than what was in the water. But in retrospect, the gummy nature of the algae probably explains it. Sea ice itself contains a lot of microplastics (up to millions of particles per cubic meter, depending on location, according to earlier research Bergmann worked on). Sea ice both sequesters plastic from the ocean through its freeze/melt cycle and collects the pollution from above as it is deposited by wind currents. In turn, that sea ice contamination likely trickles down to the algae. “When the sea ice melts in spring, microplastic probably becomes trapped [by] their sticky surface,” Bergmann hypothesizes. And both ice floes and their attached algal masses move around, scooping up plastic particles as they follow ocean currents. Within the Arctic marine ecosystem, previous research has found the highest levels of microplastics in seafloor sediments, the biologist further explained. The algae cycle may explain a large part of those plastic deposits. By getting trapped in a gunky web of M. arctica filaments, the minuscule bits of manmade trash are actually hitching an express ride to the bottom of the ocean. Large chunks of algae sink much faster than tiny bits of debris on their own, which are more likely to remain suspended in the water column. 
So, on the bright side, the new study solves something of a mystery. But the benefit of novel knowledge may be the only silver lining here. Because the algae is the scaffolding of an Arctic food web, everything that eats it (or eats something that eats it) is almost certainly ingesting all of the plastic bits contained within. The health impacts of microplastics aren’t yet well established, but some early studies suggest they’re probably not good for people or wildlife. In this way, M. arctica’s sticky affinity for plastic could be slowly poisoning the entire ecosystem. Then, there’s the way the pollution could be hurting the algae itself. Laboratory experiments of other algal species have shown that microplastics can hinder an organism’s ability to photosynthesize and damage algal cells. “We don’t yet know how widely this occurs amongst different algae and if this also affects ice algae,” said Bergmann; the impact of microplastics seems to vary a lot by species, she added. But in the era of climate change, any additional stress on already rapidly changing Arctic systems is unwelcome. And, if algae is indeed less able to photosynthesize when it’s stuffed with plastic, then it’s also less able to sequester carbon and less able to mitigate climate change—a small but potentially significant Arctic feedback loop, she explained. For now, all of this is still a question mark. More research is needed to understand how microplastics travel through the food web and what they do to the organisms that ingest them (Bergmann is hoping to conduct future studies specifically on the deep-sea creatures living among the plastic-inundated sediments). But if scientific experiments don’t soon reveal the consequences of our plastic dependence, time probably will. “As microplastic concentrations are increasing, we will see an increase in its effects. In certain areas or species, we may cross critical thresholds,” Bergmann said. “Some scientists think that we have already.”
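For readers curious how a per-volume figure like the 31,000 particles per cubic meter mentioned earlier is typically derived from raw particle counts, here is a minimal sketch. The sample volumes and the water-column count below are invented placeholders; only the arithmetic, a count divided by sampled volume and then a ratio against the surrounding water, mirrors the kind of extrapolation the study describes.

```python
# Minimal sketch of extrapolating raw microplastic counts to a concentration
# per cubic meter and comparing against surrounding sea water. The counts and
# sample volumes below are hypothetical placeholders, not the study's data.

def particles_per_m3(count: int, sample_volume_liters: float) -> float:
    """Convert a raw particle count in a sample to particles per cubic meter."""
    return count / (sample_volume_liters / 1000.0)  # 1 m^3 = 1000 L

# Hypothetical pooled algae samples: 400 particles counted in ~13 L of algae.
algae_conc = particles_per_m3(count=400, sample_volume_liters=13.0)

# Hypothetical surrounding sea water: 75 particles counted in 30 L.
water_conc = particles_per_m3(count=75, sample_volume_liters=30.0)

print(f"Algae: {algae_conc:,.0f} particles/m^3")     # ~30,800
print(f"Water: {water_conc:,.0f} particles/m^3")     # ~2,500
print(f"Enrichment: {algae_conc / water_conc:.1f}x")  # ~12x, i.e. more than 10x
```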
Environmental Science