Fast calculation dials in better batteries

A simpler and more efficient way to predict performance will lead to better batteries, according to Rice University engineers.

That their method is 100,000 times faster than current modeling techniques is a nice bonus.

The analytical model developed by materials scientist Ming Tang and graduate student Fan Wang of Rice University’s Brown School of Engineering doesn’t require complex numerical simulation to guide the selection and design of battery components and how they interact.

The simplified model developed at Rice — freely accessible online — does the heavy lifting with an accuracy within 10% of more computationally intensive algorithms. Tang said it will allow researchers to quickly evaluate the rate capability of batteries that power the planet.

The results appear in the open-access journal Cell Reports Physical Science.

There was a clear need for the updated model, Tang said.

“Almost everyone who designs and optimizes battery cells uses a well-established approach called P2D (for pseudo-two dimensional) simulations, which are expensive to run,” Tang said. “This especially becomes a problem if you want to optimize battery cells, because they have many variables and parameters that need to be carefully tuned to maximize the performance.

“What motivated this work is our realization that we need a faster, more transparent tool to accelerate the design process, and offer simple, clear insights that are not always easy to obtain from numerical simulations,” he said.

Battery optimization generally involves what the paper calls a “perpetual trade-off” between energy (the amount a battery can store) and power density (the rate at which that energy can be released), both of which depend on the materials, their configurations and internal structural features such as porosity.

“There are quite a few adjustable parameters associated with the structure that you need to optimize,” Tang said. “Typically, you need to make tens of thousands of calculations and sometimes more to search the parameter space and find the best combination. It’s not impossible, but it takes a really long time.”

He said the Rice model could be easily implemented in such common software as MATLAB and Excel, and even on calculators.

To test the model, the researchers let it search for the optimal porosity and thickness of an electrode in common full- and half-cell batteries. In the process, they discovered that electrodes with “uniform reaction” behavior such as nickel-manganese-cobalt and nickel-cobalt-aluminum oxide are best for applications that require thick electrodes to increase the energy density.
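
For a feel of what such a parameter search looks like in practice, here is a minimal Python sketch of a brute-force sweep over electrode porosity and thickness. The `rate_capability` function is a hypothetical placeholder standing in for an analytical expression; the paper's actual closed-form model is not reproduced here.

```python
# Illustrative sketch only: a brute-force sweep over electrode porosity and
# thickness, the kind of search a fast analytical model makes cheap.
import numpy as np

def rate_capability(porosity, thickness_um, c_rate):
    """Hypothetical stand-in for an analytical rate-capability estimate.

    Returns the fraction (0-1) of theoretical capacity accessible at the
    given C-rate; thicker, denser electrodes are penalized more strongly.
    """
    transport_penalty = c_rate * (thickness_um / 100.0) ** 2 * (1.0 - porosity) ** 1.5
    return 1.0 / (1.0 + transport_penalty)

def areal_energy(porosity, thickness_um, c_rate):
    """Usable areal capacity: active-material loading times accessible fraction."""
    loading = (1.0 - porosity) * thickness_um          # arbitrary units
    return loading * rate_capability(porosity, thickness_um, c_rate)

# Grid search: with a closed-form model, tens of thousands of evaluations take
# a fraction of a second instead of many hours of P2D simulation.
porosities = np.linspace(0.2, 0.6, 100)
thicknesses = np.linspace(20, 200, 100)                # micrometres
best = max(
    ((p, t, areal_energy(p, t, c_rate=2.0)) for p in porosities for t in thicknesses),
    key=lambda x: x[2],
)
print(f"best porosity={best[0]:.2f}, thickness={best[1]:.0f} um, energy={best[2]:.1f}")
```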

They also found that battery half-cells (with only one electrode) have inherently better rate capability, meaning their performance is not a reliable indicator of how electrodes will perform in the full cells used in commercial batteries.

The study is related to the Tang lab’s attempts at understanding and optimizing the relationship between microstructure and performance of battery electrodes, the topic of several recent papers that showed how defects in cathodes can speed lithium absorption and how lithium cells can be pushed too far in the quest for speed.

Story Source:

Materials provided by Rice University. Note: Content may be edited for style and length.

Contagion model predicts flooding in urban areas

Inspired by the same modeling and mathematical laws used to predict the spread of pandemics, researchers at Texas A&M University have created a model to accurately forecast the spread and recession of floodwaters in urban road networks. The result is a simple yet powerful mathematical approach to a complex problem.

“We were inspired by the fact that the spread of epidemics and pandemics in communities has been studied by people in health sciences and epidemiology and other fields, and they have identified some principles and rules that govern the spread process in complex social networks,” said Dr. Ali Mostafavi, associate professor in the Zachry Department of Civil and Environmental Engineering. “So we ask ourselves, are these spreading processes the same for the spread of flooding in cities? We tested that, and surprisingly, we found that the answer is yes.”

The findings of this study were recently published in Scientific Reports.

The contagion model, Susceptible-Exposed-Infected-Recovered (SEIR), is used to mathematically model the spread of infectious diseases. In relation to flooding, Mostafavi and his team integrated the SEIR model with the network spread process in which the probability of flooding of a road segment depends on the degree to which the nearby road segments are flooded.

In the context of flooding, susceptible is a road that can be flooded because it is in a flood plain; exposed is a road that has flooding due to rainwater or overflow from a nearby channel; infected is a road that is flooded and cannot be used; and recovered is a road where the floodwater has receded.
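
As a rough illustration of how such a network contagion process can be simulated (this is a generic sketch, not the authors' model, and the transition probabilities and toy network below are made-up values):

```python
# SEIR-style spread on a road network: a road's chance of becoming exposed
# grows with the fraction of its neighbouring segments that are flooded.
import random
import networkx as nx

S, E, I, R = "susceptible", "exposed", "infected", "recovered"

def step(graph, state, p_expose=0.4, p_flood=0.5, p_recede=0.2):
    new_state = dict(state)
    for road in graph.nodes:
        neighbours = list(graph.neighbors(road))
        flooded_frac = sum(state[n] == I for n in neighbours) / max(len(neighbours), 1)
        if state[road] == S and random.random() < p_expose * flooded_frac:
            new_state[road] = E            # water reaching the segment
        elif state[road] == E and random.random() < p_flood:
            new_state[road] = I            # segment flooded and impassable
        elif state[road] == I and random.random() < p_recede:
            new_state[road] = R            # floodwater has receded
    return new_state

# Toy road network: a grid of segments with a few initially flooded roads.
roads = nx.grid_2d_graph(20, 20)
state = {road: S for road in roads.nodes}
for seed in random.sample(list(roads.nodes), 5):
    state[seed] = I

for t in range(24):                        # simulate 24 time steps
    state = step(roads, state)
    print(f"t={t:2d}  flooded roads={sum(v == I for v in state.values())}")
```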

The research team verified the model’s use with high-resolution historical data of road flooding in Harris County during Hurricane Harvey in 2017. The results show that the model can monitor and predict the evolution of flooded roads over time.

“The power of this approach is it offers a simple and powerful mathematical approach and provides great potential to support emergency managers, public officials, residents, first responders and other decision makers for flood forecast in road networks,” Mostafavi said.

The proposed model can achieve decent precision and recall for the spatial spread of the flooded roads.

“If you look at the flood monitoring system of Harris County, it can show you if a channel is overflowing now, but they’re not able to predict anything about the next four hours or next eight hours. Also, the existing flood monitoring systems provide limited information about the propagation of flooding in road networks and the impacts on urban mobility. But our models, and this specific model for the road networks, is robust at predicting the future spread of flooding,” he said. “In addition to flood prediction in urban networks, the findings of this study provide very important insights about the universality of the network spread processes across various social, natural, physical and engineered systems; this is significant for better modeling and managing cities, as complex systems.”

The only limitation to this flood prediction model is that it cannot identify where the initial flooding will begin, but Mostafavi said there are other mechanisms in place such as sensors on flood gauges that can address this.

“As soon as flooding is reported in these areas, we can use our model, which is very simple compared to hydraulic and hydrologic models, to predict the flood propagation in future hours. The forecast of road inundations and mobility disruptions is critical to inform residents to avoid high-risk roadways and to enable emergency managers and responders to optimize relief and rescue in impacted areas based on predicted information about road access and mobility. This forecast could be the difference between life and death during crisis response,” he said.

Civil engineering doctoral student and graduate research assistant Chao Fan led the analysis and modeling of the Hurricane Harvey data, along with Xiangqi (Alex) Jiang, a graduate student in computer science, who works in Mostafavi’s UrbanResilience.AI Lab.

“By doing this research, I realize the power of mathematical models in addressing engineering problems and real-world challenges. This research expands my research capabilities and will have a long-term impact on my career,” Fan said. “In addition, I am also very excited that my research can contribute to reducing the negative impacts of natural disasters on infrastructure services.”

Story Source:

Materials provided by Texas A&M University. Original written by Alyson Chapman. Note: Content may be edited for style and length.

This online calculator can predict your stroke risk

Doctors can predict patients’ risk for ischemic stroke based on the severity of their metabolic syndrome, a conglomeration of conditions that includes high blood pressure, abnormal cholesterol levels and excess body fat around the abdomen and waist, a new study finds.

The study found that stroke risk increased consistently with metabolic syndrome severity even in patients without diabetes. Doctors can use this information — and a scoring tool developed by a UVA Children’s pediatrician and his collaborator at the University of Florida — to identify patients at risk and help them reduce that risk.

“We had previously shown that the severity of metabolic syndrome was linked to future coronary heart disease and type 2 diabetes,” said UVA’s Mark DeBoer, MD. “This study showed further links to future ischemic strokes.”

Ischemic Stroke Risk

DeBoer developed the scoring tool, an online calculator to assess the severity of metabolic syndrome, with Matthew J. Gurka, PhD, of the Department of Health Outcomes and Biomedical Informatics at the University of Florida, Gainesville. The tool is available for free at https://metscalc.org/.

To evaluate the association between ischemic stroke and metabolic syndrome, DeBoer and Gurka reviewed data on more than 13,000 participants in prior studies and their stroke outcomes. Among that group, there were 709 ischemic strokes over a mean follow-up period of 18.6 years. (Ischemic strokes are caused when blood flow to the brain is obstructed by blood clots or clogged arteries. Hemorrhagic strokes, on the other hand, are caused when blood vessels rupture.)

The researchers used their tool to calculate “Z scores” measuring the severity of metabolic syndrome among the study participants. They could then analyze the association between metabolic syndrome and ischemic stroke risk.
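
As a rough illustration of the idea of a severity z-score, the sketch below simply averages standardized component values. The real calculator at https://metscalc.org/ uses population-specific weightings that are not reproduced here, and every reference mean and standard deviation in the sketch is a placeholder.

```python
# Simplified illustration only: averaging standardized z-scores of the five
# metabolic syndrome components. Reference means/SDs are made-up placeholders.
from statistics import NormalDist

# (patient value, placeholder population mean, placeholder population SD)
components = {
    "waist_cm":        (104, 98, 14),
    "triglycerides":   (160, 130, 60),
    "hdl_inverted":    (-38, -50, 13),   # lower HDL is worse, so the sign is flipped
    "systolic_bp":     (138, 122, 15),
    "fasting_glucose": (105, 96, 12),
}

z_scores = {k: (v - mu) / sd for k, (v, mu, sd) in components.items()}
severity_z = sum(z_scores.values()) / len(z_scores)
percentile = NormalDist().cdf(severity_z) * 100

print("component z-scores:", {k: round(z, 2) for k, z in z_scores.items()})
print(f"overall severity z = {severity_z:.2f} (~{percentile:.0f}th percentile)")
```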

The subgroup with the highest association between metabolic syndrome and risk for ischemic stroke was white women, the researchers found. In this group, the research team was able to identify relationships between the individual contributors to metabolic syndrome, such as high blood pressure, and stroke risk.

The researchers note that race and sex did not seem to make a major difference in stroke risk overall, and they caution that the increased risk seen in white women could be the result of chance alone. “Nevertheless,” they write in a new scientific article outlining their findings, “these results are notable enough that they may warrant further study into race and sex differences.”

The overall relationship between metabolic syndrome severity and stroke risk was clear, however. And this suggests people with metabolic syndrome can make lifestyle changes to reduce that risk. Losing weight, exercising more, choosing healthy foods — all can help address metabolic syndrome and its harmful effects.

DeBoer hopes that the tool he and Gurka developed will help doctors guide patients as they seek to reduce their stroke risk and improve their health and well-being.

“In case there are still individuals out there debating whether to start exercising or eating a healthier diet,” DeBoer said, “this study provides another wake-up call to motivate us all toward lifestyle changes.”

Story Source:

Materials provided by University of Virginia Health System. Note: Content may be edited for style and length.

Machine learning can predict market behavior

Machine learning can assess the effectiveness of mathematical tools used to predict the movements of financial markets, according to new Cornell research based on the largest dataset ever used in this area.

The researchers’ model could also predict future market movements, an extraordinarily difficult task because of markets’ massive amounts of information and high volatility.

“What we were trying to do is bring the power of machine learning techniques to not only evaluate how well our current methods and models work, but also to help us extend these in a way that we never could do without machine learning,” said Maureen O’Hara, the Robert W. Purcell Professor of Management at the SC Johnson College of Business.

O’Hara is co-author of “Microstructure in the Machine Age,” published July 7 in The Review of Financial Studies.

“Trying to estimate these sorts of things using standard techniques gets very tricky, because the databases are so big. The beauty of machine learning is that it’s a different way to analyze the data,” O’Hara said. “The key thing we show in this paper is that in some cases, these microstructure features that attach to one contract are so powerful, they can predict the movements of other contracts. So we can pick up the patterns of how markets affect other markets, which is very difficult to do using standard tools.”

Markets generate vast amounts of data, and billions of dollars are at stake in mining that data for patterns to shed light on future market behavior. Companies on Wall Street and elsewhere employ various algorithms, examining different variables and factors, to find such patterns and predict the future.

In the study, the researchers used what’s known as a random forest machine learning algorithm to better understand the effectiveness of some of these models. They assessed the tools using a dataset of 87 futures contracts — agreements to buy or sell assets in the future at predetermined prices.

“Our sample is basically all active futures contracts around the world for five years, and we use every single trade — tens of millions of them — in our analysis,” O’Hara said. “What we did is use machine learning to try to understand how well microstructure tools developed for less complex market settings work to predict the future price process both within a contract and then collectively across contracts. We find that some of the variables work very, very well — and some of them not so great.”
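
The sketch below shows the general shape of such an exercise using scikit-learn's random forest; the feature names and data are synthetic placeholders rather than the study's futures dataset or feature set.

```python
# Hedged sketch: a random forest predicting the direction of the next price
# move from microstructure-style features, on synthetic data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 10_000
X = pd.DataFrame({
    "order_imbalance": rng.normal(size=n),       # signed volume imbalance
    "bid_ask_spread":  rng.exponential(1.0, size=n),
    "trade_intensity": rng.poisson(5, size=n),
    "vpin_proxy":      rng.uniform(0, 1, size=n),
})
# Synthetic label: next-interval price direction loosely tied to imbalance.
y = (X["order_imbalance"] + 0.3 * rng.normal(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, shuffle=False)
model = RandomForestClassifier(n_estimators=300, min_samples_leaf=50, random_state=0)
model.fit(X_train, y_train)

print("out-of-sample accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("feature importances:", dict(zip(X.columns, model.feature_importances_.round(3))))
```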

Machine learning has long been used in finance, but typically as a so-called “black box” — in which an artificial intelligence algorithm uses reams of data to predict future patterns but without revealing how it makes its determinations. This method can be effective in the short term, O’Hara said, but sheds little light on what actually causes market patterns.

“Our use for machine learning is: I have a theory about what moves markets, so how can I test it?” she said. “How can I really understand whether my theories are any good? And how can I use what I learned from this machine learning approach to help me build better models and understand things that I can’t model because it’s too complex?”

Huge amounts of historical market data are available — every trade has been recorded since the 1980s — and vast volumes of information are generated every day. Increased computing power and greater availability of data have made it possible to perform more fine-grained and comprehensive analyses, but these datasets, and the computing power needed to analyze them, can be prohibitively expensive for scholars.

In this research, finance industry practitioners partnered with the academic researchers to provide the data and the computers for the study as well as expertise in machine learning algorithms used in practice.

“This partnership brings benefits to both,” said O’Hara, adding that the paper is one in a line of research she has completed with co-authors David Easley and Marcos Lopez de Prado over the last decade. “It allows us to do research in ways generally unavailable to academic researchers.”

Story Source:

Materials provided by Cornell University. Original written by Melanie Lefkowitz. Note: Content may be edited for style and length.

Path to quantum computing at room temperature

Army researchers predict quantum computer circuits that will no longer need extremely cold temperatures to function could become a reality after about a decade.

For years, solid-state quantum technology that operates at room temperature seemed remote. While the application of transparent crystals with optical nonlinearities had emerged as the most likely route to this milestone, the plausibility of such a system always remained in question.

Now, Army scientists have officially confirmed the validity of this approach. Dr. Kurt Jacobs, of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory, working alongside Dr. Mikkel Heuck and Prof. Dirk Englund, of the Massachusetts Institute of Technology, became the first to demonstrate the feasibility of a quantum logic gate composed of photonic circuits and optical crystals.

“If future devices that use quantum technologies will require cooling to very cold temperatures, then this will make them expensive, bulky, and power hungry,” Heuck said. “Our research is aimed at developing future photonic circuits that will be able to manipulate the entanglement required for quantum devices at room temperature.”

Quantum technology offers a range of future advances in computing, communications and remote sensing.

In order to accomplish any kind of task, traditional classical computers work with information that is fully determined. The information is stored in many bits, each of which can be on or off. A classical computer, when given an input specified by a number of bits, can process this input to produce an answer, which is also given as a number of bits. A classical computer processes one input at a time.

In contrast, quantum computers store information in qubits that can be in a strange state where they are both on and off at the same time. This allows a quantum computer to explore the answers to many inputs at the same time. While it cannot output all the answers at once, it can output relationships between these answers, which allows it to solve some problems much faster than a classical computer.

Unfortunately, one of the major drawbacks of quantum systems is the fragility of these strange qubit states. Most prospective hardware for quantum technology must be kept at extremely cold temperatures — close to absolute zero — to prevent the special states from being destroyed by interactions with the computer’s environment.

“Any interaction that a qubit has with anything else in its environment will start to distort its quantum state,” Jacobs said. “For example, if the environment is a gas of particles, then keeping it very cold keeps the gas molecules moving slowly, so they don’t crash into the quantum circuits as much.”

Researchers have directed various efforts at resolving this issue, but a definitive solution has yet to be found. At present, photonic circuits that incorporate nonlinear optical crystals are the only feasible route to solid-state quantum computing at room temperature.

“Photonic circuits are a bit like electrical circuits, except they manipulate light instead of electrical signals,” Englund said. “For example, we can make channels in a transparent material that photons will travel down, a bit like electrical signals traveling along wires.”

Unlike quantum systems that use ions or atoms to store information, quantum systems that use photons can bypass the cold temperature limitation. However, the photons must still interact with other photons to perform logic operations. This is where the nonlinear optical crystals come into play.

Researchers can engineer cavities in the crystals that temporarily trap photons inside. Through this method, the quantum system can establish two different possible states that a qubit can hold: a cavity with a photon (on) and a cavity without a photon (off). These qubits can then form quantum logic gates, which create the framework for the strange states.

In other words, researchers can use the indeterminate state of whether or not a photon is in a crystal cavity to represent a qubit. The logic gates act on two qubits together, and can create “quantum entanglement” between them. This entanglement is automatically generated in a quantum computer, and is required for quantum approaches to applications in sensing.
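
As a generic, textbook-style illustration of that last point (this is not the photonic gate from this work), the following sketch shows how a two-qubit logic gate acting on a superposition produces an entangled state:

```python
# Two-qubit entanglement in plain state-vector arithmetic.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)   # "cavity without a photon" (off)
ket1 = np.array([0, 1], dtype=complex)   # "cavity with a photon" (on)

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)                 # controlled-NOT gate

# Put qubit A into an equal superposition, leave qubit B off, then entangle.
state = np.kron(H @ ket0, ket0)
state = CNOT @ state

print("two-qubit amplitudes:", np.round(state, 3))
# Result: 0.707|00> + 0.707|11>. Measuring one qubit now determines the other.
```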

Until now, however, the idea of making quantum logic gates from nonlinear optical crystals rested entirely on speculation. While the approach showed immense promise, doubts remained as to whether it could even lead to practical logic gates.

The application of nonlinear optical crystals had remained in question until researchers at the Army’s lab and MIT presented a way to realize a quantum logic gate with this approach using established photonic circuit components.

“The problem was that if one has a photon travelling in a channel, the photon has a ‘wave-packet’ with a certain shape,” Jacobs said. “For a quantum gate, you need the photon wave-packets to remain the same after the operation of the gate. Since nonlinearities distort wave-packets, the question was whether you could load the wave-packet into cavities, have them interact via a nonlinearity, and then emit the photons again so that they have the same wave-packets as they started with.”

Once they designed the quantum logic gate, the researchers performed numerous computer simulations of the operation of the gate to demonstrate that it could, in theory, function appropriately. Actual construction of a quantum logic gate with this method will first require significant improvements in the quality of certain photonic components, researchers said.

“Based on the progress made over the last decade, we expect that it will take about ten years for the necessary improvements to be realized,” Heuck said. “However, the process of loading and emitting a wave-packet without distortion is something that we should be able to realize with current experimental technology, and so that is an experiment that we will be working on next.”

AI techniques used to improve battery health and safety

Researchers have designed a machine learning method that can predict battery health with 10 times higher accuracy than the current industry standard, which could aid in the development of safer and more reliable batteries for electric vehicles and consumer electronics.

The researchers, from Cambridge and Newcastle Universities, have designed a new way to monitor batteries by sending electrical pulses into them and measuring the response. The measurements are then processed by a machine learning algorithm to predict the battery’s health and useful lifespan. Their method is non-invasive and is a simple add-on to any existing battery system. The results are reported in the journal Nature Communications.

Predicting the state of health and the remaining useful lifespan of lithium-ion batteries is one of the big problems limiting widespread adoption of electric vehicles: it’s also a familiar annoyance to mobile phone users. Over time, battery performance degrades via a complex network of subtle chemical processes. Individually, each of these processes doesn’t have much of an effect on battery performance, but collectively they can severely degrade a battery’s performance and shorten its lifespan.

Current methods for predicting battery health are based on tracking the current and voltage during battery charging and discharging. This misses important features that indicate battery health. Tracking the many processes happening within the battery requires new ways of probing batteries in action, as well as new algorithms that can detect subtle signals as the batteries are charged and discharged.

“Safety and reliability are the most important design criteria as we develop batteries that can pack a lot of energy in a small space,” said Dr Alpha Lee from Cambridge’s Cavendish Laboratory, who co-led the research. “By improving the software that monitors charging and discharging, and using data-driven software to control the charging process, I believe we can power a big improvement in battery performance.”

The researchers designed a way to monitor batteries by sending electrical pulses into them and measuring the response. A machine learning model is then used to discover specific features in the electrical response that are tell-tale signs of battery aging. The researchers performed over 20,000 experimental measurements to train the model, the largest dataset of its kind. Importantly, the model learns how to distinguish important signals from irrelevant noise. Their method is non-invasive and is a simple add-on to any existing battery system.
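
A minimal sketch of that kind of pipeline is shown below: a few summary features are extracted from a (synthetic) pulse response and fed to a probabilistic regressor. It is an illustration only; the feature choices, data and choice of regressor are assumptions, not the authors' actual model.

```python
# Illustrative sketch: map pulse-response features to remaining capacity.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_cells = 200

def pulse_features(response):
    """Placeholder features: peak voltage drop, relaxation slope, noise level."""
    slope = np.polyfit(np.arange(response.size), response, 1)[0]
    return np.array([response.min(), slope, response.std()])

# Synthetic pulse responses whose shape drifts as capacity fades.
capacity = rng.uniform(0.7, 1.0, n_cells)              # fraction of nominal capacity
responses = np.array([
    -0.1 / c * np.exp(-np.linspace(0, 1, 50) * 5 * c) + 0.005 * rng.normal(size=50)
    for c in capacity
])
X = np.array([pulse_features(r) for r in responses])

X_train, X_test, y_train, y_test = train_test_split(X, capacity, random_state=0)
model = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
model.fit(X_train, y_train)
pred, std = model.predict(X_test, return_std=True)
print("mean absolute error:", float(np.abs(pred - y_test).mean()))
```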

The researchers also showed that the machine learning model can be interpreted to give hints about the physical mechanism of degradation. The model can inform which electrical signals are most correlated with aging, which in turn allows them to design specific experiments to probe why and how batteries degrade.

“Machine learning complements and augments physical understanding,” said co-first author Dr Yunwei Zhang, also from the Cavendish Laboratory. “The interpretable signals identified by our machine learning model are a starting point for future theoretical and experimental studies.”

The researchers are now using their machine learning platform to understand degradation in different battery chemistries. They are also developing optimal battery charging protocols, powered by machine learning, to enable fast charging and minimise degradation.

This work was carried out with funding from the Faraday Institution. Dr Lee is also a Research Fellow at St Catharine’s College.

Story Source:

Materials provided by University of Cambridge. The original story is licensed under a Creative Commons License. Note: Content may be edited for style and length.

To predict an epidemic, evolution can’t be ignored

When scientists try to predict the spread of something across populations — anything from a coronavirus to misinformation — they use complex mathematical models to do so. Typically, they’ll study the first few steps in which the subject spreads, and use that rate to project how far and wide the spread will go.

But what happens if a pathogen mutates, or information becomes modified, changing the speed at which it spreads? In a new study appearing in this week’s issue of Proceedings of the National Academy of Sciences (PNAS), a team of Carnegie Mellon University researchers shows for the first time how important these considerations are.

“These evolutionary changes have a huge impact,” says CyLab faculty member Osman Yagan, an associate research professor in Electrical and Computer Engineering (ECE) and corresponding author of the study. “If you don’t consider the potential changes over time, you will be wrong in predicting the number of people that will get sick or the number of people who are exposed to a piece of information.”

Most people are familiar with epidemics of disease, but information itself — nowadays traveling at lightning speeds over social media — can experience its own kind of epidemic and “go viral.” Whether a piece of information goes viral or not can depend on how the original message is tweaked.

“Some pieces of misinformation are intentional, but some may develop organically when many people sequentially make small changes like a game of ‘telephone,'” says Yagan. “A seemingly boring piece of information can evolve into a viral Tweet, and we need to be able to predict how these things spread.”

In their study, the researchers developed a mathematical theory that takes these evolutionary changes into consideration. They then tested their theory against thousands of computer-simulated epidemics in real-world networks, such as Twitter for the spread of information or a hospital for the spread of disease.
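
The toy simulation below, which is not the authors' theory, illustrates why such evolutionary changes matter: if transmissibility can mutate upward as a pathogen passes from person to person, outbreaks grow much larger than a fixed-rate model fitted to the first few steps would predict. The network, probabilities and mutation rule are all assumptions chosen for illustration.

```python
# Toy spread simulation with and without an evolving transmission probability.
import random
import networkx as nx

def simulate(graph, p0=0.05, mutate=True, boost=1.5, p_mutation=0.1):
    """Spread from one random seed; optionally let transmissibility jump by `boost`."""
    seed = random.choice(list(graph.nodes))
    infected = {seed: p0}                  # node -> transmissibility of the strain it carries
    frontier = [seed]
    while frontier:
        new_frontier = []
        for node in frontier:
            p = infected[node]
            if mutate and random.random() < p_mutation:
                p = min(p * boost, 1.0)    # evolved, more transmissible strain
            for nb in graph.neighbors(node):
                if nb not in infected and random.random() < p:
                    infected[nb] = p
                    new_frontier.append(nb)
        frontier = new_frontier
    return len(infected)

g = nx.watts_strogatz_graph(2000, k=10, p=0.1)       # stand-in contact network
static = sum(simulate(g, mutate=False) for _ in range(200)) / 200
evolving = sum(simulate(g, mutate=True) for _ in range(200)) / 200
print(f"mean outbreak size, fixed strain:    {static:.0f}")
print(f"mean outbreak size, evolving strain: {evolving:.0f}")
```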

In the context of infectious disease spread, the team ran thousands of simulations using data from two real-world networks: a contact network among students, teachers and staff at a US high school, and a contact network among staff and patients at a hospital in Lyon, France.

These simulations served as a test bed: whichever theory matched what was observed in the simulations would prove to be the more accurate one.

“We showed that our theory works over real-world networks,” says the study’s first author, Rashad Eletreby, who was a Carnegie Mellon Ph.D. student when he wrote the paper. “Traditional models that don’t consider evolutionary adaptations fail at predicting the probability of the emergence of an epidemic.”

While the study isn’t a silver bullet for predicting the spread of today’s coronavirus or the spread of fake news in today’s volatile political environment with 100% accuracy — one would need real-time data tracking the evolution of the pathogen or information to do that — the authors say it’s a big step.

“We’re one step closer to reality,” says Eletreby.

Other authors on the study included ECE Ph.D. student Yong Zhuang, Institute for Software Research professor Kathleen Carley, and Princeton Electrical Engineering professor Vincent Poor.

Story Source:

Materials provided by College of Engineering, Carnegie Mellon University. Original written by Daniel Tkacik. Note: Content may be edited for style and length.

Improving AI’s ability to identify students who need help

Researchers have designed an artificial intelligence (AI) model that is better able to predict how much students are learning in educational games. The improved model makes use of an AI training concept called multi-task learning, and could be used to improve both instruction and learning outcomes.

Multi-task learning is an approach in which one model is asked to perform multiple tasks.

“In our case, we wanted the model to be able to predict whether a student would answer each question on a test correctly, based on the student’s behavior while playing an educational game called Crystal Island,” says Jonathan Rowe, co-author of a paper on the work and a research scientist in North Carolina State University’s Center for Educational Informatics (CEI).

“The standard approach for solving this problem looks only at overall test score, viewing the test as one task,” Rowe says. “In the context of our multi-task learning framework, the model has 17 tasks — because the test has 17 questions.”

The researchers had gameplay and testing data from 181 students. The AI could look at each student’s gameplay and at how each student answered Question 1 on the test. By identifying common behaviors of students who answered Question 1 correctly, and common behaviors of students who got Question 1 wrong, the AI could determine how a new student would answer Question 1.

This function is performed for every question at the same time; the gameplay being reviewed for a given student is the same, but the AI looks at that behavior in the context of Question 2, Question 3, and so on.
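
A minimal sketch of this multi-task setup, assuming a hypothetical 64-dimensional gameplay feature vector (this is not the authors' architecture), might look like the following:

```python
# One shared encoder over gameplay features, with 17 heads, one per test question.
import torch
import torch.nn as nn

N_QUESTIONS = 17
N_FEATURES = 64            # hypothetical size of a student's gameplay feature vector

class MultiTaskStudentModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(             # shared representation of gameplay
            nn.Linear(N_FEATURES, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        # One small head per question: logit of P(question i answered correctly).
        self.heads = nn.ModuleList([nn.Linear(64, 1) for _ in range(N_QUESTIONS)])

    def forward(self, x):
        h = self.shared(x)
        return torch.cat([head(h) for head in self.heads], dim=1)   # shape (batch, 17)

model = MultiTaskStudentModel()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-in for 181 students' gameplay features and question outcomes.
gameplay = torch.randn(181, N_FEATURES)
answers = torch.randint(0, 2, (181, N_QUESTIONS)).float()

for epoch in range(50):
    optimizer.zero_grad()
    loss = criterion(model(gameplay), answers)   # all 17 tasks trained jointly
    loss.backward()
    optimizer.step()
print("final joint training loss:", float(loss))
```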

And this multi-task approach made a difference. The researchers found that the multi-task model was about 10 percent more accurate than other models that relied on conventional AI training methods.

“We envision this type of model being used in a couple of ways that can benefit students,” says Michael Geden, first author of the paper and a postdoctoral researcher at NC State. “It could be used to notify teachers when a student’s gameplay suggests the student may need additional instruction. It could also be used to facilitate adaptive gameplay features in the game itself. For example, altering a storyline in order to revisit the concepts that a student is struggling with.

“Psychology has long recognized that different questions have different values,” Geden says. “Our work here takes an interdisciplinary approach that marries this aspect of psychology with deep learning and machine learning approaches to AI.”

“This also opens the door to incorporating more complex modeling techniques into educational software — particularly educational software that adapts to the needs of the student,” says Andrew Emerson, co-author of the paper and a Ph.D. student at NC State.

The paper, “Predictive Student Modeling in Educational Games with Multi-Task Learning,” will be presented at the 34th AAAI Conference on Artificial Intelligence, being held Feb. 7-12 in New York, N.Y. The paper was co-authored by James Lester, Distinguished University Professor of Computer Science and director of CEI at NC State; and by Roger Azevedo of the University of Central Florida.

The work was done with support from the National Science Foundation, under grant DRL-1661202; and from the Social Sciences and Humanities Research Council of Canada, under grant SSHRC 895-2011-1006.

Story Source:

Materials provided by North Carolina State University. Note: Content may be edited for style and length.

Deforestation, erosion exacerbate mercury spikes near Peruvian gold mining

Scientists from Duke University have developed a model that can predict the amount of mercury being released into a local ecosystem by deforestation and small-scale gold mining.

The research, which appears online on December 11 in the journal Environmental Science and Technology, could point toward ways to mitigate the worst effects of mercury poisoning in regions such as those that are already experiencing elevated mercury levels caused by gold mining.

“We’ve taken a lot of ground measurements in the Peruvian Amazon of mercury levels in the water, soil and fish,” said Heileen Hsu-Kim, professor of civil and environmental engineering at Duke University. “But many areas in the Amazon aren’t easily accessible, and the government often does not have the resources needed to test local sites.”

“When you clear the land for mining, it leaves behind a landscape that basically went from lush greenery to barren desert,” said Hsu-Kim. “You can easily see the effects in satellite images. If (governments) could use publicly available satellite imagery to identify areas that are likely to be contaminated, it could help them make informed policy decisions to protect public health.”

The past two decades have seen a sharp increase in illegal and informal gold mining in Peru’s southern Amazon region of Madre de Dios. These small-scale operations typically involve cutting down all of the trees in a particular area, digging a large pit and then using mercury to extract gold from the excavated soil.

After larger, coarse particles are separated, the remaining fine soil is combined with water and mercury inside a large drum much like an oil barrel, and shaken. The mercury binds to any gold in the soil, creating a large chunk that can be easily removed. This chunk is then burned, evaporating and releasing the mercury into the air while leaving behind pure gold.

Besides releasing mercury into the atmosphere, miners typically add three to four times more mercury to each barrel than is actually needed, says Hsu-Kim. While this ensures all of the gold is extracted, it also means there is a large amount of leftover mercury in the slurry that is inevitably dumped back into the excavated pit.

And because the whole process started with the clearing of trees, there’s nothing to stop the mercury-laden soil from eroding into nearby rivers.

“While the local mercury levels might only double due to the mining itself, the effect of the erosion creates a four-fold increase in the amount of mercury being released into local rivers,” said Hsu-Kim.

“This means mining practices can hit people three times with mercury — once from direct contact, once from atmospheric transport and deposition, and once from soil mercury mobilization due to land clearing,” said William Pan, the Elizabeth Brooks Reid and Whitelaw Reid Associate Professor of Population Studies at Duke. “The scenarios we run demonstrate that even if mining were to end today, since vegetation is unlikely to return for several decades, the cleared land will continue to release mercury.”

Hsu-Kim and Pan worked with graduate students Sarah Diringer and Axel Berky to build a model to predict the amount of mercury and other contaminants being released into rivers. It combines a watershed erosion model with local variables such as annual rainfall, landscape and soil types, and with deforestation data derived from satellite imagery.
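
To make the logic concrete, the heavily simplified sketch below scales a USLE-style soil-loss estimate by a soil mercury concentration to get a per-hectare mercury export, and shows how clearing vegetation amplifies it. This is not the Duke model; every coefficient is a placeholder chosen only to echo the rough figures quoted in this article.

```python
# Placeholder mercury-export illustration built on the Universal Soil Loss Equation.
def soil_loss_t_per_ha(rain_erosivity, erodibility, slope_factor, cover_factor):
    """USLE form: A = R * K * LS * C (support-practice factor omitted)."""
    return rain_erosivity * erodibility * slope_factor * cover_factor

def mercury_export_g_per_ha(soil_loss, hg_mg_per_kg):
    # 1 t soil = 1000 kg; mg of Hg converted to grams.
    return soil_loss * 1000 * hg_mg_per_kg / 1000

R, K, LS = 7500, 0.25, 1.2            # placeholder climate, soil and topography values
hg_forested, hg_mined = 0.08, 0.16    # placeholder soil Hg (mg/kg); mining doubles it

forest = mercury_export_g_per_ha(soil_loss_t_per_ha(R, K, LS, cover_factor=0.003), hg_forested)
cleared = mercury_export_g_per_ha(soil_loss_t_per_ha(R, K, LS, cover_factor=0.006), hg_mined)

print(f"forested export:  {forest:.2f} g Hg/ha/yr")
print(f"cleared + mined:  {cleared:.2f} g Hg/ha/yr  ({cleared / forest:.1f}x higher)")
```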

When they analyzed mercury content in samples of soil and water from nine locations in the Colorado River watershed in Madre de Dios, they found that their model accurately predicted which areas were likely to have higher water concentrations of mercury.

The model suggests that over the last two decades, deforestation has doubled the amount of mercury entering local water sources in the Colorado River watershed, and increased it four-fold in the Puquiri subwatershed. Their model also suggests that if the current trends of deforestation continue, the amount of mercury being released into the local river systems may increase by 20 to 25 percent by 2030.

While the findings may sound bleak, the fact that the model works does offer some light at the end of the tunnel.

“We have shared our model with the Peruvian Ministry of Environment and Ministry of Health,” Pan said. “We are working with them to evaluate whether our approach can be used as a tool for developing new policies regarding mining, environmental monitoring of mercury and human exposure.”

Story Source:

Materials provided by Duke University. Original written by Ken Kingery. Note: Content may be edited for style and length.

Hydrologic simulation models that inform policy decisions are difficult to interpret

Hydrologic models that simulate and predict water flow are used to estimate how natural systems respond to different scenarios such as changes in climate, land use, and soil management. The output from these models can inform policy and regulatory decisions regarding water and land management practices.

Numerical models have become increasingly easy to employ with advances in computer technology and software with graphical user interface (GUI). While these technologies make the models more accessible, problems can arise if they are used by inexperienced modelers, says Juan Sebastian Acero Triana, a doctoral student in the Department of Agricultural and Biological Engineering at the University of Illinois.

Acero Triana is lead author on a study that evaluates the accuracy of a commonly used numerical model in hydrology.

Findings from the research show that even when the model appears to be properly calibrated, its results can be difficult to interpret correctly. The study, published in the Journal of Hydrology, provides recommendations for how to fine-tune the process and obtain more precise results.

Model accuracy is important to ensure that policy decisions are based on realistic scenarios, says Maria Chu, a co-author of the study. Chu is an assistant professor of agricultural and biological engineering in the College of Agricultural, Consumer and Environmental Sciences and The Grainger College of Engineering at U of I.

“For example, you may want to estimate the impacts of future climate on the water availability over the next 100 years. If the model is not representing reality, you are going to draw the wrong conclusions. And wrong conclusions will lead to wrong policies, which can greatly affect communities that rely on the water supply,” Chu says.

The study focuses on the Soil and Water Assessment Tool (SWAT), which simulates water circulation by incorporating data on land use, soil, topography, and climate. It is a popular model used to evaluate the impacts of climate and land management practices on water resources and contaminant movement.

The researchers conducted a case study at the Fort Cobb Reservoir Experimental Watershed (FCREW) in Oklahoma to assess the model’s accuracy. FCREW serves as a test site for the United States Department of Agriculture-Agricultural Research Service (USDA-ARS) and the United States Geological Survey (USGS); thus, detailed data are already available on stream flow, reservoir, groundwater, and topography.

The study coupled the SWAT model with another model called MODFLOW, or the Modular Finite-difference Flow Model, which includes more detailed information on groundwater levels and fluxes.

“Our purpose was to determine if the SWAT model by itself can appropriately represent the hydrologic system,” Acero Triana says. “We discovered that is not the case. It cannot really represent the entire hydrologic system.”

In fact, the SWAT model yielded 12 iterations of water movement that all appeared to be acceptable. However, when combined with MODFLOW it became clear that only some of these results properly accounted for groundwater flow. The researchers compared the 12 results from SWAT with 103 different groundwater iterations from MODFLOW in order to find a realistic representation of the water fluxes in the watershed.

Yielding several different results that all appear equally likely to be correct is called “equifinality.” Careful calibration of the model can reduce equifinality, Acero Triana explains. Calibration must also be able to account for inherent limitations in the way the model is designed and how parameters are defined. In technical terms, it must account for model and constraint inadequacy.
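
A conceptual sketch of that screening idea (not the study's actual workflow) is to pair each acceptable SWAT parameterization with candidate MODFLOW groundwater solutions and keep only the pairs whose shared fluxes agree within a tolerance; all numbers below are hypothetical.

```python
# Screen SWAT-MODFLOW pairs for mutually consistent recharge/baseflow estimates.
import numpy as np

rng = np.random.default_rng(7)

# 12 SWAT calibrations that all fit streamflow equally well (equifinality),
# each implying a recharge/baseflow partition in mm/yr, plus 103 MODFLOW runs.
swat_solutions = rng.uniform(low=[80, 20], high=[160, 90], size=(12, 2))
modflow_solutions = rng.uniform(low=[80, 20], high=[160, 90], size=(103, 2))

consistent_pairs = [
    (i, j)
    for i, s in enumerate(swat_solutions)
    for j, m in enumerate(modflow_solutions)
    if np.all(np.abs(s - m) / s < 0.10)      # surface and subsurface agree within 10%
]

print(f"{len(consistent_pairs)} of {12 * 103} SWAT-MODFLOW pairs are mutually consistent")
```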

However, inexperienced modelers may not fully understand the intricacies of calibration. And because of the inherent constraints of both SWAT and MODFLOW, using metrics from just one model may not provide accurate results.

The researchers recommend using a combination model called SWATmf, which integrates the SWAT and the MODFLOW processes.

“This paper presents a case study that provides general guidelines for how to use hydrological models,” Acero Triana says. “We show that to really represent a hydrologic system you need two domain models. You need to represent both the surface and the sub-surface processes that are taking place.”

The differences in results may be small, but over time the effect could be significant, he concludes.
