Cities beat suburbs at inspiring cutting-edge innovations

The disruptive inventions that make people go “Wow!” tend to come from research in the heart of cities and not in the suburbs, a new study suggests.

Researchers found that, within metro areas, the majority of patents come from innovations created in suburbs — often in the office parks of big tech companies like Microsoft and IBM.

But the unconventional, disruptive innovations — the ones that combine research from different technological fields — are more likely to be produced in cities, said Enrico Berkes, co-author of the study and postdoctoral researcher in economics at The Ohio State University.

These unconventional patents are ones that, for example, may blend research on acoustics with research on information storage — the basis for digital music players like the iPod. Or patents that cite previous work on “vacuum cleaning” and “computing” to produce the Roomba.

“Densely populated cities do not generate more patents than the suburbs, but they tend to generate more unconventional patents,” said Berkes, who did the work as a doctoral student at Northwestern University.

“Our findings suggest that cities provide more opportunities for creative people in different fields to interact informally and exchange ideas, which can lead to more disruptive innovation.”

Berkes conducted the study with Ruben Gaetani, assistant professor of strategic management at the University of Toronto. Their research was published online recently in The Economic Journal.

Previous research had shown that large metropolitan areas are where patenting activity tends to concentrate, Berkes said, suggesting that population density is an important factor for innovation.

But once Berkes and Gaetani started looking more closely at metro areas, they found that a sizable share of these patents was developed in the suburbs — the least densely populated part. Nearly three-quarters of patents came from places that had density below 3,650 people per square mile in 2000, about the density of Palo Alto, California.

“If new technology is spurred by population density, we wanted to know why so much is happening in the least dense parts of the metro areas,” Berkes said.

So Berkes and Gaetani analyzed more than 1 million U.S. patents granted between January 2002 and August 2014. They used finely geolocated data from the U.S. Patent and Trademark Office that allowed them to see exactly where in metro areas, including city centers and specific suburbs, patented discoveries were made.

But they were also interested in determining the type of innovations produced — whether they would be considered conventional or unconventional. They did this by analyzing the previous work on which each patent was based.

The researchers tagged new patents as unconventional if the inventors cited previous work in widely different areas.

For example, a patent from 2000 developed in Pittsburgh is one of the first recorded inventions in wearable technologies and one of the precursors to products such as Fitbit. It was recognized as unconventional because it cites previous patents in both apparel and electrical equipment — two very distant fields.
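The tagging rule described above can be sketched in a few lines. The field names and pairwise distance scores below are invented for illustration; the study's actual measure is built from the technology classes of the patents each invention cites:

```python
# Toy illustration of tagging "unconventional" patents: a patent is flagged
# when it cites prior work in technologically distant fields. The fields and
# distance scores here are hypothetical, not the study's actual data.
from itertools import combinations

# Hypothetical pairwise "distance" between technology fields
# (0 = same field, 1 = maximally distant). Unlisted pairs default to 0.
FIELD_DISTANCE = {
    frozenset({"acoustics", "information storage"}): 0.9,
    frozenset({"apparel", "electrical equipment"}): 0.95,
    frozenset({"vacuum cleaning", "computing"}): 0.85,
    frozenset({"computing", "information storage"}): 0.2,
}

def is_unconventional(cited_fields, threshold=0.8):
    """Tag a patent as unconventional if any pair of cited fields
    is more distant than the threshold."""
    for a, b in combinations(set(cited_fields), 2):
        if FIELD_DISTANCE.get(frozenset({a, b}), 0.0) > threshold:
            return True
    return False

# The Fitbit-precursor example: apparel plus electrical equipment.
print(is_unconventional(["apparel", "electrical equipment"]))   # True
print(is_unconventional(["computing", "information storage"]))  # False
```

The design choice mirrors the article's description: the flag depends only on how distant the cited fields are from one another, not on how many patents are cited.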

After analyzing the data, the researchers found that both urban and suburban areas played a prominent role in the innovation process, but in different ways, Berkes said.

Large innovative companies, such as IBM or Microsoft, tend to perform their research in large office parks located outside the main city centers.

“These companies are very successful in taking advantage of formal channels of knowledge diffusion, such as meetings or conferences, where they can capitalize on the expertise of their scientists and have them work together on specialized projects for the company,” Berkes said.

“But it is more difficult for them to tap ideas from other scientific fields because this demands interactions with inventors they’re not communicating with every day or running into in the cafeteria or in the hallway.”

That’s where the urban cores excelled. In cities like San Francisco and Boston, researchers may meet people in entirely different fields at bars, restaurants, museums and cultural events. Any chance encounter could lead to productive partnerships, he said.

“If you want to create something truly new and disruptive, it helps if you have opportunities to casually bump into people from other scientific fields and exchange ideas and experiences and knowledge. That’s what happens in cities,” he said.

“Density plays an important role in the type, rather than the amount, of innovation.”

These findings show the potential value of tech parks that gather technology startup companies in a variety of fields in one place, Berkes said. But they have to be set up properly.

“Our research suggests that informal interactions are important. Tech parks should be structured in a way that people from different startups can easily interact with each other on a regular basis and share ideas,” he said.

Go to Source


Theoretically, two layers are better than one for solar-cell efficiency

Solar cells have come a long way, but inexpensive, thin film solar cells are still far behind more expensive, crystalline solar cells in efficiency. Now, a team of researchers suggests that using two thin films of different materials may be the way to go to create affordable, thin film cells with about 34% efficiency.

“Ten years ago I knew very little about solar cells, but it became clear to me they were very important,” said Akhlesh Lakhtakia, Evan Pugh University Professor and Charles Godfrey Binder Professor of Engineering Science and Mechanics, Penn State.

Investigating the field, he found that researchers approached solar cells from two sides: the optical side, which looks at how the sun's light is collected, and the electrical side, which looks at how the collected sunlight is converted into electricity. Optical researchers strive to optimize light capture, while electrical researchers strive to optimize conversion to electricity, each side simplifying the other's part of the problem.

“I decided to create a model in which both electrical and optical aspects will be treated equally,” said Lakhtakia. “We needed to increase actual efficiency, because if the efficiency of a cell is less than 30% it isn’t going to make a difference.” The researchers report their results in a recent issue of Applied Physics Letters.

Lakhtakia is a theoretician. He does not make thin films in a laboratory, but creates mathematical models to test the possibilities of configurations and materials so that others can test the results. The problem, he said, was that the mathematical structure of optimizing the optical and the electrical are very different.

Solar cells appear to be simple devices, he explained. A clear top layer allows sunlight to fall on an energy conversion layer. The material chosen to convert the energy absorbs the light and produces streams of negatively charged electrons and positively charged holes moving in opposite directions. The differently charged particles are transferred to top and bottom contact layers that channel the electricity out of the cell for use. The amount of energy a cell can produce depends on the amount of sunlight collected and the ability of the conversion layer. Different materials react to and convert different wavelengths of light.

“I realized that to increase efficiency we had to absorb more light,” said Lakhtakia. “To do that we had to make the absorbent layer nonhomogeneous in a special way.”

That special way was to use two different absorbent materials in two different thin films. The researchers chose commercially available CIGS — copper indium gallium diselenide — and CZTSSe — copper zinc tin sulfur selenide — for the layers. By itself, CIGS’s efficiency is about 20% and CZTSSe’s is about 11%.

These two materials work together in a solar cell because they have roughly the same lattice structure, so one can be grown on top of the other, and they absorb different parts of the spectrum, so together they should increase efficiency, according to Lakhtakia.
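The intuition that two stacked absorbers capture the spectrum better than one can be illustrated with a back-of-envelope calculation: treat sunlight as a blackbody spectrum and credit each absorbed photon with the bandgap energy of the layer that absorbs it. The bandgap values below are illustrative stand-ins, and this toy model ignores all of the coupled optical and electrical losses that the actual study models:

```python
import math

# Boltzmann constant (eV/K) times an effective solar temperature of 5778 K.
K_T = 8.617e-5 * 5778

def photon_flux(E):
    """Relative blackbody photon flux per unit energy at photon energy E (eV)."""
    return E**2 / (math.exp(E / K_T) - 1.0)

def integrate(f, a, b, n=20000):
    """Simple midpoint-rule numeric integration."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Total incident power in the spectrum (arbitrary units).
TOTAL_POWER = integrate(lambda E: E * photon_flux(E), 0.01, 10.0)

def efficiency_single(eg):
    """Harvested fraction: every photon above the gap yields energy eg."""
    return eg * integrate(photon_flux, eg, 10.0) / TOTAL_POWER

def efficiency_tandem(eg_top, eg_bottom):
    """Top layer takes photons above its (larger) gap; the bottom layer
    takes the remaining photons down to its own gap."""
    high = eg_top * integrate(photon_flux, eg_top, 10.0)
    low = eg_bottom * integrate(photon_flux, eg_bottom, eg_top)
    return (high + low) / TOTAL_POWER

# Illustrative gaps of 1.7 eV and 1.1 eV (not the paper's modeled cell).
single = max(efficiency_single(1.7), efficiency_single(1.1))
tandem = efficiency_tandem(1.7, 1.1)
print(f"best single layer: {single:.2f}, tandem: {tandem:.2f}")
```

The tandem always wins in this toy model because every photon the single cell could use is still harvested, while photons between the two gaps lose less energy to heat.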

“It was amazing,” said Lakhtakia. “Together they produced a solar cell with 34% efficiency. This creates a new solar cell architecture — layer upon layer. Others who can actually make solar cells can find other formulations of layers and perhaps do better.”

According to the researchers, the next step is to create these experimentally and see what the options are to get the final, best answers.

Story Source:

Materials provided by Penn State. Original written by A’ndrea Elyse Messer. Note: Content may be edited for style and length.

Go to Source


Seeing objects through clouds and fog

Like a comic book come to life, researchers at Stanford University have developed a kind of X-ray vision — only without the X-rays. Working with hardware similar to what enables autonomous cars to “see” the world around them, the researchers enhanced their system with a highly efficient algorithm that can reconstruct three-dimensional hidden scenes based on the movement of individual particles of light, or photons. In tests, detailed in a paper published Sept. 9 in Nature Communications, their system successfully reconstructed shapes obscured by 1-inch-thick foam. To the human eye, it’s like seeing through walls.

“A lot of imaging techniques make images look a little bit better, a little bit less noisy, but this is really something where we make the invisible visible,” said Gordon Wetzstein, assistant professor of electrical engineering at Stanford and senior author of the paper. “This is really pushing the frontier of what may be possible with any kind of sensing system. It’s like superhuman vision.”

This technique complements other vision systems that can see through barriers on the microscopic scale — for applications in medicine — because it’s more focused on large-scale situations, such as navigating self-driving cars in fog or heavy rain and satellite imaging of the surface of Earth and other planets through hazy atmosphere.

Supersight from scattered light

In order to see through environments that scatter light every-which-way, the system pairs a laser with a super-sensitive photon detector that records every bit of laser light that hits it. As the laser scans an obstruction like a wall of foam, an occasional photon will manage to pass through the foam, hit the objects hidden behind it and pass back through the foam to reach the detector. The algorithm-supported software then uses those few photons — and information about where and when they hit the detector — to reconstruct the hidden objects in 3D.
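The timing principle alone can be illustrated with a toy depth estimate: histogram photon arrival times and convert the peak's round-trip time to a distance via d = c·t/2. This is only one ingredient; the actual system inverts a full scattering model to recover 3D shape, which this sketch does not attempt:

```python
import random

C = 3.0e8  # speed of light, m/s

random.seed(1)
true_depth = 1.0             # meters to a hidden surface (simulated)
t_true = 2 * true_depth / C  # round-trip time, about 6.7 ns

# A few hundred signal photons with 50 ps timing jitter, buried in
# uniformly distributed background detections.
signal = [random.gauss(t_true, 50e-12) for _ in range(300)]
noise = [random.uniform(0, 20e-9) for _ in range(100)]
arrivals = signal + noise

# Histogram arrival times in 100 ps bins and take the peak bin.
BIN = 100e-12
NBINS = 200
counts = [0] * NBINS
for t in arrivals:
    b = int(t / BIN)
    if 0 <= b < NBINS:
        counts[b] += 1
peak = counts.index(max(counts))
t_est = (peak + 0.5) * BIN

print(f"estimated depth: {C * t_est / 2:.3f} m")  # close to 1.0
```

Even with a minority of the detections being background noise, the peak of the timing histogram recovers the depth, which is the same "where and when photons hit the detector" information the reconstruction algorithm exploits.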

This is not the first system with the ability to reveal hidden objects through scattering environments, but it circumvents limitations associated with other techniques. For example, some require knowledge about how far away the object of interest is. It is also common that these systems only use information from ballistic photons, which are photons that travel to and from the hidden object through the scattering field but without actually scattering along the way.

“We were interested in being able to image through scattering media without these assumptions and to collect all the photons that have been scattered to reconstruct the image,” said David Lindell, a graduate student in electrical engineering and lead author of the paper. “This makes our system especially useful for large-scale applications, where there would be very few ballistic photons.”

In order to make their algorithm amenable to the complexities of scattering, the researchers had to closely co-design their hardware and software, although the hardware components they used are only slightly more advanced than what is currently found in autonomous cars. Depending on the brightness of the hidden objects, scanning in their tests took anywhere from one minute to one hour, but the algorithm reconstructed the obscured scene in real-time and could be run on a laptop.

“You couldn’t see through the foam with your own eyes, and even just looking at the photon measurements from the detector, you really don’t see anything,” said Lindell. “But, with just a handful of photons, the reconstruction algorithm can expose these objects — and you can see not only what they look like, but where they are in 3D space.”

Space and fog

Someday, a descendant of this system could be sent through space to other planets and moons to help see through icy clouds to deeper layers and surfaces. In the nearer term, the researchers would like to experiment with different scattering environments to simulate other circumstances where this technology could be useful.

“We’re excited to push this further with other types of scattering geometries,” said Lindell. “So, not just objects hidden behind a thick slab of material but objects that are embedded in densely scattering material, which would be like seeing an object that’s surrounded by fog.”

Lindell and Wetzstein are also enthusiastic about how this work represents a deeply interdisciplinary intersection of science and engineering.

“These sensing systems are devices with lasers, detectors and advanced algorithms, which puts them in an interdisciplinary research area between hardware and physics and applied math,” said Wetzstein. “All of those are critical, core fields in this work and that’s what’s the most exciting for me.”

Story Source:

Materials provided by Stanford University. Original written by Taylor Kubota. Note: Content may be edited for style and length.

Go to Source


‘Black dwarf supernova’: Physicist calculates when the last supernova ever will happen

The end of the universe as we know it will not come with a bang. Most stars will very, very slowly fizzle as their temperatures fade to zero.

“It will be a bit of a sad, lonely, cold place,” said theoretical physicist Matt Caplan, who added that no one will be around to witness this long farewell in the far future. Most believe all will be dark as the universe comes to an end. “It’s known as ‘heat death,’ where the universe will be mostly black holes and burned-out stars,” said Caplan, who imagined a slightly different picture when he calculated how some of these dead stars might change over the eons.

Punctuating the darkness could be silent fireworks — explosions of the remnants of stars that were never supposed to explode. New theoretical work by Caplan, an assistant professor of physics at Illinois State University, finds that many white dwarfs may explode in supernovae in the far future, long after everything else in the universe has died and gone quiet.

In the universe now, the dramatic death of massive stars in supernova explosions comes when internal nuclear reactions produce iron in the core. Iron cannot be burnt by stars; it accumulates like a poison, triggering the star’s collapse and creating a supernova. But smaller stars tend to die with a bit more dignity, shrinking and becoming white dwarfs at the end of their lives.

“Stars less than about 10 times the mass of the sun do not have the gravity or density to produce iron in their cores the way massive stars do, so they can’t explode in a supernova right now,” said Caplan. “As white dwarfs cool down over the next few trillion years, they’ll grow dimmer, eventually freeze solid, and become ‘black dwarf’ stars that no longer shine.” Like white dwarfs today, they’ll be made mostly of light elements like carbon and oxygen and will be the size of the Earth but contain about as much mass as the sun, their insides squeezed to densities millions of times greater than anything on Earth.

But just because they’re cold doesn’t mean nuclear reactions stop. “Stars shine because of thermonuclear fusion — they’re hot enough to smash small nuclei together to make larger nuclei, which releases energy. White dwarfs are ash, they’re burnt out, but fusion reactions can still happen because of quantum tunneling, only much slower,” Caplan said. “Fusion happens, even at zero temperature, it just takes a really long time.” He noted this is the key to turning black dwarfs into iron and triggering a supernova.

Caplan’s new work, accepted for publication by Monthly Notices of the Royal Astronomical Society, calculates how long these nuclear reactions take to produce iron, and how much iron black dwarfs of different sizes need to explode. He calls his theoretical explosions “black dwarf supernova” and calculates that the first one will occur in about 10 to the 1,100th power years. “In years, it’s like saying the word ‘trillion’ almost a hundred times. If you wrote it out, it would take up most of a page. It’s mindbogglingly far in the future.”

Of course, not all black dwarfs will explode. “Only the most massive black dwarfs, about 1.2 to 1.4 times the mass of the sun, will blow.” Still, that means as many as 1 percent of all stars that exist today, about a billion trillion stars, can expect to die this way. As for the rest, they’ll remain black dwarfs. “Even with very slow nuclear reactions, our sun still doesn’t have enough mass to ever explode in a supernova, even in the far far future. You could turn the whole sun to iron and it still wouldn’t pop.”

Caplan calculates that the most massive black dwarfs will explode first, followed by progressively less massive stars, until there are no more left to go off after about 10 to the 32,000th power years. At that point, the universe may truly be dead and silent. “It’s hard to imagine anything coming after that. Black dwarf supernova might be the last interesting thing to happen in the universe. They may be the last supernova ever.” By the time the first black dwarfs explode, the universe will already be unrecognizable. “Galaxies will have dispersed, black holes will have evaporated, and the expansion of the universe will have pulled all remaining objects so far apart that none will ever see any of the others explode. It won’t even be physically possible for light to travel that far.”

Even though he’ll never see one, Caplan remains unbothered. “I became a physicist for one reason. I wanted to think about the big questions: why is the universe here, and how will it end?” When asked what big question comes next, Caplan says, “Maybe we’ll try simulating some black dwarf supernova. If we can’t see them in the sky then at least we can see them on a computer.”

Go to Source


Should You Hire A Developer Or Use The API For Your Website’s CMS?

It doesn’t matter how powerful or well-rounded your chosen CMS happens to be: there can still come a point at which you decide that its natural state isn’t enough and something more is needed. It could be a new function, a fresh perspective, or improved performance, and you’re unwilling to settle for less. What should you do?

Your instinct might be to hire a web developer, ideally one with some expertise in that particular CMS, but is that the right way to go? Developers can be very costly, and whether you have some coding skill or you’re a total novice, you might be able to get somewhere without one — and the key could be using the API for your CMS.

In this post, we’re going to consider why you might want to hire a developer, why you should investigate APIs, and how you can choose between these options. Let’s get to it.

Why you should hire a developer

It’s fairly simple to make a case for hiring a web developer. For one thing, it’s easy. By sharing the load, you get to preserve your existing workload and collaborate with an expert, a second pair of eyes that can complete your vision and deftly deal with any issues that might arise. Additionally, it’s the best way to get quick results if you’re willing to allocate enough money to afford a top-notch developer and make your project a priority.

The ease of this option explains why it’s so popular. We so often outsource things that would be easy to do ourselves (getting store-bought sandwiches, using cleaning services, etc.) that outsourcing something as complex as a website development project seems like an obvious choice for anyone who isn’t themselves a programmer with plenty of free time.

And even if you are a programmer with enough free time to take on a personal project, you might not have the right skills for the job. Every system has its own nuances, whether it’s a powerful platform with proprietary parts (like Shopify) or an open-source foundation built around ease of use (like Ghost), so getting a CMS expert can make for a smoother experience.

Why you should use the API for your CMS

So, with such a good argument to be made for immediately consulting a developer, why should you take the time to get involved directly? Well, one of the core goals of an API — as you may well be aware — is to make system functions readily accessible to outside systems, and you can take advantage of that to extend your system through integrations.

Becoming familiar with the workings of an API doesn’t require you to have an exhaustive knowledge of the CMS itself. You need only understand the available fields and functions and how you can call them (and interact with them) from elsewhere. From there, it’s more about finding — or creating — the external systems that can give you the results you need.
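To make the idea concrete, here is a minimal sketch of what consuming a CMS API can look like. The base URL, endpoint, parameters, and response fields are all hypothetical; a real CMS (WordPress, Shopify, Ghost and so on) documents its own routes in its API reference:

```python
# Sketch of working with a hypothetical CMS REST API. No request is actually
# sent; a canned JSON payload stands in for the server's response.
import json
from urllib.parse import urlencode

BASE = "https://example.com/api/v1"  # hypothetical base URL

def posts_url(status="published", per_page=10):
    """Assemble the query string a real client would send."""
    return f"{BASE}/posts?" + urlencode({"status": status, "per_page": per_page})

# A canned response standing in for what the API might return.
sample_response = json.loads("""
{"posts": [
  {"id": 1, "title": "Hello world", "status": "published"},
  {"id": 2, "title": "Draft notes", "status": "draft"}
]}
""")

def published_titles(payload):
    """Pull out just the fields an integration cares about."""
    return [p["title"] for p in payload["posts"] if p["status"] == "published"]

print(posts_url())
print(published_titles(sample_response))  # ['Hello world']
```

The point is that none of this requires deep knowledge of the CMS internals, only of the fields and endpoints the API exposes.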

The best developer portals will have detailed API references along with getting-started guides, sample code, SDKs and everything else a developer needs to successfully consume the API. The providers behind them want as many people as possible to gravitate towards their platforms; after all, more compatible modules (along with services like Zapier) mean a stronger ecosystem and more interest overall. This means that even people with relatively meager technical understanding can get somewhere.

Additionally, getting to know the API for your CMS will help you understand what the system can and can’t do natively. It’s possible that by consuming the API you will uncover existing functionality that you otherwise wouldn’t have noticed. Overall, then, taking this step first will help you understand your CMS and either source an existing integration or build a more economical outline of a project that you can then pass to a developer.

How you can choose the right approach

In talking about building a project outline, I hinted at the natural conclusion here, which is that these options aren’t mutually exclusive. Having studied the API for your website’s CMS, you can develop something else or bring in a suitable module, but you can also continue to work with an external developer. It doesn’t subtract from your options. For that reason, then, I strongly recommend working with the API first and seeing what you can glean from it. That will allow you to make the smartest decision about how to proceed.

Go to Source
Author: rodneylaws


157 day cycle in unusual cosmic radio bursts

An investigation into one of the current great mysteries of astronomy has come to the fore thanks to a four-year observing campaign conducted at the Jodrell Bank Observatory.

Using the long-term monitoring capabilities of the iconic Lovell Telescope, an international team led by Jodrell Bank astronomers has been studying an object known as a repeating Fast Radio Burst (FRB), which emits very short duration bright radio pulses.

Using the 32 bursts discovered during the campaign, in conjunction with data from previously published observations, the team has discovered that emission from the FRB known as 121102 follows a cyclic pattern, with radio bursts observed in a window lasting approximately 90 days followed by a silent period of 67 days. The same behaviour then repeats every 157 days.
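A periodicity like this is typically found by folding burst arrival times at trial periods and checking whether the events cluster into a fraction of the cycle. A toy sketch with simulated arrival days (illustrative only; not the team's actual method or data):

```python
PERIOD = 157.0              # days: the cycle reported for FRB 121102
ACTIVE_FRAC = 90.0 / 157.0  # bursts only arrive in a ~90-day window

# Simulated burst arrival days: one burst per cycle, landing somewhere
# in the first 90 days of that cycle.
bursts = [n * PERIOD + (n * 37 % 90) for n in range(15)]

def folded_phases(times, period):
    """Fold event times at a trial period into phases in [0, 1)."""
    return [(t % period) / period for t in times]

def phase_span(phases):
    """Fraction of the cycle spanned by the folded events."""
    return max(phases) - min(phases)

# At the true period the bursts cluster into part of the cycle...
print(phase_span(folded_phases(bursts, PERIOD)))  # ~0.54, below 90/157
# ...while at a wrong trial period they smear across most of it.
print(phase_span(folded_phases(bursts, 100.0)))   # ~0.94
```

This is also why the non-detections mattered: silent stretches are what confine the folded bursts to a window instead of filling the whole cycle.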

This discovery provides an important clue to identifying the origin of these enigmatic fast radio bursts. The presence of a regular sequence in the burst activity could imply that the powerful bursts are linked to the orbital motion of a massive star, a neutron star or a black hole.

Dr Kaustubh Rajwade of The University of Manchester, who led the new research, said: “This is an exciting result as it is only the second system where we believe we see this modulation in burst activity. Detecting a periodicity provides an important constraint on the origin of the bursts and the activity cycles could argue against a precessing neutron star.”

Repeating FRBs could be explained by the precession, like a wobbling top, of the magnetic axis of a highly magnetized neutron star but with current data scientists believe it may be hard to explain a 157-day precession period given the large magnetic fields expected in these stars.

The existence of FRBs was only discovered as recently as 2007, and they were initially thought to be one-off events related to a cataclysmic event such as an exploding star. This picture partly changed once FRB 121102, originally discovered with the Arecibo radio telescope on November 2, 2012, was seen to repeat in 2016. However, until now, no one recognised that these bursts were in fact organised in a regular pattern.

Professor Benjamin Stappers, who leads the MeerTRAP project to hunt for FRBs using the MeerKAT telescope in South Africa, said: “This result relied on the regular monitoring possible with the Lovell Telescope, and non-detections were just as important as the detections.”

In a new paper published in Monthly Notices of the Royal Astronomical Society, the team confirm that FRB 121102 is only the second repeating source of FRBs to display such periodic activity. To their surprise, the timescale for this cycle is almost 10 times longer than the 16-day periodicity exhibited by the first repeating source, FRB 180916.J0158+65, which was recently discovered by the CHIME telescope in Canada.

“This exciting discovery highlights how little we know about the origin of FRBs,” says Duncan Lorimer who serves as Associate Dean for Research at West Virginia University and, along with PhD student Devansh Agarwal, helped develop the data analysis technique that led to the discovery. “Further observations of a larger number of FRBs will be needed in order to obtain a clearer picture about these periodic sources and elucidate their origin,” he added.

Story Source:

Materials provided by University of Manchester. Note: Content may be edited for style and length.

Go to Source


‘Nature’s antifreeze’ provides formula for more durable concrete

Secrets to cementing the sustainability of our future infrastructure may come from nature, such as proteins that keep plants and animals from freezing in extremely cold conditions. CU Boulder researchers have discovered that a synthetic molecule based on natural antifreeze proteins minimizes freeze-thaw damage and increases the strength and durability of concrete, improving the longevity of new infrastructure and decreasing carbon emissions over its lifetime.

They found that adding a biomimetic molecule — one that mimics antifreeze compounds found in Arctic and Antarctic organisms — to concrete effectively prevents ice crystal growth and subsequent damage. This new method, published today in Cell Reports Physical Science, challenges more than 70 years of conventional approaches in mitigating frost damage in concrete infrastructure.

“No one thinks about concrete as a high-tech material,” said Wil Srubar III, author of the new study and assistant professor of civil, environmental and architectural engineering. “But it’s a lot more high-tech than one might think. In the face of climate change, it is critical to pay attention to not only how we manufacture concrete and other construction materials that emit a lot of carbon dioxide in their production, but also how we ensure the long-term resilience of those materials.”

Concrete is formed by mixing water, cement powder and various aggregates, like sand or gravel.

Since the 1930s, small air bubbles have been put into concrete to protect it from water and ice crystal damage. This gives any water that seeps into the concrete room to expand when it freezes. Without that room, freezing water damages the concrete and its surface flakes off.

But this finicky process can come at a cost, decreasing strength and increasing permeability. This allows road salts and other chemicals to leach into the concrete, which can then degrade steel embedded within.

“While you’re solving one problem, you’re actually exacerbating another problem,” said Srubar.

As the U.S. faces a significant amount of aging infrastructure across the country, billions of dollars are spent each year to mitigate and prevent damage. This new biomimetic molecule, however, could dramatically reduce costs.

In tests, concrete made with this molecule — instead of air bubbles — was shown to have equivalent performance, higher strength, lower permeability and a longer lifespan.

With a patent pending, Srubar is hoping this new method will enter the commercial market in the next 5 to 10 years.

Nature finds a way

From the below-freezing waters of Antarctica to the ice-cold tundras of the Arctic, many plants, fish, insects and bacteria contain proteins that prevent them from freezing. These antifreeze proteins bind to the surface of ice crystals in an organism the moment they form — keeping them really, really small, and unable to do any damage.

“We thought that was quite clever,” said Srubar. “Nature had already found a way to solve this problem.”

Concrete suffers from the same issue of ice crystal formation, which previous engineers had tried to mitigate by adding air bubbles. So Srubar and his team thought: Why not gather a bunch of this protein, and put it into concrete?

Unfortunately, these proteins found in nature don’t like to be removed from their natural environments. They unravel or disintegrate, like overcooked spaghetti.

Concrete is also extremely basic, with a pH commonly over 12 or 12.5. This is not a friendly environment for most molecules, and these proteins were no exception.

So Srubar and his graduate students used a synthetic molecule — polyvinyl alcohol, or PVA — that behaves exactly like these antifreeze proteins but is much more stable at a high pH, and combined it with another non-toxic, robust molecule — polyethylene glycol — often used in the pharmaceutical industry to prolong the circulation time of drugs in the body. This molecular combination of two polymers remained stable at a high pH and inhibited ice crystal growth.

Increased stressors

After water, concrete is the second most consumed material on Earth: two tons per person are manufactured each year. That’s a new New York City being built every 35 days for at least the next 32 years, according to Srubar.

“Its manufacture, use and disposal have significant environmental consequences. The production of cement alone, the powder that we use to make concrete, is responsible for about 8 percent of our global CO2 emissions.”

In order to meet Paris Agreement goals and keep global temperature increase well below 3.6 degrees Fahrenheit, the construction industry must decrease emissions by 40 percent by 2030 and eliminate them altogether by 2050. Climate change itself will only exacerbate stressors on concrete and aging infrastructure, with increased extreme temperatures and freeze and thaw cycles occurring more often in some geographic locations.

“The infrastructure which is designed today will be facing different climatic conditions in the future. In the coming decades, materials will be tested in a way they’ve never been before,” said Srubar. “So the concrete that we do make needs to last.”

Go to Source


Are You Ready for the New Supply Chain?

Big companies may come knocking at your door as supply chains adjust after COVID-19. Will your 3D printing business be ready? By Mike Moceri, founder and CEO, MakerOS Earlier this month during a Saturday press conference, Governor Andrew Cuomo of New York State said this: “China is, remarkably, the repository for all of these orders – […]

Author: 3D Printing Industry


Water reuse could be key for future of hydraulic fracturing

Over the coming decades, enough water will come out of the ground as a byproduct of oil production from unconventional reservoirs to theoretically offset the need for fresh water in hydraulic fracturing operations across many of the nation's large oil-producing areas. But while other industries, such as agriculture, might want to recycle some of that water for their own needs, water quality issues and the potential treatment costs mean it could be best to keep the water in the oil patch.

That is the takeaway from two new studies led by researchers at The University of Texas at Austin.

“We need to first maximize reuse of produced water for hydraulic fracturing,” said Bridget Scanlon, lead author on both of the studies and a senior research scientist with the UT Bureau of Economic Geology. “That’s really the message here.”

The first study was published in Environmental Science and Technology on Feb. 16. It quantifies for the first time how much water is produced with oil and natural gas operations compared with how much is needed for hydraulic fracturing. The authors also projected water demand for hydraulic fracturing needs and produced water over the life of the oil and gas plays, which span decades. A play is a group of oil or natural gas fields controlled by the same geology.

The second study was published in Science of the Total Environment on Feb. 3. It assesses the potential for using the water produced with oil and natural gas in other sectors, such as agriculture. It included researchers from New Mexico State University, The University of Texas at El Paso and Penn State University. It shows that current volumes of produced water are relatively small compared with irrigation water demands and will not solve water scarcity issues.

Dealing with water issues has become increasingly challenging with oil and natural gas development in unconventional shale reservoirs. Operators need significant amounts of water to hydraulically fracture shales to produce oil and natural gas, which can be an issue in areas where water is scarce. And large quantities of water are brought up from the reservoirs as a byproduct of production, posing a whole new set of issues for how to manage the produced water, particularly as science has shown that pumping it back into the deep subsurface is linked to seismic activity in some regions.

The studies can help inform significant public policy debates about water management related to oil and natural gas production in Texas, Oklahoma, New Mexico and other parts of the country, Scanlon said.

“The water volumes that are quoted vary widely. That’s why we did this study,” she said. “This really provides a quantitative analysis of hydraulic fracturing water demand and produced water volumes.”

The research looked at eight major plays across the U.S., including the Permian (Midland and Delaware), Bakken, Barnett, Eagle Ford, Fayetteville, Haynesville, Marcellus and Niobrara plays.

The scientists used historical data from 2009 to 2017 for all plays, and projections were developed for the life of the oil plays based on the technically recoverable oil using current technology. Oil plays produced much more water than natural gas plays, with the Permian Basin producing about 50 times as much water as the Marcellus in 2017. As for recycling potential for hydraulic fracturing, the research shows that in many cases there is plenty of water that could be put to good use. For instance, in the Delaware Basin, which is part of the larger Permian Basin in Texas, scientists found that projected produced water volumes will be almost four times as great as the amount of water required for hydraulic fracturing.

Managing this produced water will pose a significant challenge in the Delaware, which accounts for about 50% of the country’s projected oil production. Although the water could theoretically be used by other sectors, such as agriculture in arid West Texas, scientists said water quality issues and the cost to treat the briny water could be hurdles. In addition, if the water is highly treated to remove all the solids, large volumes of salt would be generated. The salt from the produced water in the Delaware Basin in 2017 alone could fill up to 3,000 Olympic swimming pools.
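The "3,000 Olympic swimming pools" comparison translates into an absolute volume once a pool size is fixed. A minimal sketch; the nominal 50 m by 25 m by 2 m pool is a standard assumption of mine, not a figure from the study:

```python
# Convert the "3,000 Olympic pools of salt" comparison into cubic meters
# (pool dimensions are the nominal competition minimum, assumed here).
pool_volume_m3 = 50 * 25 * 2   # 2,500 cubic meters per Olympic pool
pools = 3000                   # per the study: Delaware Basin salt, 2017

salt_volume_m3 = pools * pool_volume_m3
print(f"Salt volume: {salt_volume_m3:,} m^3")   # 7,500,000 m^3
```

Under these assumptions, a single year of produced water from one basin yields on the order of 7.5 million cubic meters of salt, which illustrates why full desalination of produced water creates a disposal problem of its own.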

“The ability to beneficially reuse produced waters in arid and semi-arid regions, which can be water stressed, is not the panacea that we were hoping,” said co-author Mark Engle, a professor at The University of Texas at El Paso. “There is definitely potential to do some good, but it will require cautious and smart approaches and policies.”


3D Printing Industry

Researchers create roadmap for 3D bioprinting

A worldwide collective of researchers and scientists from universities, institutions, and hospitals have come together to produce a roadmap for 3D bioprinting.  Published in Biofabrication, the paper details the current state of bioprinting, including recent advances of the technology in selected applications as well as the present developments and challenges. It also envisions how the […]

Author: Anas Essop