Categories
ScienceDaily

Fast calculation dials in better batteries

A simpler and more efficient way to predict performance will lead to better batteries, according to Rice University engineers.

That their method is 100,000 times faster than current modeling techniques is a nice bonus.

The analytical model developed by materials scientist Ming Tang and graduate student Fan Wang of Rice University’s Brown School of Engineering doesn’t require complex numerical simulation to guide the selection and design of battery components and how they interact.

The simplified model developed at Rice — freely accessible online — does the heavy lifting with an accuracy within 10% of more computationally intensive algorithms. Tang said it will allow researchers to quickly evaluate the rate capability of batteries that power the planet.

The results appear in the open-access journal Cell Reports Physical Science.

There was a clear need for the updated model, Tang said.

“Almost everyone who designs and optimizes battery cells uses a well-established approach called P2D (for pseudo-two dimensional) simulations, which are expensive to run,” Tang said. “This especially becomes a problem if you want to optimize battery cells, because they have many variables and parameters that need to be carefully tuned to maximize the performance.

“What motivated this work is our realization that we need a faster, more transparent tool to accelerate the design process, and offer simple, clear insights that are not always easy to obtain from numerical simulations,” he said.

Battery optimization generally involves what the paper calls a “perpetual trade-off” between energy density (how much a battery can store) and power density (how quickly that energy can be released), both of which depend on the materials, their configuration and internal structures such as porosity.

“There are quite a few adjustable parameters associated with the structure that you need to optimize,” Tang said. “Typically, you need to make tens of thousands of calculations and sometimes more to search the parameter space and find the best combination. It’s not impossible, but it takes a really long time.”
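
To make that concrete, the sketch below sweeps a coarse grid of electrode porosities and thicknesses and ranks them with a stand-in figure of merit. The rate_capability function here is a hypothetical placeholder, not the published Rice model; the point is only that a closed-form expression can be swept over thousands of design points in well under a second, which is what makes exhaustive searches of the kind Tang describes practical.

```python
# Hypothetical sketch of a design-space sweep using a fast analytical surrogate.
# rate_capability() is a placeholder figure of merit, NOT the published Rice model.
import itertools

def rate_capability(porosity: float, thickness_um: float) -> float:
    """Placeholder: favors thin, moderately porous electrodes."""
    return porosity * (1.0 - porosity) / thickness_um

best = None
for porosity, thickness in itertools.product(
        [p / 100 for p in range(20, 61)],   # 20-60 % porosity in 1 % steps
        range(20, 301, 2)):                 # electrode thickness, 20-300 micrometers
    score = rate_capability(porosity, thickness)
    if best is None or score > best[0]:
        best = (score, porosity, thickness)

print(f"best score {best[0]:.4g} at porosity {best[1]:.2f}, thickness {best[2]} um")
```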

He said the Rice model could be easily implemented in such common software as MATLAB and Excel, and even on calculators.

To test the model, the researchers let it search for the optimal porosity and thickness of an electrode in common full- and half-cell batteries. In the process, they discovered that electrodes with “uniform reaction” behavior such as nickel-manganese-cobalt and nickel-cobalt-aluminum oxide are best for applications that require thick electrodes to increase the energy density.

They also found that battery half-cells (with only one electrode) have inherently better rate capability, meaning their performance is not a reliable indicator of how electrodes will perform in the full cells used in commercial batteries.

The study is related to the Tang lab’s attempts at understanding and optimizing the relationship between microstructure and performance of battery electrodes, the topic of several recent papers that showed how defects in cathodes can speed lithium absorption and how lithium cells can be pushed too far in the quest for speed.

Story Source:

Materials provided by Rice University. Note: Content may be edited for style and length.

Categories
ScienceDaily

Study shows demolishing vacant houses can have positive effect on neighbor maintenance

New research suggests that demolishing abandoned houses may lead nearby property owners to better maintain their homes.

This study, by Daniel Kuhlmann, assistant professor of community and regional planning at Iowa State University, was published recently in the peer-reviewed Journal of Planning Education and Research. He examined whether the demolition of dilapidated and abandoned housing affects the maintenance decisions of nearby homeowners.

In the wake of the 2008 recession, many cities experienced an increase in the number of vacant and abandoned houses. Some cities, such as Cleveland and Detroit, received federal funding to acquire vacant properties through land bank programs. While land banks were able to remodel and sell some of these properties, for the most distressed houses, demolition was the only option.

Kuhlmann wondered how effective those policies and demolitions had been.

“Demolition programs have two goals. The first is to get nuisance properties out of neighborhoods because they can be dangerous,” he said. “The second goal is to help stabilize declining neighborhoods.”

Past research showed that demolitions have little effect on neighboring property values. But what about the physical condition of nearby homes?

Kuhlmann looked at changes to houses over time, including the presence of boarded or broken windows, dumping or yard debris, and damage to roofs, paint, siding, gutters and porches.

Using the results of two property condition surveys and administrative records on demolitions in some of the most distressed neighborhoods in Cleveland, Kuhlmann found properties near demolitions were more likely to show signs of improvement between the two surveys and less likely to deteriorate themselves.
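
As a rough illustration of the comparison described above, one can tabulate, for each property, whether its condition improved or deteriorated between the two surveys and then split those tallies by proximity to a demolition. The records, scoring scale and field layout below are entirely invented; they only sketch the shape of the analysis.

```python
# Toy illustration of the two-survey comparison; all data are invented.
# Lower score = better condition on this made-up scale.
records = [
    # (condition in survey 1, condition in survey 2, near a demolition?)
    (3, 2, True), (4, 4, True), (4, 3, True),
    (2, 3, False), (3, 4, False), (3, 3, False),
]

def share(group, predicate):
    return sum(predicate(r) for r in group) / len(group) if group else float("nan")

near = [r for r in records if r[2]]
far = [r for r in records if not r[2]]
improved = lambda r: r[1] < r[0]
deteriorated = lambda r: r[1] > r[0]

print("improved:     near", share(near, improved), "| far", share(far, improved))
print("deteriorated: near", share(near, deteriorated), "| far", share(far, deteriorated))
```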

Kuhlmann recognizes the longstanding disinvestment in some of these neighborhoods, many of which are doubly affected by racial inequities. This fact makes studies like Kuhlmann’s “challenging because even if distressed housing contributes to decline, it is certainly also a symptom of it.” He suggests that future research should look at demolitions’ long-term effects on a neighborhood.

“Community-wide, residents tend to see demolitions as a good idea in specific instances, but they would like larger investments,” he said. “It can’t end with demolitions.”

These findings are useful for planners, policymakers and academics concerned about the damage caused by abandoned and deteriorating housing, Kuhlmann says.

“My research in general focuses on the extremes of decline, but I do think these types of properties exist in more cities than we might expect,” he said.

Story Source:

Materials provided by Iowa State University. Note: Content may be edited for style and length.

Categories
ScienceDaily

Understanding of relaxor ferroelectric properties could lead to many advances

A new fundamental understanding of polymeric relaxor ferroelectric behavior could lead to advances in flexible electronics, actuators and transducers, energy storage, piezoelectric sensors and electrocaloric cooling, according to a team of researchers at Penn State and North Carolina State.

Researchers have debated the theory behind the mechanism of relaxor ferroelectrics for more than 50 years, said Qing Wang, professor of materials science and engineering at Penn State. While relaxor ferroelectrics are well-recognized, fundamentally fascinating and technologically useful materials, a Nature article commented in 2006 that they were heterogeneous, hopeless messes.

Without a fundamental understanding of the mechanism, little progress has been made in designing new relaxor ferroelectric materials. The new understanding, which relies on both experiment and theoretical modeling, shows that relaxor ferroelectricity in polymers arises from chain conformation disorder induced by chirality. Chirality is a feature of many organic molecules whose mirror-image forms cannot be superimposed on one another. The relaxor mechanism in polymers is thus very different from the one proposed for ceramics, whose relaxor behavior originates from chemical disorder.

“Different from ferroelectrics, relaxors exhibit no long-range large ferroelectric domains but disordered local polar domains,” Wang explained. “The research in relaxor polymeric materials has been challenging owing to the presence of multiple phases such as crystalline, amorphous and crystalline-amorphous interfacial area in polymers.”

In energy storage capacitors, relaxors can deliver a much higher energy density than normal ferroelectrics, which have high ferroelectric loss that turns into waste heat. In addition, relaxors can generate larger strain under the applied electric fields and have a much better efficiency of energy conversion than normal ferroelectrics, which makes them preferred materials for actuators and sensors.

Penn State has a long history of discovery in ferroelectric materials. Qiming Zhang, professor of electrical engineering at Penn State, discovered the first relaxor ferroelectric polymer in 1998, when he used an electron beam to irradiate a ferroelectric polymer and found it had become a relaxor. Zhang along with Qing Wang also made seminal discoveries in the electrocaloric effect using relaxor polymers, which allows for solid state cooling without the use of noxious gases and uses much less energy than conventional refrigeration.

“The new understanding of relaxor behavior would open up unprecedented opportunities for us to design relaxor ferroelectric polymers for a range of energy storage and conversion applications,” said Wang.

Story Source:

Materials provided by Penn State. Note: Content may be edited for style and length.

Categories
3D Printing Industry

European partners set to develop handheld eye scanner containing 3D printed optics

The Medical University of Vienna is set to lead a five-year European project which will see the development of a partly 3D printed mobile ophthalmic (eye-related) imaging device. The engineers and scientists working on the handheld device aim to miniaturize photonic chip technology, bringing down its cost in the process. The project partners hope […]

Author: Kubi Sertoglu

Categories
ScienceDaily

AI techniques in medical imaging may lead to incorrect diagnoses

Machine learning and AI are highly unstable in medical image reconstruction, and may lead to false positives and false negatives, a new study suggests.

A team of researchers, led by the University of Cambridge and Simon Fraser University, designed a series of tests for medical image reconstruction algorithms based on AI and deep learning, and found that these techniques result in myriad artefacts, or unwanted alterations in the data, among other major errors in the final images. The effects were typically not present in non-AI based imaging techniques.

The phenomenon was widespread across different types of artificial neural networks, suggesting that the problem will not be easily remedied. The researchers caution that relying on AI-based image reconstruction techniques to make diagnoses and determine treatment could ultimately do harm to patients. Their results are reported in the Proceedings of the National Academy of Sciences.

“There’s been a lot of enthusiasm about AI in medical imaging, and it may well have the potential to revolutionise modern medicine: however, there are potential pitfalls that must not be ignored,” said Dr Anders Hansen from Cambridge’s Department of Applied Mathematics and Theoretical Physics, who led the research with Dr Ben Adcock from Simon Fraser University. “We’ve found that AI techniques are highly unstable in medical imaging, so that small changes in the input may result in big changes in the output.”

A typical MRI scan can take anywhere between 15 minutes and two hours, depending on the size of the area being scanned and the number of images being taken. The longer the patient spends inside the machine, the higher the resolution of the final image. However, it is desirable to limit the time patients spend inside the machine, both to reduce the risk to individual patients and to increase the overall number of scans that can be performed.

Using AI techniques to improve the quality of images from MRI scans or other types of medical imaging is an attractive possibility for solving the problem of getting the highest quality image in the smallest amount of time: in theory, AI could take a low-resolution image and make it into a high-resolution version. AI algorithms ‘learn’ to reconstruct images based on training from previous data, and through this training procedure aim to optimise the quality of the reconstruction. This represents a radical change compared to classical reconstruction techniques that are solely based on mathematical theory without dependency on previous data. In particular, classical techniques do not learn.

Any AI algorithm needs two things to be reliable: accuracy and stability. An AI will usually classify an image of a cat as a cat, but tiny, almost invisible changes in the image might cause the algorithm to instead classify the cat as a truck or a table, for instance. In this example of image classification, the one thing that can go wrong is that the image is incorrectly classified. However, when it comes to image reconstruction, such as that used in medical imaging, there are several things that can go wrong. For example, details like a tumour may get lost or may falsely be added. Details can be obscured and unwanted artefacts may occur in the image.

“When it comes to critical decisions around human health, we can’t afford to have algorithms making mistakes,” said Hansen. “We found that the tiniest corruption, such as may be caused by a patient moving, can give a very different result if you’re using AI and deep learning to reconstruct medical images — meaning that these algorithms lack the stability they need.”

Hansen and his colleagues from Norway, Portugal, Canada and the UK designed a series of tests to find the flaws in AI-based medical imaging systems, including MRI, CT and NMR. They considered three crucial issues: instabilities associated with tiny perturbations, or movements; instabilities with respect to small structural changes, such as a brain image with or without a small tumour; and instabilities with respect to changes in the number of samples.
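
The first of those tests can be pictured as follows: perturb the measured data slightly, reconstruct both the clean and the perturbed input, and compare how much the outputs differ relative to how much the inputs differ. In the sketch below, reconstruct() is a hypothetical stand-in for whichever trained reconstruction network is under test; the stability check around it is the point.

```python
# Sketch of a tiny-perturbation stability check in the spirit of the tests described.
# reconstruct() is a hypothetical placeholder for a trained reconstruction network.
import numpy as np

def reconstruct(measurements: np.ndarray) -> np.ndarray:
    """Placeholder: a real test would call the AI reconstruction method under study."""
    return measurements.copy()

rng = np.random.default_rng(0)
y = rng.standard_normal((128, 128))          # stand-in for measured data (e.g. k-space)
delta = 1e-3 * rng.standard_normal(y.shape)  # tiny perturbation, e.g. patient movement

x_clean = reconstruct(y)
x_perturbed = reconstruct(y + delta)

# A stable method keeps the relative output change comparable to the relative input change.
rel_in = np.linalg.norm(delta) / np.linalg.norm(y)
rel_out = np.linalg.norm(x_perturbed - x_clean) / np.linalg.norm(x_clean)
print(f"relative input change {rel_in:.2e} -> relative output change {rel_out:.2e}")
```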

They found that certain tiny movements led to myriad artefacts in the final images, details were blurred or completely removed, and that the quality of image reconstruction would deteriorate with repeated subsampling. These errors were widespread across the different types of neural networks.

According to the researchers, the most worrying errors are the ones that radiologists might interpret as medical issues, as opposed to those that can easily be dismissed due to a technical error.

“We developed the test to verify our thesis that deep learning techniques would be universally unstable in medical imaging,” said Hansen. “The reasoning for our prediction was that there is a limit to how good a reconstruction can be given restricted scan time. In some sense, modern AI techniques break this barrier, and as a result become unstable. We’ve shown mathematically that there is a price to pay for these instabilities, or to put it simply: there is still no such thing as a free lunch.”

The researchers are now focusing on providing the fundamental limits to what can be done with AI techniques. Only when these limits are known will we be able to understand which problems can be solved. “Trial and error-based research would never discover that the alchemists could not make gold: we are in a similar situation with modern AI,” said Hansen. “These techniques will never discover their own limitations. Such limitations can only be shown mathematically.”

Categories
ScienceDaily

Water may look like a simple liquid; however, it is anything but simple to analyze

An international team of scientists led by Professor Martina Havenith from Ruhr-Universität Bochum (RUB) has been able to shed new light on the properties of water at the molecular level. In particular, they were able to accurately describe the interactions between three water molecules, which contribute significantly to the energy landscape of water. The research could pave the way to better understanding and predicting the behaviour of water under different conditions, even extreme ones. The results were published online in the journal Angewandte Chemie on 19 April 2020.

Interactions via vibrations

Although water looks like a simple liquid at first glance, it has many unusual properties, one of them being that it is less dense when frozen than when liquid. In the simplest description, a liquid is characterised by the interactions between directly neighbouring molecules, and for most liquids that is sufficient, but not for water: the interactions in water dimers account for 75 per cent of the energy that keeps water together. Martina Havenith, head of the Bochum-based Chair of Physical Chemistry II and spokesperson for the Ruhr Explores Solvation (Resolv) Cluster of Excellence, and her colleagues from Emory University in Atlanta, US, recently published an accurate description of the interactions in the water dimer. To get access to the cooperative interactions, which make up the remaining 25 per cent of the total interaction, the water trimer had to be investigated.

Now, the team led by Martina Havenith, in collaboration with colleagues from Emory University and the University of Mississippi, US, has been able to describe for the first time in an accurate way the interaction energy among three water molecules. They tested modern theoretical descriptions against the spectroscopic fingerprint of these intermolecular interactions.

Obstacles for experimental research

For more than 40 years, scientists have developed computational models and simulations to describe the energies involved in the water trimer. Experiments have been less successful, despite some pioneering insights from gas-phase studies, and they rely on spectroscopy. The technique works by irradiating a water sample and recording how much light is absorbed. The resulting pattern reflects the different excitations of intermolecular motions involving more than one water molecule. Unfortunately, to obtain these spectroscopic fingerprints for water dimers and trimers, the sample must be irradiated in the terahertz frequency region, and high-power laser sources for that region have long been lacking.
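
For orientation, terahertz frequencies correspond to far-infrared photon energies of only a few millielectronvolts, or a few tens of wavenumbers per terahertz, which is where these low-frequency intermolecular motions absorb. The snippet below is a generic unit conversion, not taken from the paper:

```python
# Generic unit conversion: terahertz frequency to wavenumber (cm^-1) and photon energy (meV).
PLANCK = 6.62607015e-34   # Planck constant, J*s
C_CM = 2.99792458e10      # speed of light, cm/s (for wavenumbers in cm^-1)
EV = 1.602176634e-19      # J per eV

def thz_to_wavenumber(f_thz: float) -> float:
    return f_thz * 1e12 / C_CM

def thz_to_mev(f_thz: float) -> float:
    return PLANCK * f_thz * 1e12 / EV * 1e3

for f in (1.0, 2.0, 5.0):
    print(f"{f:.0f} THz = {thz_to_wavenumber(f):.1f} cm^-1 = {thz_to_mev(f):.2f} meV")
```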

This technical gap has been closed only recently. In the current publication, the RUB scientists used the free-electron lasers at Radboud University in Nijmegen in the Netherlands, which provide high power in the terahertz frequency region. The laser light was applied to tiny droplets of superfluid helium cooled to an extremely low temperature of minus 272.75 degrees Celsius. These droplets can pick up water molecules one by one, allowing small aggregates such as dimers and trimers to be isolated. In this way the scientists were able to irradiate exactly the molecules they wanted and to acquire the first comprehensive spectrum of the water trimer in the terahertz frequency region.

The experimental observations of the intermolecular vibrations were compared with, and interpreted using, high-level quantum calculations. In this way the scientists could analyse the spectrum and assign up to six different intermolecular vibrations.

Story Source:

Materials provided by Ruhr-University Bochum. Note: Content may be edited for style and length.

Categories
ScienceDaily

Lipid gradient that keeps your eyes wet

New understandings of how lipids function within tears could lead to better drugs for treating dry eye disease.

A new approach has given Hokkaido University researchers insight into the synthesis and functions of lipids found in tears. Their findings, published in the journal eLife, could help the search for new treatments for dry eye disease.

The film of tears covering the eye’s surface is vital for eliminating foreign objects, providing oxygen and nutrients to the eye’s outer tissues, and reducing friction with the eyelid. The film is formed of an outer lipid layer and an inner liquid layer. The outer lipid layer, which itself consists of two sublayers, prevents water from evaporating out of the liquid layer. Dry eye disease develops when the glands that produce these lipids malfunction. However, it has remained unclear how two generally incompatible layers, water and lipid, can form and maintain a stable tear film.

Hokkaido University biochemist Akio Kihara and colleagues wanted to understand the functions of a subclass of lipids called OAHFAs ((O-acyl)-ω-hydroxy fatty acids) that are present in the inner lipid sublayer (the amphiphilic lipid sublayer) just above the liquid layer of the tear film. Each OAHFA molecule has both a polar and a non-polar end, giving it affinity for both water and lipid.

To do this, they turned off a gene called Cyp4f39 in mice, which is known to be involved in ω-hydroxy fatty acid synthesis. Previous attempts at studying the gene’s functions in this way had led to neonatal death in mice because knocking it out impaired the skin’s protective role. The team therefore used a method that turns the gene off in all tissues except the skin.

The mice were found to have damaged corneas and unstable tear films, both indicative of dry eyes. Further analyses showed that these mice were lacking OAHFAs and their derivatives in their tear films. Interestingly, the scientists also discovered that the OAHFA derivatives have polarities intermediate between OAHFAs and other lipids in the tear film. This strongly suggests that those lipids together form a polarity gradient that plays an important role in connecting the tear film’s inner liquid layer and outer lipid layer, helping the film spread uniformly over the surface of the eye.

“Drugs currently used in dry eye disease target the liquid layer of the tear film, but there aren’t any drugs that target its lipid layer,” says Akio Kihara. “Since most cases of dry eye disease are caused by abnormalities in the lipid layer, eye drops containing OAHFAs and their derivatives could be an effective treatment.”

Further studies are required to fully understand the functions and synthesis of OAHFAs.

Story Source:

Materials provided by Hokkaido University. Note: Content may be edited for style and length.

Categories
ScienceDaily

Printing complex cellulose-based objects

Trees and other plants lead the way: they produce cellulose themselves and use it to build complex structures with extraordinary mechanical properties. That makes cellulose attractive to materials scientists who are seeking to manufacture sustainable products with special functions. However, processing materials into complex structures with high cellulose content is still a big challenge for materials scientists.

A group of researchers at ETH Zurich and Empa have now found a way to process cellulose using 3D printing so as to create objects of almost unlimited complexity that contain high levels of cellulose particles.

Print first, then densify

To do this, the researchers combined printing via the direct ink writing (DIW) method with a subsequent densification process to increase the cellulose content of the printed object to a volume fraction of 27 percent. Their work was recently published in the journal Advanced Functional Materials.

The ETH and Empa researchers are admittedly not the first to process cellulose with the 3D printer. However, previous approaches, which also used cellulose-containing ink, have not been able to produce solid objects with such a high cellulose content and complexity.

The composition of the printing ink is extremely simple. It consists only of water in which cellulose particles and fibres measuring a few hundred nanometres have been dispersed. The cellulose content is between 6 and 14 percent of the ink volume.

Solvent bath densifies cellulose

The ETH researchers used the following trick to densify the printed cellulose products: after printing the water-based cellulose ink, they put the objects in a bath of organic solvent. Because cellulose is poorly compatible with organic solvents, the particles tend to aggregate. This results in shrinkage of the printed part and consequently in a significant increase in the relative amount of cellulose particles within the material.

In a further step, the scientists soaked the objects in a solution containing a photosensitive plastic precursor. Once the solvent is removed by evaporation, the precursor infiltrates the cellulose-based scaffold. Next, to convert the precursor into a solid plastic, they exposed the objects to UV light. This produced a composite material with a cellulose content of the aforementioned 27 volume percent. “The densification process allowed us to start out with a 6 to 14 percent in volume of water-cellulose mixture and finish with a composite object that exhibits up to 27 volume percent of cellulose nanocrystals,” says Hausmann.
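
The arithmetic behind that densification figure is straightforward if one assumes the cellulose volume itself is conserved while the printed part shrinks: the cellulose volume fraction then scales inversely with the remaining volume, so reaching 27 volume percent from a 14 percent ink implies shrinking to roughly half of the printed volume. The back-of-the-envelope sketch below ignores the subsequent resin infiltration, so the numbers are only indicative:

```python
# Assuming the cellulose volume is conserved while the part shrinks in the solvent bath,
# the cellulose volume fraction is inversely proportional to the remaining volume.
def required_volume_ratio(initial_fraction: float, target_fraction: float) -> float:
    """Final volume / printed volume needed to reach the target cellulose fraction."""
    return initial_fraction / target_fraction

for phi0 in (0.06, 0.14):
    ratio = required_volume_ratio(phi0, 0.27)
    print(f"ink at {phi0:.0%} cellulose -> shrink to {ratio:.0%} of printed volume to reach 27%")
```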

Elasticity can be predetermined

As if that were not enough, depending on the type of plastic precursor used, the researchers can adjust the mechanical properties of the printed objects, such as their elasticity or strength. This allows them to create hard or soft parts, depending on the application.

Using this method, the researchers were able to manufacture various composite objects, including some of a delicate nature, such as a type of flame sculpture that is only 1 millimetre thick. However, densification of printed parts with wall thicknesses greater than five millimetres led to distortion of the structure, because the surface of the densifying object contracts faster than its core.

Similar fibre orientation to wood

The researchers investigated their objects using X-ray analyses and mechanical tests. Their findings showed that the cellulose nanocrystals are aligned similarly to those present in natural materials. “This means that we can control the cellulose microstructure of our printed objects to manufacture materials whose microstructure resembles those of biological systems, such as wood,” says Rafael Libanori, senior assistant in ETH Professor André Studart’s research group.

The printed parts are still small — laboratory scale you could say. But there are many potential applications, from customised packaging to cartilage-replacement implants for ears. The researchers have also printed an ear based on a human model. Until such a product could be used in clinical practice, however, more research and, above all, clinical trials are needed.

This kind of cellulose technology could also be of interest to the automotive industry. Japanese carmakers have already built a prototype of a sports car for which the body parts are made almost entirely of cellulose-based materials.

Story Source:

Materials provided by ETH Zurich. Original written by Peter Rüegg. Note: Content may be edited for style and length.

Categories
3D Printing Industry

UC Riverside to lead scalable quantum computing project using 3D printed ion traps

UC Riverside (UCR) is set to lead a project focused on enabling scalable quantum computing after winning a $3.75 million Multicampus-National Lab Collaborative Research and Training Award. The collaborative effort will see contributions from UC Berkeley, UCLA and UC Santa Barbara, with UCR acting as project coordinator. Scalable quantum computing Quantum computing is currently in […]

Author: Kubi Sertoglu

Categories
UnrealEngine

Forging new paths for filmmakers on The Mandalorian

Having spent the last 15 years as a game engineer and technical lead at Epic Games, I’ve learned firsthand that rapid feedback loops are critical for successful creative collaborations. The quick iteration, spontaneity, and sense of shared purpose that comes from working closely together is irreplaceable. At Epic, we go to great lengths to give our creative teams as much time together as possible.
 

So when I began to learn more about filmmaking, it was surprising to realize that it’s common for critical departments on a traditional visual effects-heavy production to be decentralized. Weeks or months can pass between the on-set work of key creatives and the post-production work to fully realize the vision. This seemed like an opportunity where real-time game engine technology could make a real difference.
 
Fortunately, Jon Favreau is way ahead of the curve. His pioneering vision for filming The Mandalorian presented an opportunity to turn the conventional filmmaking paradigm on its head.

When we first met with Jon, he was excited to bring more real-time interactivity and collaboration back into the production process. It was clear he was willing to experiment with new workflows and take risks to achieve that goal. Ultimately, these early talks evolved into a groundbreaking virtual production methodology: shooting the series on a stage surrounded by massive LED walls displaying dynamic digital sets, with the ability to react to and manipulate this digital content in real time during live production. Working together with ILM, we drew up plans for how the pieces would fit together. The result was an ambitious new system and a suite of technologies to be deployed at a scale that had never been attempted for the fast-paced nature of episodic television production.
By the time shooting began, Unreal Engine was running on four synchronized PCs to drive the pixels on the LED walls in real time. At the same time, three Unreal operators could simultaneously manipulate the virtual scene, lighting, and effects on the walls. The crew inside the LED volume was also able to control the scene remotely from an iPad, working side-by-side with the director and DP. This virtual production workflow was used to film more than half of The Mandalorian Season 1, enabling the filmmakers to eliminate location shoots, capture a significant amount of complex VFX shots with accurate lighting and reflections in-camera, and iterate on scenes together in real time while on set. The combination of Unreal Engine’s real-time capabilities and the immersive LED screens enabled a creative flexibility previously unimaginable.

The Mandalorian was not only an inspiring challenge, but a powerful test bed for developing production-proven tools that benefit all Unreal Engine users. Our multi-user collaboration tools were a big part of this, along with the nDisplay system to allow a cluster of machines to synchronously co-render massive images in real time, and our live compositing system that enabled the filmmakers to see real-time previews. We also focused on abilities to interface with the engine from external sources, such as recording take data into Sequencer or manipulating the LED wall environment from the iPad. All of these features are available now in 4.24 or coming soon in 4.25.

Ultimately, being part of The Mandalorian Season 1 was one of the highlights of my career – the scope of what we were able to achieve with real-time technology was unlike anything else I’ve worked on. Giving filmmakers like Jon Favreau, executive producer and director Dave Filoni, visual effects supervisor Richard Bluff, cinematographers Greig Fraser and Barry Baz Idoine, and the episodic directors the freedom and opportunities to make creative decisions on the fly, fostering live collaboration across all departments, and letting everyone see their full creative vision realized in mere seconds, was a truly gratifying experience. ILM, Golem Creations, Lux Machina, Fuse, Profile, ARRI, and all of the amazing collaborators on this project were deeply inspiring to work with and I’m proud to have been a part of it. But what’s even more exciting is that the techniques and technology we developed on The Mandalorian are only the tip of the iceberg – I can’t wait to see what the future has in store.

Author: Jeff Farris