
Fast calculation dials in better batteries

A simpler and more efficient way to predict performance will lead to better batteries, according to Rice University engineers.

That their method is 100,000 times faster than current modeling techniques is a nice bonus.

The analytical model developed by materials scientist Ming Tang and graduate student Fan Wang of Rice University’s Brown School of Engineering doesn’t require complex numerical simulation to guide the selection and design of battery components and how they interact.

The simplified model developed at Rice — freely accessible online — does the heavy lifting with an accuracy within 10% of more computationally intensive algorithms. Tang said it will allow researchers to quickly evaluate the rate capability of batteries that power the planet.

The results appear in the open-access journal Cell Reports Physical Science.

There was a clear need for the updated model, Tang said.

“Almost everyone who designs and optimizes battery cells uses a well-established approach called P2D (for pseudo-two dimensional) simulations, which are expensive to run,” Tang said. “This especially becomes a problem if you want to optimize battery cells, because they have many variables and parameters that need to be carefully tuned to maximize the performance.

“What motivated this work is our realization that we need a faster, more transparent tool to accelerate the design process, and offer simple, clear insights that are not always easy to obtain from numerical simulations,” he said.

Battery optimization generally involves what the paper calls a “perpetual trade-off” between energy density (how much a battery can store) and power density (how quickly that energy can be released), both of which depend on the materials, their configurations and internal structures such as porosity.

“There are quite a few adjustable parameters associated with the structure that you need to optimize,” Tang said. “Typically, you need to make tens of thousands of calculations and sometimes more to search the parameter space and find the best combination. It’s not impossible, but it takes a really long time.”

He said the Rice model could be easily implemented in such common software as MATLAB and Excel, and even on calculators.
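
To make that concrete, here is a minimal sketch, in Python, of the kind of brute-force design sweep the analytical model enables. The scoring function below is a made-up placeholder standing in for the Rice model, whose actual closed-form expressions are given in the open-access paper; only the overall workflow (evaluate a cheap formula over a grid of porosities and thicknesses, then pick the best combination) is the point.

```python
# Illustrative sketch only: brute-force sweep over electrode porosity and
# thickness. The scoring function is placeholder physics, NOT the Rice model.
import numpy as np

def capacity_at_rate(porosity, thickness_um, c_rate=1.0):
    """Toy trade-off: thicker, denser electrodes store more energy but
    deliver less of it at a high discharge rate (placeholder only)."""
    stored = (1.0 - porosity) * thickness_um                    # arbitrary units
    transport_penalty = np.exp(-c_rate * thickness_um * (1.0 - porosity) / 50.0)
    return stored * transport_penalty

porosities = np.linspace(0.2, 0.6, 100)
thicknesses = np.linspace(20.0, 200.0, 100)                     # micrometers
P, T = np.meshgrid(porosities, thicknesses)
score = capacity_at_rate(P, T)                                  # 10,000 evaluations

best = np.unravel_index(np.argmax(score), score.shape)
print(f"best porosity ~ {P[best]:.2f}, best thickness ~ {T[best]:.0f} um")
```

Because each evaluation of an analytical formula costs only a handful of arithmetic operations, a sweep like this is essentially instantaneous, whereas running a P2D simulation at every grid point would be far more expensive.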

To test the model, the researchers let it search for the optimal porosity and thickness of an electrode in common full- and half-cell batteries. In the process, they discovered that electrodes with “uniform reaction” behavior such as nickel-manganese-cobalt and nickel-cobalt-aluminum oxide are best for applications that require thick electrodes to increase the energy density.

They also found that battery half-cells (with only one electrode) have inherently better rate capability, meaning their performance is not a reliable indicator of how electrodes will perform in the full cells used in commercial batteries.

The study is related to the Tang lab’s attempts at understanding and optimizing the relationship between microstructure and performance of battery electrodes, the topic of several recent papers that showed how defects in cathodes can speed lithium absorption and how lithium cells can be pushed too far in the quest for speed.

Story Source:

Materials provided by Rice University. Note: Content may be edited for style and length.


An unexpected origin story for a lopsided black hole merger

A lopsided merger of two black holes may have an oddball origin story, according to a new study by researchers at MIT and elsewhere.

The merger was first detected on April 12, 2019 as a gravitational wave that arrived at the detectors of both LIGO (the Laser Interferometer Gravitational-wave Observatory), and its Italian counterpart, Virgo. Scientists labeled the signal as GW190412 and determined that it emanated from a clash between two David-and-Goliath black holes, one three times more massive than the other. The signal marked the first detection of a merger between two black holes of very different sizes.

Now the new study, published today in the journal Physical Review Letters, shows that this lopsided merger may have originated through a very different process compared to how most mergers, or binaries, are thought to form.

It’s likely that the more massive of the two black holes was itself a product of a prior merger between two parent black holes. The Goliath that spun out of that first collision may have then ricocheted around a densely packed “nuclear cluster” before merging with the second, smaller black hole — a raucous event that sent gravitational waves rippling across space.

GW190412 may then be a second generation, or “hierarchical” merger, standing apart from other first-generation mergers that LIGO and Virgo have so far detected.

“This event is an oddball the universe has thrown at us — it was something we didn’t see coming,” says study coauthor Salvatore Vitale, an assistant professor of physics at MIT and a LIGO member. “But nothing happens just once in the universe. And something like this, though rare, we will see again, and we’ll be able to say more about the universe.”

Vitale’s coauthors are Davide Gerosa of the University of Birmingham and Emanuele Berti of Johns Hopkins University.

A struggle to explain

There are two main ways in which black hole mergers are thought to form. The first is known as a common envelope process, where two neighboring stars, after billions of years, explode to form two neighboring black holes that eventually share a common envelope, or disk of gas. After another few billion years, the black holes spiral in and merge.

“You can think of this like a couple being together all their lives,” Vitale says. “This process is suspected to happen in the disc of galaxies like our own.”

The other common path by which black hole mergers form is via dynamical interactions. Imagine, in place of a monogamous environment, a galactic rave, where thousands of black holes are crammed into a small, dense region of the universe. When two black holes start to partner up, a third may knock the couple apart in a dynamical interaction that can repeat many times over, before a pair of black holes finally merges.

In both the common envelope process and the dynamical interaction scenario, the merging black holes should have roughly the same mass, unlike the lopsided mass ratio of GW190412. They should also have little to no spin, whereas GW190412 has a surprisingly high spin.

“The bottom line is, both these scenarios, which people traditionally think are ideal nurseries for black hole binaries in the universe, struggle to explain the mass ratio and spin of this event,” Vitale says.

Black hole tracker

In their new paper, the researchers used two models to show that it is very unlikely that GW190412 came from either a common envelope process or a dynamical interaction.

They first modeled the evolution of a typical galaxy using STAR TRACK, a simulation that tracks galaxies over billions of years, starting with the coalescing of gas and proceeding to the way stars take shape and explode, and then collapse into black holes that eventually merge. The second model simulates random, dynamical encounters in globular clusters — dense concentrations of stars around most galaxies.

The team ran both simulations multiple times, tuning the parameters and studying the properties of the black hole mergers that emerged. For those mergers that formed through a common envelope process, a merger like GW190412 was very rare, cropping up only after a few million events. Dynamical interactions were slightly more likely to produce such an event, after a few thousand mergers.

However, GW190412 was detected by LIGO and Virgo after only 50 other detections, suggesting that it likely arose through some other process.

“No matter what we do, we cannot easily produce this event in these more common formation channels,” Vitale says.

The process of hierarchical merging may better explain GW190412’s lopsided mass ratio and its high spin. If one black hole was a product of a previous pairing of two parent black holes of similar mass, it would itself be more massive than either parent, and would later significantly overshadow its first-generation partner, creating a high mass ratio in the final merger.

A hierarchical process could also generate a merger with a high spin: The parent black holes, in their chaotic merging, would spin up the resulting black hole, which would then carry this spin into its own ultimate collision.

“You do the math, and it turns out the leftover black hole would have a spin which is very close to the total spin of this merger,” Vitale explains.

No escape

If GW190412 indeed formed through hierarchical merging, Vitale says the event could also shed light on the environment in which it formed. The team found that if the larger of the two black holes formed from a previous collision, that collision likely generated a huge amount of energy that not only spun out a new black hole, but kicked it across some distance.

“If it’s kicked too hard, it would just leave the cluster and go into the empty interstellar medium, and not be able to merge again,” Vitale says.

If the object was able to merge again (in this case, to produce GW190412), it would mean the kick that it received was not enough to escape the stellar cluster in which it formed. If GW190412 indeed is a product of hierarchical merging, the team calculated that it would have occurred in an environment with an escape velocity higher than 150 kilometers per second. For perspective, the escape velocity of most globular clusters is about 50 kilometers per second.
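
For a rough sense of the scales involved (the numbers below are illustrative, not values from the study), the escape velocity of a star cluster of mass M and radius R is

$$ v_{\mathrm{esc}} = \sqrt{\frac{2GM}{R}} $$

Plugging in a typical globular cluster, with roughly 5 × 10⁵ solar masses spread over a few parsecs, gives a few tens of kilometers per second, consistent with the figure quoted above, while a nuclear star cluster with around 10⁷ solar masses on a similar scale gives roughly 170 kilometers per second, comfortably above the 150 kilometers per second threshold.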

This means that whatever environment GW190412 arose from had an immense gravitational pull, and the team believes that such an environment could have been either the disk of gas around a supermassive black hole, or a “nuclear cluster” — an incredibly dense region of the universe, packed with tens of millions of stars.

“This merger must have come from an unusual place,” Vitale says. “As LIGO and Virgo continue to make new detections, we can use these discoveries to learn new things about the universe.”


NASA’s Maven observes Martian night sky pulsing in ultraviolet light

Vast areas of the Martian night sky pulse in ultraviolet light, according to images from NASA’s MAVEN spacecraft. The results are being used to illuminate complex circulation patterns in the Martian atmosphere.

The MAVEN team was surprised to find that the atmosphere pulsed exactly three times per night, and only during Mars’ spring and fall. The new data also revealed unexpected waves and spirals over the winter poles, while also confirming the Mars Express spacecraft results that this nightglow was brightest over the winter polar regions.

“MAVEN’s images offer our first global insights into atmospheric motions in Mars’ middle atmosphere, a critical region where air currents carry gases between the lowest and highest layers,” said Nick Schneider of the University of Colorado’s Laboratory for Atmospheric and Space Physics (LASP), Boulder, Colorado. The brightenings occur where vertical winds carry gases down to regions of higher density, speeding up the chemical reactions that create nitric oxide and power the ultraviolet glow. Schneider is the instrument lead for MAVEN’s Imaging Ultraviolet Spectrograph (IUVS), which made these observations, and lead author of a paper on this research appearing August 6 in the Journal of Geophysical Research: Space Physics. Ultraviolet light is invisible to the human eye but detectable by specialized instruments.

“The ultraviolet glow comes mostly from an altitude of about 70 kilometers (approximately 40 miles), with the brightest spot about a thousand kilometers (approximately 600 miles) across, and is as bright in the ultraviolet as Earth’s northern lights,” said Zac Milby, also of LASP. “Unfortunately, the composition of Mars’ atmosphere means that these bright spots emit no light at visible wavelengths that would allow them to be seen by future Mars astronauts. Too bad: the bright patches would intensify overhead every night after sunset, and drift across the sky at 300 kilometers per hour (about 180 miles per hour).”

The pulsations reveal the importance of planet-encircling waves in the Martian atmosphere. The number of waves and their speeds indicate that Mars’ middle atmosphere is influenced by the daily pattern of solar heating and by disturbances from the topography of Mars’ huge volcanic mountains. These pulsating spots are the clearest evidence that the middle-atmosphere waves match those known to dominate the layers above and below.

“MAVEN’s main discoveries of atmosphere loss and climate change show the importance of these vast circulation patterns that transport atmospheric gases around the globe and from the surface to the edge of space,” said Sonal Jain, also of LASP.

Next, the team plans to look at nightglow “sideways,” instead of down from above, using data taken by IUVS looking just above the edge of the planet. This new perspective will be used to understand the vertical winds and seasonal changes even more accurately.

The Martian nightglow was first observed by the SPICAM instrument on the European Space Agency’s Mars Express spacecraft. However, IUVS is a next-generation instrument better able to repeatedly map out the nightside glow, finding patterns and periodic behaviors. Many planets including Earth have nightglow, but MAVEN is the first mission to collect so many images of another planet’s nightglow.

The research was funded by the MAVEN mission. MAVEN’s principal investigator is based at the University of Colorado’s Laboratory for Atmospheric and Space Physics, Boulder, and NASA Goddard manages the MAVEN project. NASA is exploring our Solar System and beyond, uncovering worlds, stars, and cosmic mysteries near and far with our powerful fleet of space and ground-based missions.

Story Source:

Materials provided by NASA/Goddard Space Flight Center. Note: Content may be edited for style and length.


New fabric could help keep you cool in the summer, even without A/C

Air conditioning and other space cooling methods account for about 10% of all electricity consumption in the U.S., according to the U.S. Energy Information Administration. Now, researchers reporting in ACS Applied Materials & Interfaces have developed a material that cools the wearer without using any electricity. The fabric transfers heat, allows moisture to evaporate from the skin and repels water.

Cooling off a person’s body is much more efficient than cooling an entire room or building. Various clothing and textiles have been designed to do just that, but most have disadvantages, such as poor cooling capacity; large electricity consumption; complex, time-consuming manufacturing; and/or high cost. Yang Si, Bin Ding and colleagues wanted to develop a personal cooling fabric that could efficiently transfer heat away from the body, while also being breathable, water repellent and easy to make.

The researchers made the new material by electrospinning a polymer (polyurethane), a water-repelling version of the polymer (fluorinated polyurethane) and a thermally conductive filler (boron nitride nanosheets) into nanofibrous membranes. These membranes repelled water from the outside, but they had large enough pores to allow sweat to evaporate from the skin and air to circulate. The boron nitride nanosheets coated the polymer nanofibers, forming a network that conducted heat from an inside source to the outside air. In tests, the thermal conductivity was higher than that of many other conventional or high-tech fabrics. The membrane could be useful not only for personal cooling, but also for solar energy collection, seawater desalination and thermal management of electronic devices, the researchers say.
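
As a rough, back-of-the-envelope illustration (the numbers are assumed for the sake of the example, not taken from the study), Fourier’s law gives the conductive heat flux through a membrane of thickness t and thermal conductivity k under a temperature difference ΔT, and the corresponding areal thermal resistance is t/k:

$$ q'' = k\,\frac{\Delta T}{t}, \qquad R'' = \frac{t}{k} \approx \frac{10^{-4}\ \mathrm{m}}{0.5\ \mathrm{W\,m^{-1}\,K^{-1}}} = 2\times10^{-4}\ \mathrm{m^{2}\,K\,W^{-1}} $$

Even with a modest assumed conductivity of 0.5 W/(m·K), a membrane about 100 micrometers thick presents a thermal resistance hundreds of times smaller than the roughly 0.1 m²·K/W of still air trapped by ordinary clothing, which is the sense in which a thermally conductive fabric lets body heat escape rather than trapping it.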

Story Source:

Materials provided by American Chemical Society. Note: Content may be edited for style and length.


A new family of nanocars ready for the next nano ‘Grand Prix’

According to the British Royal Automobile Club and the French Automobile Club, the first car was built in 1770 by the Frenchman Nicolas-Joseph Cugnot. This “Fardier” (the French name for a trolley used to transport heavy loads) was propelled by a steam engine fed by a boiler. The 7-meter-long self-propelled machine reached a speed of 4 km/h and could run for about 15 minutes at a time. 250 years later, researchers at the Nara Institute of Science and Technology (NAIST), Japan, in partnership with colleagues at the University Paul Sabatier (Toulouse, France), report in Chemistry — A European Journal a new family of nanocars that integrate a dipole to speed up their motion in the nanoworld.

After the first nanocar race, organized in Toulouse, France, in spring 2017, we designed a new family of molecules to behave as cars in the nanoworld. When I established my laboratory at NAIST in April 2018, Toshio Nishino (assistant professor) and Hiroki Takeuchi (master’s student) started the synthesis. Two years later, we are reporting the results in a publication presenting the synthesis of nine dipolar nanocars. The result is striking: in every flask, about 100 mg of green or blue powder (roughly 60 × 10¹⁸ nanocars) sticks to the walls. These are the Franco-Japanese racing cars, sitting quietly in the garage and waiting for the next Grand Prix in 2021.

“To hope to win the race, nanocars have to be fast, but they also need to be controllable,” emphasizes Gwénaël Rapenne. The design of such molecules has long been thought to require a compromise between opposing requirements. Accordingly, the nanocar Rapenne and his colleagues designed is made up of 150 atoms (chemical formula C85H59N5Zn). Its planar chassis is made from porphyrin, a molecular fragment nature already uses in processes such as oxygen transport (hemoglobin) and photosynthesis (chlorophyll). Ultimately, the presence of a zinc atom could allow small molecules to be carried on the car body. “The nanocar is 2 nm long and surrounded by four wheels designed to minimize contact with the ground, and it has two legs that are able to donate or accept electrons, making the nanocar dipolar,” the researcher specifies.
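
A quick sanity check on the figure of roughly 60 × 10¹⁸ nanocars per 100 mg quoted above, using the stated formula C85H59N5Zn (an order-of-magnitude estimate only, since the powder may contain more than the bare molecule):

$$ M \approx 85(12.01) + 59(1.01) + 5(14.01) + 65.4 \approx 1.2\times10^{3}\ \mathrm{g/mol}, \qquad N \approx \frac{0.1\ \mathrm{g}}{1.2\times10^{3}\ \mathrm{g/mol}} \times 6.02\times10^{23}\ \mathrm{mol^{-1}} \approx 5\times10^{19} $$

That is about 50 × 10¹⁸ molecules, the same order of magnitude as the quoted number.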

What kind of application could be envisioned with such small vehicles?

“Honestly, today, we do not yet know what this technology will be used for. But just like liquid crystals, which were discovered in 1910 and not used until half a century later in calculator screens and now in all our LCD displays, the manipulation of molecules could well prove revolutionary,” muses Gwénaël Rapenne. One direction for the research could be to carry a load, transporting reactants or drugs from one place to another.

Story Source:

Materials provided by Nara Institute of Science and Technology. Note: Content may be edited for style and length.


Proposed seismic surveys in Arctic Refuge likely to cause lasting damage

Winter vehicle travel can cause long-lasting damage to the tundra, according to a new paper by University of Alaska Fairbanks researchers published in the journal Ecological Applications.

Scars from seismic surveys for oil and gas exploration in the Arctic National Wildlife Refuge remained for decades, according to the study. The findings counter assertions made by the Bureau of Land Management in 2018 that seismic exploration causes no “significant impacts” on the landscape. That BLM determination would allow a less-stringent environmental review process of seismic exploration in the Arctic Refuge 1002 Area.

UAF’s Martha Raynolds, the lead author of the study, said she and other scientists have documented lasting impacts of winter trails throughout years of field research. Their paper, authored by an interdisciplinary team with expertise in Arctic vegetation, snow, hydrology and permafrost, summarizes what is currently known about the effects of Arctic seismic exploration and what additional information is needed to effectively regulate winter travel to minimize impacts.

A grid pattern of seismic survey lines is used to study underground geology. These trails, as well as trails caused by camps that support workers, damage the underlying tundra, even when limited to frozen, snow-covered conditions. Some of the existing scars on the tundra date back more than three decades, when winter 2D seismic surveys were initiated. Modern 3D surveying requires a tighter network of survey lines, with larger crews and more vehicles. The proposed 1002 Area survey would result in over 39,000 miles of tracks.

“Winter tundra travel is not a technology that has changed much since the ’80s,” said Raynolds, who studies Arctic vegetation at UAF’s Institute of Arctic Biology. “The impacts are going to be as bad or worse, and they are proposing many, many more miles of trails.”

Conditions for winter tundra travel have become more difficult, due to a mean annual temperature increase of 7-9 degrees F on Alaska’s Arctic coastal plain since 1986. Those warmer conditions have contributed to changing snow cover and thawing permafrost. The impact of tracks on the vegetation, soils and permafrost eventually changes the hydrology and habitat of the tundra, which affects people and wildlife who rely on the ecosystem.

The paper argues that more data are needed before proceeding with Arctic Refuge exploration efforts. That includes better information about the impacts of 3D seismic exploration; better weather records in the region, particularly wind and snow data; and high-resolution maps of the area’s ground ice and hydrology. The study also emphasizes that the varied terrain and topography in the 1002 Area are different from other parts of the North Slope, making it more vulnerable to damage from seismic exploration.

Other contributors to the paper included UAF’s Mikhail Kanevskiy, Matthew Sturm and Donald “Skip” Walker; Anna Liljedahl, UAF and Woods Hole Research Center; Janet Jorgenson, U.S. Fish and Wildlife Service; Torre Jorgenson, Alaska Ecoscience; and Matthew Nolan, Fairbanks Fodar.

Story Source:

Materials provided by University of Alaska Fairbanks. Note: Content may be edited for style and length.


Exotic nanotubes move in less-mysterious ways

Boron nitride nanotubes are anything but boring, according to Rice University scientists who have found a way to watch how they move in liquids.

The researchers’ method to study the real-time dynamics of boron nitride nanotubes (BNNTs) allowed them to confirm, for the first time, that Brownian motion of BNNTs in solution matches predictions and that, like carbon nanotubes of comparable sizes, they remain rigid.

Those properties and others — BNNTs are nearly transparent to visible light, resist oxidation, are stable semiconductors and are excellent conductors of heat — could make them useful as building blocks for composite materials or in biomedical studies, among other applications. The study will help scientists better understand particle behavior in the likes of liquid crystals, gels and polymer networks.

Rice scientists Matteo Pasquali and Angel Martí and graduate student and lead author Ashleigh Smith McWilliams isolated single BNNTs by combining them with a fluorescent rhodamine surfactant.

This allowed the researchers to show their Brownian motion — the random way particles move in a fluid, like dust in air — is the same as for carbon nanotubes, and thus they will behave in a similar way in fluid flows. That means BNNTs can be used in liquid-phase processing for the large-scale production of films, fibers and composites.

“BNNTs are typically invisible in fluorescence microscopy,” Martí said. “However, when they are covered by fluorescent surfactants, they can be easily seen as small moving rods. BNNTs are a million times thinner than a hair. Understanding how these nanostructures move and diffuse in solution at a fundamental level is of great importance for manufacturing materials with specific and desired properties.”

The new data comes from experiments carried out at Rice and reported in the Journal of Physical Chemistry B.

Understanding how shear helps nanotubes align has already paid off in the Pasquali lab’s development of conductive carbon nanotube fibers, films and coatings, already making waves in materials and medical research.

“BNNTs are the neglected cousins of carbon nanotubes,” Pasquali said. “They were discovered just a few years later, but took much longer to take off, because carbon nanotubes had taken most of the spotlight.

“Now that BNNT synthesis has advanced and we understand their fundamental fluid behavior, the community could move much faster towards applications,” he said. “For example, we could make fibers and coatings that are thermally conductive but electrically insulating, which is very unusual as electrical insulators have poor thermal conductivity.”

Unlike carbon nanotubes that emit lower-energy near-infrared light and are easier to spot under the microscope, the Rice team had to modify the multiwalled BNNTs to make them both dispersible and viewable. Rhodamine molecules combined with long aliphatic chains served this purpose, attaching to the nanotubes to keep them separate and allowing them to be located between glass slides separated just enough to let them move freely. The rhodamine tag let the researchers track single nanotubes for up to five minutes.

“We needed to be able to visualize the nanotube for relatively long periods of time, so we could accurately model its movement,” Smith McWilliams said. “Since rhodamine tags coordinated to the BNNT surface were less likely to photobleach (or go dim) than those free in solution, the BNNT appeared as a bright fluorescent signal against a dark background, as you can see in the video. This helped me keep the nanotube in focus throughout the video and enabled our code to accurately track its movement over time.”
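
A minimal sketch of the kind of analysis such tracking supports, in Python (the trajectory here is simulated and every number is hypothetical; this is not the authors’ code): for a particle undergoing Brownian motion in two dimensions, the mean-squared displacement grows linearly with lag time, MSD(τ) ≈ 4Dτ, so a straight-line fit to tracked positions yields a diffusion coefficient that can be compared against theoretical predictions.

```python
# Sketch: estimate a translational diffusion coefficient from a tracked
# trajectory. 'positions' is a hypothetical (n_frames, 2) array of
# center-of-mass coordinates in micrometers; 'dt' is the time per frame.
import numpy as np

def mean_squared_displacement(positions, max_lag):
    """MSD(lag), averaged over all start frames, for lags 1..max_lag."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = positions[lag:] - positions[:-lag]
        msd[lag - 1] = np.mean(np.sum(disp**2, axis=1))
    return msd

# Hypothetical input: a simulated 2-D Brownian walk with D = 0.05 um^2/s.
rng = np.random.default_rng(0)
dt, D_true, n_frames = 0.05, 0.05, 2000                # s, um^2/s, frames
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(n_frames, 2))
positions = np.cumsum(steps, axis=0)

lags = np.arange(1, 51)
msd = mean_squared_displacement(positions, 50)
slope = np.polyfit(lags * dt, msd, 1)[0]               # MSD = 4*D*tau in 2-D
print(f"estimated D = {slope / 4:.3f} um^2/s (true value {D_true})")
```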

Story Source:

Materials provided by Rice University. Note: Content may be edited for style and length.


‘A litmus paper for CO2’: Scientists develop paper-based sensors for carbon dioxide

A new sensor for detecting carbon dioxide can be manufactured on a simple piece of paper, according to a new study by University of Alberta physicists.

“You can basically think of it as a litmus paper for carbon dioxide,” said Al Meldrum, professor in the Department of Physics and co-author on the study led by graduate student Hui Wang. “The work showed that you can make a sensitive carbon dioxide detector out of a simple piece of paper. One could easily imagine mass produced sensors for carbon dioxide or other gases using the same basic methods.”

The sensor, which changes colours based on the amount of carbon dioxide in the environment, has many potential applications — from industries that make use of carbon dioxide to smart buildings. And due to its paper base, the sensor is low cost to create and provides a simple template for mass production.

“In smart buildings, carbon dioxide sensors can tell you about the occupancy and where people tend to congregate and spend their time, by detecting the carbon dioxide exhaled when we breathe,” explained Meldrum. “This can help to aid in building usage and design. Carbon dioxide sensors currently can be very expensive if they are sensitive enough for many applications, so a cheap and mass-produced alternative could be beneficial for these applications.”

While this research demonstrates the sensing ability and performance of the technology, a mass-producible sensor would require further design, optimization, and packaging.

Story Source:

Materials provided by University of Alberta. Note: Content may be edited for style and length.


Measuring blood damage

According to the National Kidney Foundation, more than 37 million people in the United States are living with kidney disease.

The kidneys play an important role in the body, from removing waste products to filtering the blood. For people with kidney disease, dialysis can help the body perform these essential functions when the kidneys aren’t working at full capacity.

However, red blood cells sometimes rupture when blood is sent through faulty equipment that is supposed to clean the blood, such as a dialysis machine. This is called hemolysis. Hemolysis also can occur during blood work when blood is drawn too quickly through a needle, leading to defective laboratory samples.

There is no reliable indicator that red blood cells are being damaged in a clinical setting until an individual begins showing symptoms, such as fever, weakness, dizziness or confusion.

University of Delaware mechanical engineer Tyler Van Buren and collaborating colleagues at Princeton University have developed a method to monitor blood damage in real-time.

“Our goal was to find a method that could detect red blood cell damage without the need for lab sample testing,” said Van Buren, an assistant professor of mechanical engineering with expertise in fluid dynamics.

The researchers recently reported their technique in Scientific Reports, a Nature publication.

Detecting blood cell damage

In the body, red blood cells float in plasma alongside white blood cells and platelets. The plasma is naturally conductive and passes an electric charge efficiently. Red blood cells are chock-full of hemoglobin, an oxygen-transporting protein that is also conductive.

This hemoglobin is typically insulated from the body by the cell lining. But as red blood cells rupture, hemoglobin is released into the bloodstream, causing the blood to become more conductive.

“Think of the blood like a river and red blood cells like water balloons in that river,” said Van Buren, who joined UD in 2019. “If you have electrons (negatively charged particles) waiting to cross the river, it is more difficult when there are a lot of water balloons present. Because the rubber is insulating, the blood will be less conductive. As the water balloons (or blood cells) break, there are fewer barriers and the blood becomes more conductive, making it easier for electrons to move from one side to the other.”

In dialysis, a patient’s blood is removed from the body, cleaned and then recirculated into the body. The researchers developed a simple experiment to see if they could measure the blood’s electrical resistance outside of the body.

To test their technique, the researchers circulated healthy blood through the laboratory system and gradually introduced mechanically damaged blood to see if it would change the conductive nature of the fluid in the system.

It did. The researchers saw a direct correlation between the conductivity of the fluid in the system and the amount of damaged blood included in the sample.

While this issue of damaged blood is very rare, the research team’s method does introduce one potential way to indirectly monitor blood damage in the body during dialysis. The researchers theorize that if clinicians were able to monitor the resistance of a patient’s blood going into a dialysis machine and coming out, and they saw a major change in resistance — or conductivity — there is good reason to believe the blood is being damaged.
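
A simplified sketch of that monitoring idea in Python (the readings, units and threshold below are hypothetical, and, as the researchers stress, any clinical use would need far more vetting): compare paired inlet and outlet conductivity readings and raise a flag only when the outlet stays elevated for several consecutive measurements.

```python
# Sketch of inline monitoring: flag a sustained rise in outlet conductivity
# relative to the inlet, which (per the study's correlation) would suggest
# red blood cells are being damaged. All values are hypothetical.

def hemolysis_alert(inlet_uS_cm, outlet_uS_cm, rel_threshold=0.05, window=5):
    """True if outlet conductivity exceeds inlet by more than rel_threshold
    (as a fraction) for 'window' consecutive paired readings."""
    consecutive = 0
    for inlet, outlet in zip(inlet_uS_cm, outlet_uS_cm):
        rise = (outlet - inlet) / inlet
        consecutive = consecutive + 1 if rise > rel_threshold else 0
        if consecutive >= window:
            return True
    return False

# Hypothetical paired readings (microsiemens per cm): the outlet drifts upward.
inlet  = [5200, 5195, 5210, 5205, 5198, 5202, 5207, 5200]
outlet = [5210, 5230, 5400, 5490, 5520, 5560, 5600, 5630]
print(hemolysis_alert(inlet, outlet))   # True: sustained relative rise
```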

“We are not doctors, we’re mechanical engineers,” said Van Buren. “This technique would need a lot more vetting before being applied in a clinical setting.”

For example, Van Buren said the method wouldn’t necessarily work across patient populations because an individual’s blood conductivity is just that, individual.

In the future, Van Buren said it would be interesting to evaluate whether conductivity also could be used in place of lab sampling for applications outside of dialysis. For example, this might be useful in research aimed at understanding how blood cells may be damaged, both inside and outside of the body, and possible methods for prevention.

He also is curious whether this method could be used to evaluate and identify compromised blood samples on-site, saving time and money for hospitals or diagnostic laboratories, while eliminating the need for patients to make multiple trips to have blood drawn if there is a problem.

Co-authors on the paper include Alexander J. Smits, the Eugene Higgins Professor of Mechanical and Aerospace Engineering at Princeton University and the project principal investigator, and Gilad Arwatz, a former graduate student at Princeton University, now president and CEO of Instrumems Inc.


‘Tantalizing’ clues about why a mysterious material switches from conductor to insulator

Tantalum disulfide is a mysterious material. According to textbook theory, it should be a conducting metal, but in the real world it acts like an insulator. Using a scanning tunneling microscope, researchers from the RIKEN Center for Emergent Matter Science have taken a high-resolution look at the structure of the material, revealing why it shows this counterintuitive behavior.

It has long been known that crystalline materials should be good conductors when they have an odd number of electrons in each repeating cell of the structure, but may be poor conductors when the number is even. Sometimes, however, this rule fails, one such case being “Mottness,” a property named for the work of Sir Nevill Mott. According to that theory, strong repulsion between electrons in the structure causes them to become “localized” — paralyzed, in other words — and unable to move around freely to create an electric current. What complicates the situation is that electrons in different layers of a 3-D structure can also interact, pairing up to create a bilayer structure with an even number of electrons. It has previously been suggested that this “pairing” of electrons would restore the textbook understanding of the insulator, making it unnecessary to invoke “Mottness” as an explanation.
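
The standard minimal description of this “Mottness” is the Hubbard model, in which electrons hop between neighboring sites with amplitude t but pay an energy cost U whenever two of them occupy the same site:

$$ H = -t \sum_{\langle i,j\rangle,\sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow} $$

At one electron per site, a sufficiently large ratio U/t freezes the electrons in place, turning a would-be metal into a Mott insulator even though band theory, counting an odd number of electrons per cell, predicts a conductor. (This textbook form is given here only to make the idea concrete; the paper’s analysis of tantalum disulfide is more involved.)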

For the current study, published in Nature Communications, the research group decided to look at tantalum disulfide, a material with 13 electrons in each repeating structure, which should therefore be a conductor. However, it is not, and there has been controversy over whether this property is caused by its “Mottness” or by a pairing structure.

To perform the research, the researchers grew crystals of tantalum disulfide and cleaved them in a vacuum to expose ultra-clean surfaces, which they then examined at a temperature close to absolute zero using scanning tunneling microscopy, a technique in which a tiny, extremely sensitive metal tip exploits the quantum tunneling effect to sense where electrons are in a material and how readily they conduct. Their results showed that there was indeed a stacking of layers which effectively arranged them into pairs. Sometimes the crystals cleaved between the pairs of layers, and sometimes through a pair, breaking it. The researchers performed spectroscopy on both the paired and unpaired layers and found that even the unpaired layers are insulating, leaving Mottness as the only explanation.

According to Christopher Butler, the first author of the study, “The exact nature of the insulating state and of the phase transitions in tantalum disulfide have been long-standing mysteries, and it was very exciting to find that Mottness is a key player, aside from the pairing of the layers. This is because theorists suspect that a Mott state could set the stage for an interesting phase of matter known as a quantum spin liquid.”

Tetsuo Hanaguri, who led the research team, said, “The question of what makes this material switch between insulating and conducting phases has long been a puzzle for physicists, and I am very satisfied that we have been able to put a new piece into the puzzle. Future work may help us find new, interesting and useful phenomena emerging from Mottness, such as high-temperature superconductivity.”

Story Source:

Materials provided by RIKEN. Note: Content may be edited for style and length.
