New brain cell-like nanodevices work together to identify mutations in viruses

In the September issue of the journal Nature, scientists from Texas A&M University, Hewlett Packard Labs and Stanford University have described a new nanodevice that acts almost identically to a brain cell. Furthermore, they have shown that these synthetic brain cells can be joined together to form intricate networks that can then solve problems in a brain-like manner.

“This is the first study where we have been able to emulate a neuron with just a single nanoscale device, which would otherwise need hundreds of transistors,” said Dr. R. Stanley Williams, senior author on the study and professor in the Department of Electrical and Computer Engineering. “We have also been able to successfully use networks of our artificial neurons to solve toy versions of a real-world problem that is computationally intense even for the most sophisticated digital technologies.”

In particular, the researchers have demonstrated proof of concept that their brain-inspired system can identify possible mutations in a virus, which is highly relevant for ensuring the efficacy of vaccines and medications for strains exhibiting genetic diversity.

Over the past decades, digital technologies have become smaller and faster largely because of advances in transistor technology. However, these critical circuit components are fast approaching the limit of how small they can be built, prompting a global effort to find a new type of technology that can supplement, if not replace, transistors.

In addition to this “scaling-down” problem, transistor-based digital technologies have other well-known challenges. For example, they struggle at finding optimal solutions when presented with large sets of data.

“Let’s take a familiar example of finding the shortest route from your office to your home. If you have to make a single stop, it’s a fairly easy problem to solve. But if for some reason you need to make 15 stops in between, you have 43 billion routes to choose from,” said Dr. Suhas Kumar, lead author on the study and researcher at Hewlett Packard Labs. “This is now an optimization problem, and current computers are rather inept at solving it.”
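The route-counting example is a textbook case of combinatorial explosion: with n intermediate stops, the number of visiting orders grows factorially. A quick sketch of that growth (the convention below, counting a route and its reverse as one, is just one common way to count; the stop counts are for illustration only):

```python
import math

def route_count(stops, direction_matters=False):
    """Number of distinct orderings of `stops` intermediate stops.

    With direction_matters=False, a route and its reverse are counted
    once, a common convention for round trips.
    """
    total = math.factorial(stops)
    return total if direction_matters else total // 2

# the count explodes factorially: a handful of stops is trivial,
# fifteen is already in the hundreds of billions of orderings
for n in (3, 10, 15):
    print(n, route_count(n))
```

The exact total depends on the counting convention, but the factorial growth is what turns a simple errand into an optimization problem that overwhelms brute-force search.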

Kumar added that another arduous task for digital machines is pattern recognition, such as identifying a face as the same regardless of viewpoint or recognizing a familiar voice buried within a din of sounds.

But tasks that can send digital machines into a computational tizzy are ones at which the brain excels. In fact, brains are not just quick at recognition and optimization problems, but they also consume far less energy than digital systems. Hence, by mimicking how the brain solves these types of tasks, Williams said brain-inspired or neuromorphic systems could potentially overcome some of the computational hurdles faced by current digital technologies.

To recreate the fundamental building block of the brain, the neuron, the researchers assembled a synthetic nanoscale device consisting of layers of different inorganic materials, each with a unique function. The real magic, they said, happens in a thin layer made of the compound niobium dioxide.

When a small voltage is applied to this region, its temperature begins to increase. Once the temperature reaches a critical value, niobium dioxide undergoes a quick change in personality, turning from an insulator into a conductor. As it begins to conduct electric current, its temperature drops, and the material switches back to being an insulator.

These back-and-forth transitions enable the synthetic devices to generate a pulse of electrical current that closely resembles the profile of electrical spikes, or action potentials, produced by biological neurons. Further, by changing the voltage across their synthetic neurons, the researchers reproduced a rich range of neuronal behaviors observed in the brain, such as sustained, burst and chaotic firing of electrical spikes.
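The heat-driven insulator-conductor cycle behaves like a relaxation oscillator. A deliberately crude toy model (every threshold and rate below is invented for illustration, none are parameters of the actual NbO2 device) shows how hysteretic switching alone yields a spike train whose firing rate depends on the applied drive:

```python
def spike_train(drive, steps=10000, dt=0.01, t_hot=1.0, t_cold=0.2,
                heat=1.0, cool=2.0):
    """Count spikes from a toy hysteretic relaxation oscillator.

    All parameters are invented for illustration; none come from the
    study's device.
    """
    temp = 0.0
    conducting = False
    spikes = 0
    for _ in range(steps):
        if conducting:
            temp -= cool * dt            # conducting phase sheds heat
            if temp <= t_cold:
                conducting = False       # cooled past threshold: insulator again
        else:
            temp += heat * drive * dt    # insulating phase: Joule heating builds
            if temp >= t_hot:
                conducting = True        # hot enough: switches to conductor
                spikes += 1              # each switch fires a current spike
    return spikes
```

Turning up the drive shortens the heating phase and raises the firing rate, loosely mirroring how a biological neuron fires faster under stronger input.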

“Capturing the dynamical behavior of neurons is a key goal for brain-inspired computers,” said Kumar. “Altogether, we were able to recreate around 15 types of neuronal firing profiles, all using a single electrical component and at much lower energies compared to transistor-based circuits.”

To evaluate if their synthetic neurons can solve real-world problems, the researchers first wired 24 such nanoscale devices together in a network inspired by the connections between the brain’s cortex and thalamus, a well-known neural pathway involved in pattern recognition. Next, they used this system to solve a toy version of the viral quasispecies reconstruction problem, where mutant variations of a virus are identified without a reference genome.

As data inputs, the researchers fed the network short gene fragments. Then, by programming the strength of the connections between the artificial neurons, they established basic rules for joining these genetic fragments. The network's jigsaw-puzzle-like task was to list mutations in the virus's genome based on these short genetic segments.
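Encoding pairwise compatibility rules in connection weights and letting the network relax is in the spirit of Hopfield-style energy minimization. A generic four-unit sketch (nothing like the actual 24-device circuit, whose wiring is not given here) shows the core idea: weights store one "valid" pattern, and asynchronous updates pull a corrupted state onto it.

```python
import numpy as np

# One invented "valid assembly" pattern, stored via the Hebbian
# outer-product rule; a stand-in for the fragment-joining rules.
pattern = np.array([1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

def settle(weights, state, sweeps=5):
    """Relax the network by asynchronous, unit-by-unit updates."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            # flip unit i to whichever sign lowers the network energy
            state[i] = 1 if weights[i] @ state >= 0 else -1
    return state

start = np.array([1, 1, 1, -1])   # stored pattern with one unit corrupted
final = settle(W, start)
```

With symmetric, zero-diagonal weights, each asynchronous update can only lower the network energy, so the state settles into a stored minimum rather than oscillating.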

The researchers found that within a few microseconds, their network of artificial neurons settled into a state indicative of the genome of a mutant strain.

Williams and Kumar noted this result is proof of principle that their neuromorphic systems can quickly perform tasks in an energy-efficient way.

The researchers said the next steps in their research will be to expand the repertoire of the problems that their brain-like networks can solve by incorporating other firing patterns and some hallmark properties of the human brain like learning and memory. They also plan to address hardware challenges for implementing their technology on a commercial scale.

“Calculating the national debt or solving some large-scale simulation is not the type of task the human brain is good at and that’s why we have digital computers. Alternatively, we can leverage our knowledge of neuronal connections for solving problems that the brain is exceptionally good at,” said Williams. “We have demonstrated that depending on the type of problem, there are different and more efficient ways of doing computations other than the conventional methods using digital computers with transistors.”



Chemists make cellular forces visible at the molecular scale

Scientists have developed a new technique using tools made of luminescent DNA, lit up like fireflies, to visualize the mechanical forces of cells at the molecular level. Nature Methods published the work, led by chemists at Emory University, who demonstrated their technique on human blood platelets in laboratory experiments.

“Normally, an optical microscope cannot produce images that resolve objects smaller than the length of a light wave, which is about 500 nanometers,” says Khalid Salaita, Emory professor of chemistry and senior author of the study. “We found a way to leverage recent advances in optical imaging along with our molecular DNA sensors to capture forces at 25 nanometers. That resolution is akin to being on the moon and seeing the ripples caused by raindrops hitting the surface of a lake on the Earth.”

Almost every biological process involves a mechanical component, from cell division to blood clotting to mounting an immune response. “Understanding how cells apply forces and sense forces may help in the development of new therapies for many different disorders,” says Salaita, whose lab is a leader in devising ways to image and map bio-mechanical forces.

The first authors of the paper, Joshua Brockman and Hanquan Su, did the work as Emory graduate students in the Salaita lab. Both recently received their PhDs.

The researchers turned strands of synthetic DNA into molecular tension probes that contain hidden pockets. The probes are attached to receptors on a cell's surface. Free-floating pieces of DNA tagged with fluorescent markers serve as imagers. As the unanchored pieces of DNA whizz about, they create streaks of light in microscopy videos.

When the cell applies force at a particular receptor site, the attached probes stretch out, causing their hidden pockets to open and release tendrils of DNA that are stored inside. The free-floating pieces of DNA are engineered to dock onto these DNA tendrils. When the fluorescent DNA pieces dock, they are briefly immobilized, showing up as still points of light in the microscopy videos.

Hours of microscopy video are taken of the process, then sped up to show how the points of light change over time, providing the molecular-level view of the mechanical forces of the cell.

The researchers use a firefly analogy to describe the process.

“Imagine you’re in a field on a moonless night and there is a tree that you can’t see because it’s pitch black out,” says Brockman, who graduated from the Wallace H. Coulter Department of Biomedical Engineering, a joint program of Georgia Tech and Emory, and is now a post-doctoral fellow at Harvard. “For some reason, fireflies really like that tree. As they land on all the branches and along the trunk of the tree, you could slowly build up an image of the outline of the tree. And if you were really patient, you could even detect the branches of the tree waving in the wind by recording how the fireflies change their landing spots over time.”

“It’s extremely challenging to image the forces of a living cell at a high resolution,” says Su, who graduated from Emory’s Department of Chemistry and is now a post-doctoral fellow in the Salaita lab. “A big advantage of our technique is that it doesn’t interfere with the normal behavior or health of a cell.”

Another advantage, he adds, is that DNA bases of A, G, T and C, which naturally bind to one another in particular ways, can be engineered within the probe-and-imaging system to control specificity and map multiple forces at one time within a cell.
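That specificity comes straight from Watson-Crick pairing (A binds T, G binds C): an imager strand sticks only where it finds its reverse complement, so several orthogonal probe/imager sequence pairs can report different forces in the same experiment. A minimal sketch of the pairing rule (the example sequences are arbitrary, not ones used in the study):

```python
# Watson-Crick base pairing: A-T and G-C. Two antiparallel strands bind
# when one is the reverse complement of the other.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Sequence of the strand that would hybridize with `seq`."""
    return "".join(PAIR[base] for base in reversed(seq))

def binds(probe_tendril, imager):
    """True when the imager matches the exposed probe tendril exactly."""
    return imager == reverse_complement(probe_tendril)
```

Because a mismatched imager simply fails the complementarity test, independently designed probe/imager pairs do not cross-talk, which is what lets the system map multiple forces at once.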

“Ultimately, we may be able to link various mechanical activities of a cell to specific proteins or to other parts of cellular machinery,” Brockman says. “That may allow us to determine how to alter the cell to change and control its forces.”

By using the technique to image and map the mechanical forces of platelets, the cells that control blood clotting at the site of a wound, the researchers discovered that platelets have a concentrated core of mechanical tension and a thin rim that continuously contracts. “We couldn’t see this pattern before but now we have a crisp image of it,” Salaita says. “How do these mechanical forces control thrombosis and coagulation? We’d like to study them more to see if they could serve as a way to predict a clotting disorder.”

Just as increasingly high-powered telescopes allow us to discover planets, stars and the forces of the universe, higher-powered microscopy allows us to make discoveries about our own biology.

“I hope this new technique leads to better ways to visualize not just the activity of single cells in a laboratory dish, but to learn about cell-to-cell interactions in actual physiological conditions,” Su says. “It’s like opening a new door onto a largely unexplored realm — the forces inside of us.”

Co-authors of the study include researchers from Children’s Healthcare of Atlanta, Ludwig Maximilian University in Munich, the Max Planck Institute and the University of Alabama at Birmingham. The work was funded by grants from the National Institutes of Health, the National Science Foundation, the Naito Foundation and the Uehara Memorial Foundation.



Nanoparticle SARS-CoV-2 model may speed drug discovery for COVID-19

A team of scientists from the National Center for Advancing Translational Sciences (NCATS) and Naval Research Laboratory (NRL) in Washington, D.C., has developed a new tool that mimics how SARS-CoV-2 — the virus that causes COVID-19 — infects a cell, information that could potentially speed the search for treatments against the disease.

The tool is a fluorescent nanoparticle probe that uses the spike protein on the surface of SARS-CoV-2 to bind to cells and trigger the process that pulls the virus into the cell. The probe could be used in tests to rapidly gauge the ability of biologics, drugs and compounds to block the actual virus from infecting human cells. The researchers’ findings appeared online Aug. 26 in ACS Nano.

“Our goal is to create a screening system to find compounds that block SARS-CoV-2 from binding to cells and infecting them,” explained Kirill Gorshkov, Ph.D., a translational scientist at NCATS and a co-corresponding author of the study.

However, using the actual virus in such screening studies would be difficult and require special facilities. Instead, Gorshkov and Eunkeu Oh, Ph.D., a research biophysicist at NRL and co-corresponding author of the study, and their colleagues wanted to use nanoparticles to mimic the viral function of binding to and invading the host human cell.

The NCATS and NRL researchers collaborated to design and test the probe, combining their complementary skill sets to deliver results far sooner than separate research efforts would have. The NRL team, led by Mason Wolak, Ph.D., an expert in optical nanomaterials, put the initial collaboration together.

“We at NRL are experts in nanoparticles, and the NCATS researchers are experts in drug screening using cellular systems,” explained Oh. “So, it was the perfect match.”

To create the probe, NRL scientists built a fluorescent nanoparticle called a quantum dot, fashioned from cadmium and selenium. At around 10 nanometers in size, these spherical nanoparticles are 3,000 times smaller than the width of a human hair.

The NCATS-NRL research team then studded the quantum dots’ surfaces with a section of the SARS-CoV-2 spike protein that binds to the angiotensin-converting enzyme 2 (ACE2) receptor on human cells. The union of the spike protein with ACE2 is the first step in the pathway to viral infection.

The glow from the quantum dots allows scientists to track the dots’ behavior under a microscope. “Because they’re such bright fluorescent objects, the quantum dots give us a powerful system to track viral attachment and effects on the cell in real time,” explained Gorshkov.

The investigators tracked how the quantum dot probes interacted with human cells that have ACE2 on their surfaces. They watched the nanoparticle probes attach to ACE2, which combined with the probes and pulled them into the cells. The quantum dot probes did the same in a lung cell line commonly used in coronavirus assays. Safety data showed that the probes were not toxic to the test cells at the concentrations and exposure times used in the study.

The quantum dots followed the SARS-CoV-2 pathway into cells, but the research team found the probes also mimicked the virus in the presence of antibodies. Antibodies are proteins made by the immune system that can neutralize viruses such as SARS-CoV-2. The antibodies proved to be potent inhibitors of the quantum dot probes as well, preventing them from binding to ACE2 and entering human cells.

That antibody response means the quantum dot probes could help researchers rapidly test the ability of potential therapeutic agents to block the virus from entering and infecting cells. Assays using the probes also could determine the concentrations at which potential treatments may safely and effectively block infection.
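Concentration-response screening of this kind is typically summarized by fitting a Hill curve to the measured blocking at each dose, with the IC50 (the concentration producing half-maximal inhibition) as the headline number. A hedged sketch, using an invented IC50 and Hill slope rather than any value from the study:

```python
# Hill-equation sketch of an inhibition curve. The 10-unit IC50 and
# Hill slope of 1 are invented for illustration only.
def fraction_blocked(conc, ic50=10.0, hill=1.0):
    """Fraction of probe binding blocked at a given inhibitor concentration."""
    return conc**hill / (conc**hill + ic50**hill)

# a mock screening readout across a dilution series
curve = {c: round(fraction_blocked(c), 3) for c in (1, 10, 100, 1000)}
```

In a real assay the curve would be fit to plate-reader fluorescence data, and the fitted IC50 would rank candidate blockers of spike-ACE2 binding.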

“Using the quantum dots, we could create tests to use in drug screening and drug repurposing, using libraries of compounds that have activity but that also are approved by the U.S. Food and Drug Administration,” Gorshkov said. “Such assays could rapidly identify promising, safe treatments for COVID-19.”

ACE2 may not be the only receptor SARS-CoV-2 targets, and the quantum dot probe’s flexible design will allow researchers to swap in spikes that bind to other receptors. With the probe, researchers also could test how mutations in the spike change the way the virus behaves — and how well treatments work — by adding the mutated spikes to the quantum dots.

Beyond SARS-CoV-2, researchers could revise the nanoparticle probe to mimic other viruses and reveal their pathways to infection. The quantum dot probes also could be useful when testing potential therapies for other diseases, Gorshkov said. The quantum dots also might deliver drugs directly to cells, narrowing treatment to specific cell types, organs or cancers.



Astronomers discover an Earth-sized ‘pi planet’ with a 3.14-day orbit

In a delightful alignment of astronomy and mathematics, scientists at MIT and elsewhere have discovered a “pi Earth” — an Earth-sized planet that zips around its star every 3.14 days, in an orbit reminiscent of the universal mathematical constant.

The researchers discovered signals of the planet in data taken in 2017 by the NASA Kepler Space Telescope’s K2 mission. By zeroing in on the system earlier this year with SPECULOOS, a network of ground-based telescopes, the team confirmed that the signals were of a planet orbiting its star. And indeed, the planet appears to still be circling its star today, with a pi-like period, every 3.14 days.

“The planet moves like clockwork,” says Prajwal Niraula, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), who is the lead author of a paper published today in the Astronomical Journal.

“Everyone needs a bit of fun these days,” says co-author Julien de Wit of both the paper’s title and the discovery of the pi planet itself.

Planet extraction

The new planet is labeled K2-315b; it’s the 315th planetary system discovered within K2 data — just one system shy of an even more serendipitous place on the list.

The researchers estimate that K2-315b has a radius 0.95 times that of Earth, making it just about Earth-sized. It orbits a cool, low-mass star that is about one-fifth the size of the sun. The planet circles its star every 3.14 days at a blistering 81 kilometers per second, or about 181,000 miles per hour.

While its mass is yet to be determined, scientists suspect that K2-315b is terrestrial, like the Earth. But the pi planet is likely not habitable, as its tight orbit brings the planet close enough to its star to heat its surface up to 450 kelvins, or around 350 degrees Fahrenheit — perfect, as it turns out, for baking actual pie.
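The quoted figures convert consistently: 81 km/s works out to about 181,000 mph, and 450 kelvins to about 350 degrees Fahrenheit. A quick arithmetic check:

```python
# sanity-check the article's unit conversions
MILE_KM = 1.609344  # kilometers per statute mile

speed_mph = 81 * 3600 / MILE_KM          # 81 km/s expressed in miles per hour
surface_f = (450 - 273.15) * 9 / 5 + 32  # 450 K expressed in degrees Fahrenheit
```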

“This would be too hot to be habitable in the common understanding of the phrase,” says Niraula, who adds that the excitement around this particular planet, aside from its associations with the mathematical constant pi, is that it may prove a promising candidate for studying the characteristics of its atmosphere.

“We now know we can mine and extract planets from archival data, and hopefully there will be no planets left behind, especially these really important ones that have a high impact,” says de Wit, who is an assistant professor in EAPS, and a member of MIT’s Kavli Institute for Astrophysics and Space Research.

Niraula and de Wit’s MIT co-authors include Benjamin Rackham and Artem Burdanov, along with a team of international collaborators.

Dips in the data

The researchers are members of SPECULOOS (the Search for habitable Planets EClipsing ULtra-cOOl Stars), a network of four 1-meter telescopes in Chile’s Atacama Desert that scan the sky of the southern hemisphere. Most recently, the network added a fifth telescope named Artemis, the first to be located in the northern hemisphere, a project spearheaded by researchers at MIT.

The SPECULOOS telescopes are designed to search for Earth-like planets around nearby, ultracool dwarfs — small, dim stars that offer astronomers a better chance of spotting an orbiting planet and characterizing its atmosphere, as these stars lack the glare of much larger, brighter stars.

“These ultracool dwarfs are scattered all across the sky,” Burdanov says. “Targeted ground-based surveys like SPECULOOS are helpful because we can look at these ultracool dwarfs one by one.”

In particular, astronomers look at individual stars for signs of transits, or periodic dips in a star’s light, that signal a possible planet crossing in front of the star, and briefly blocking its light.
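The basic idea behind spotting such a signal is phase-folding: fold the light curve at a trial period and the dips stack at the same phase, pulling the in-transit mean below the baseline. A toy version with an invented, noise-free light curve (the depth, duration, and cadence are made up; only the 3.14-day period echoes the article):

```python
PERIOD = 3.14   # days; the trial (and, in this toy, true) period
DEPTH = 0.01    # fractional transit depth, invented
DUR = 0.05      # transit duration in days, invented

# a noise-free toy light curve: 100 days at roughly 30-minute cadence,
# with the flux dipping by DEPTH whenever the orbital phase is near zero
times = [i * 0.02 for i in range(5000)]
flux = [1.0 - DEPTH if (t % PERIOD) < DUR else 1.0 for t in times]

# phase-fold at the trial period: every dip lands at the same phase,
# so the in-transit mean sits DEPTH below the out-of-transit baseline
phases = [t % PERIOD for t in times]
in_transit = [f for p, f in zip(phases, flux) if p < DUR]
out_transit = [f for p, f in zip(phases, flux) if p >= DUR]
depth = sum(out_transit) / len(out_transit) - sum(in_transit) / len(in_transit)
```

With real, noisy data the same folding is repeated over many trial periods, and the period that maximizes the recovered depth flags the candidate planet.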

Earlier this year, Niraula came upon a cool dwarf, slightly warmer than the commonly accepted threshold for an ultracool dwarf, in data collected by the K2 campaign — the Kepler Space Telescope’s second observing mission, which monitored slivers of the sky as the spacecraft orbited around the sun.

Over several months in 2017, the Kepler telescope observed a part of the sky that included the cool dwarf, labeled in the K2 data as EPIC 249631677. Niraula combed through this period and found around 20 dips in the star’s light that seemed to repeat every 3.14 days.

The team analyzed the signals, testing different potential astrophysical scenarios for their origin, and confirmed that they likely came from a transiting planet rather than some other phenomenon, such as a binary system of two spiraling stars.

The researchers then planned to get a closer look at the star and its orbiting planet with SPECULOOS. But first, they had to identify a window of time when they would be sure to catch a transit.

“Nailing down the best night to follow up from the ground is a little bit tricky,” says Rackham, who developed a forecasting algorithm to predict when a transit might next occur. “Even when you see this 3.14 day signal in the K2 data, there’s an uncertainty to that, which adds up with every orbit.”
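The uncertainty Rackham describes compounds linearly with orbit number: a small error in the period multiplies by every orbit elapsed since the last measured transit. A sketch of that propagation (the epoch and uncertainties below are invented, not the K2-315b ephemeris):

```python
import math

def predict_transit(t0, period, n, sigma_t0, sigma_p):
    """Mid-time of the n-th transit after epoch t0, in days, plus its
    1-sigma timing uncertainty, which grows linearly with orbit number n.
    """
    t = t0 + n * period
    sigma = math.sqrt(sigma_t0**2 + (n * sigma_p)**2)
    return t, sigma

# invented ephemeris: a 3.14-day period known to +/- 30 seconds,
# propagated 350 orbits (about three years) forward
t, sigma = predict_transit(0.0, 3.14, 350, sigma_t0=0.001, sigma_p=30 / 86400)
```

Even a 30-second period uncertainty balloons to roughly three hours after three years, which is why a fresh forecast is needed before committing telescope time to a follow-up night.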

With Rackham’s forecasting algorithm, the group narrowed in on several nights in February 2020 during which they were likely to see the planet crossing in front of its star. They then pointed SPECULOOS’ telescopes in the direction of the star and were able to see three clear transits: two with the network’s Southern Hemisphere telescopes, and the third from Artemis, in the Northern Hemisphere.

The researchers say the new pi planet may be a promising candidate to follow up with the James Webb Space Telescope (JWST) to see details of the planet’s atmosphere. For now, the team is looking through other datasets, such as those from NASA’s TESS mission, and is also directly observing the skies with Artemis and the rest of the SPECULOOS network for signs of Earth-like planets.

“There will be more interesting planets in the future, just in time for JWST, a telescope designed to probe the atmosphere of these alien worlds,” says Niraula. “With better algorithms, hopefully one day, we can look for smaller planets, even as small as Mars.”

This research was supported in part by the Heising-Simons Foundation, and the European Research Council.



Solar storm forecasts for Earth improved with help from the public

Solar storm analysis carried out by an army of citizen scientists has helped researchers devise a new and more accurate way of forecasting when Earth will be hit by harmful space weather. Scientists at the University of Reading added analysis carried out by members of the public to computer models designed to predict when coronal mass ejections (CMEs) — huge solar eruptions that are harmful to satellites and astronauts — will arrive at Earth.

The team found forecasts were 20% more accurate, and uncertainty was reduced by 15%, when incorporating information about the size and shape of the CMEs in the volunteer analysis. The data was captured by thousands of members of the public during the latest activity in the Solar Stormwatch citizen science project, which was devised by Reading researchers and has been running since 2010.

The findings support the inclusion of wide-field CME imaging cameras on board space weather monitoring missions currently being planned by agencies like NASA and ESA.

Dr Luke Barnard, space weather researcher at the University of Reading’s Department of Meteorology, who led the study, said: “CMEs are sausage-shaped blobs made up of billions of tonnes of magnetised plasma that erupt from the Sun’s atmosphere at a million miles an hour. They are capable of damaging satellites, overloading power grids and exposing astronauts to harmful radiation.

“Predicting when they are on a collision course with Earth is therefore extremely important, but is made difficult by the fact the speed and direction of CMEs vary wildly and are affected by solar wind, and they constantly change shape as they travel through space.

“Solar storm forecasts are currently based on observations of CMEs as soon as they leave the Sun’s surface, meaning they come with a large degree of uncertainty. The volunteer data offered a second stage of observations at a point when the CME was more established, which gave a better idea of its shape and trajectory.

“The value of additional CME observations demonstrates how useful it would be to include cameras on board spacecraft in future space weather monitoring missions. More accurate predictions could help prevent catastrophic damage to our infrastructure and could even save lives.”

In the study, published in AGU Advances, the scientists used a brand-new solar wind model, developed by Reading co-author Professor Mathew Owens, to create CME forecasts for the first time.

The simplified model is able to run up to 200 simulations — compared to around 20 currently used by more complex models — to provide improved estimates of the solar wind speed and its impact on the movement of CMEs, the most harmful of which can reach Earth in 15-18 hours.
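The ensemble idea can be sketched in a few lines: perturb the uncertain inputs, run the propagation model once per member, and read off the mean arrival time and its spread. The "model" below is a trivial constant-speed stand-in and every number is invented, not the Reading model:

```python
import random
import statistics

random.seed(1)

AU_KM = 1.5e8  # Sun-Earth distance in km (rounded)

def arrival_hours(speed_kms):
    """Arrival time for a CME cruising at constant speed: a drastic
    simplification of any real solar wind model."""
    return AU_KM / speed_kms / 3600

# perturb the launch speed (mean and spread invented) and run the toy
# model once per ensemble member, 200 members as in the article
speeds = [random.gauss(1500, 200) for _ in range(200)]  # km/s
times = [arrival_hours(v) for v in speeds]
mean = statistics.mean(times)
spread = statistics.stdev(times)
```

Extra observations, such as the volunteers' CME outlines, act to narrow the input distribution, which shrinks the spread of the ensemble and hence the forecast uncertainty.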

Adding the public CME observations to the model’s predictions helped provide a clearer picture of the likely path the CME would take through space, reducing the uncertainty in the forecast. The new method could also be applied to other solar wind models.

The Solar Stormwatch project was led by Reading co-author Professor Chris Scott. It asked volunteers to trace the outline of thousands of past CMEs captured by Heliospheric Imagers — specialist, wide-angle cameras — on board two NASA STEREO spacecraft, which orbit the Sun and monitor the space between it and Earth.

The scientists retrospectively applied their new forecasting method to the same CMEs the volunteers had analysed to test how much more accurate their forecasts were with the additional observations.

Using the new method for future solar storm forecasts would require swift real-time analysis of the images captured by the spacecraft camera, which would provide warning of a CME being on course for Earth several hours or even days in advance of its arrival.

Story Source:

Materials provided by the University of Reading.



Using lasers to cool a polyatomic molecule

After firing the lasers and bombarding the molecules with light, the scientists gathered around the camera to check the results. By seeing how far these cold molecules expanded, they would know almost instantly whether they were on the right track to charting new paths in quantum science by being the first to cool (that is, slow down) a particularly complex, six-atom molecule using nothing but light.

“When we started out on the project we were optimistic but were not sure that we would see something that would show a very dramatic effect,” said Debayan Mitra, a postdoctoral researcher in Harvard’s Doyle Research Group. “We thought that we would need more evidence to prove that we were actually cooling the molecule, but then when we saw the signal, it was like, ‘Yeah, nobody will doubt that.’ It was big and it was right there.”

The study led by Mitra and graduate student Nathaniel B. Vilas is the focus of a new paper published in Science. In it, the group describes using a novel method combining cryogenic technology and direct laser light to cool the nonlinear polyatomic molecule calcium monomethoxide (CaOCH3) to just above absolute zero.

The scientists believe their experiment marks the first time such a large complex molecule has been cooled using laser light and say it unlocks new avenues of study in quantum simulation and computation, particle physics, and quantum chemistry.

“These kinds of molecules have structure that is ubiquitous in chemical and biological systems,” said John M. Doyle, the Henry B. Silsbee Professor of Physics and senior author on the paper. “Controlling perfectly their quantum states is basic research that could shed light on fundamental quantum processes in these building blocks of nature.”

Lasers have been used to control atoms and molecules — the eventual building blocks of a quantum computer — since the 1960s, and the approach has since revolutionized atomic, molecular, and optical physics.

The technique essentially works by firing a laser at the particles, causing the atoms or molecules to absorb photons from the light and recoil in the opposite direction. This eventually slows them down and can even stop them in their tracks. When that happens, quantum mechanics becomes the dominant way to describe and study their motion.

“The idea is that on one end of the spectrum there are atoms that have very few quantum states,” Doyle said. Because of this, these atoms are easy to control with light since they often remain in the same quantum state after absorbing and emitting light, he said. “With molecules they have motion that does not occur in atoms — vibrations and rotations. When the molecule absorbs and emits light this process can sometimes make the molecule spin around or vibrate internally. When this happens, it is now in a different quantum state and absorbing and emitting light no longer works [to cool it]. We have to ‘calm the molecule down,’ get rid of its extra vibration before it can interact with the light the way we want.”

Scientists — including those from the Doyle Group, which is part of the Harvard Department of Physics and a member of the Harvard-MIT Center for Ultracold Atoms — have been able to cool a number of molecules using light, such as diatomic and triatomic molecules, which have two or three atoms, respectively.

Polyatomic molecules, on the other hand, are much more complex and have proven much more difficult to manipulate because of all the vibrations and rotations.

To get around this, the group used a method they pioneered to cool diatomic and triatomic molecules. Researchers set up a sealed cryogenic chamber where they cooled helium to below four kelvin (close to 450 degrees below zero Fahrenheit). This chamber essentially acts as a fridge, and it is there that the scientists created the molecule CaOCH3. Right off the bat, it was already moving at a much slower velocity than it normally would, making it ideal for further cooling.

Next came the lasers. They turned on two beams of light coming at the molecule from opposing directions. These counterpropagating lasers prompted a reaction known as Sisyphus cooling. Mitra says the name is fitting since in Greek mythology Sisyphus is punished by having to roll a giant boulder up a hill for eternity, only for it to roll back down when he nears the top.

The same principle happens here with molecules, Mitra said. When two identical laser beams are firing in opposite directions, they form a standing wave of light. There are places where the light is less intense and there are places where it is stronger. This wave is what forms a metaphorical hill for the molecules.

The “molecule starts at the bottom of a hill formed by the counter-propagating laser beams and it starts climbing that hill just because it has some kinetic energy in it and as it climbs that hill, slowly, the kinetic energy that was its velocity gets converted into potential energy and it slows down and slows down and slows down until it gets to the top of the hill where it’s the slowest,” he said.

At that point, the molecule moves closer to a region where the light intensity is high, where it is more likely to absorb a photon and roll back down the opposite side. “All they can do is keep doing this again and again and again,” Mitra said.

By looking at images from cameras placed outside the sealed chamber, the scientists then inspect how much a cloud of these molecules expands as it travels through the system. The smaller the width of the cloud, the less kinetic energy it has — therefore the colder it is.

Analyzing the data further, they saw just how cold. They took it from 22 millikelvin to about 1 millikelvin — in other words, just a few thousandths of a degree above absolute zero.
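The step from cloud width to temperature follows standard time-of-flight thermometry: a freely expanding cloud grows at a rate set by its velocity spread, which in turn reflects temperature. The molecule mass, cloud widths and expansion time below are invented for illustration; they are not the paper's data.

```python
# Time-of-flight thermometry sketch (assumed, illustrative numbers).
# A colder cloud expands less: sigma(t)^2 = sigma0^2 + (k_B * T / m) * t^2.
K_B = 1.380649e-23           # Boltzmann constant, J/K
M_CAOCH3 = 71 * 1.6605e-27   # approximate CaOCH3 mass (~71 u), kg

def temperature_from_expansion(sigma0, sigma_t, t, mass=M_CAOCH3):
    """Infer temperature from cloud widths before/after free expansion."""
    sigma_v_sq = (sigma_t**2 - sigma0**2) / t**2   # velocity variance, (m/s)^2
    return mass * sigma_v_sq / K_B                 # temperature, K

# e.g. a cloud growing from 1 mm to 3 mm over 5 ms of free flight
T = temperature_from_expansion(1e-3, 3e-3, 5e-3)
print(f"{T * 1e3:.2f} mK")  # → 2.73 mK
```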

In the paper, the scientists lay out ways to get the molecule even colder and describe some of the doors it opens across a range of modern physics and chemistry research frontiers. The scientists explain that the study is a proof of concept that their method could be used to cool other carefully chosen complex molecules to help advance quantum science.

“What we did here is sort of extending the state of the art,” Mitra said. “It’s always been debated whether we would ever have technology that will be good enough to control complex molecules at the quantum level. This particular experiment is just a stepping stone.”



A new twist on DNA origami

A team of scientists from ASU and Shanghai Jiao Tong University (SJTU) led by Hao Yan, ASU’s Milton Glick Professor in the School of Molecular Sciences, and director of the ASU Biodesign Institute’s Center for Molecular Design and Biomimetics, has just announced the creation of a new type of meta-DNA structures that will open up the fields of optoelectronics (including information storage and encryption) as well as synthetic biology.

This research was published today in Nature Chemistry; the meta-DNA self-assembly concept may transform the microscopic world of structural DNA nanotechnology.

It is common knowledge that the predictable nature of Watson-Crick base-pairing and the structural features of DNA have allowed DNA to be used as a versatile building block to engineer sophisticated nanoscale structures and devices.

“A milestone in DNA technology was certainly the invention of DNA origami, where a long single-stranded DNA (ssDNA) is folded into designated shapes with the help of hundreds of short DNA staple strands,” explained Yan. “However it has been challenging to assemble larger (micron to millimeter) sized DNA architectures, which up until recently has limited the use of DNA origami.” The new micron-sized structures are on the order of the width of a human hair, roughly 1,000 times larger than the original DNA nanostructures.

Ever since gracing the cover of Science Magazine in 2011 with their elegant DNA origami nanostructures, Yan and collaborators have been working tirelessly, capitalizing on inspiration from nature, seeking to solve complex human problems.

“In this current research we developed a versatile ‘meta-DNA’ (M-DNA) strategy that allowed various sub-micrometer to micrometer sized DNA structures to self-assemble in a manner similar to how simple short DNA strands self-assemble at the nanoscale level,” said Yan.

The group demonstrated that a 6-helix bundle DNA origami nanostructure in the sub-micrometer scale (meta-DNA) could be used as a magnified analogue of single-stranded DNA (ssDNA), and that two meta-DNAs containing complementary “meta-base pairs” could form double helices with programmed handedness and helical pitches.

Using meta-DNA building blocks they have constructed a series of sub-micrometer to micrometer scale DNA architectures, including meta-multi-arm junctions, 3D polyhedrons, and various 2D/3D lattices. They also demonstrated a hierarchical strand-displacement reaction on meta-DNA to transfer the dynamic features of DNA to the meta-DNA.

With the help of assistant professor Petr Sulc (SMS) they used a coarse-grained computational model of the DNA to simulate the double-stranded M-DNA structure and to understand the different yields of left-handed and right-handed structures that were obtained.

Further, by just changing the local flexibility of the individual M-DNA and their interactions, they were able to build a series of sub-micrometer or micron-scale DNA structures from 1D to 3D with a wide variety of geometric shapes, including meta-junctions, meta-double crossover tiles (M-DX), tetrahedrons, octahedrons, prisms, and six types of closely packed lattices.

In the future, more complicated circuits, molecular motors, and nanodevices could be rationally designed using M-DNA and used in applications related to biosensing and molecular computation. This research will make the creation of dynamic micron-scale DNA structures, that are reconfigurable upon stimulation, significantly more feasible.

The authors anticipate that the introduction of this M-DNA strategy will transform DNA nanotechnology from the nanometer to the microscopic scale. This will create a range of complex static and dynamic structures in the sub-micrometer and micron-scale that will enable many new applications.

For example, these structures may be used as a scaffold for patterning complex functional components that are larger and more complex than previously thought possible. This discovery may also lead to more sophisticated and complex behaviors that mimic cell or cellular components with a combination of different M-DNA based hierarchical strand displacement reactions.

Story Source:

Materials provided by Arizona State University. Note: Content may be edited for style and length.



‘Floppy’ atomic dynamics help turn heat into electricity

Materials scientists at Duke University have uncovered an atomic mechanism that makes certain thermoelectric materials incredibly efficient near high-temperature phase transitions. The information will help fill critical knowledge gaps in the computational modeling of such materials, potentially allowing researchers to discover new and better options for technologies that rely on transforming heat into electricity.

The results appear online on September 4 in the journal Nature Communications.

Thermoelectric materials convert heat into electricity when electrons migrate from the hot side of the material to the cold side. Because a temperature difference between the two sides is required, researchers are interested in using these materials to generate electricity from the heat of a car’s tailpipe or to recover energy lost as heat in power plants.

Over the past couple of years, new records were set for thermoelectric efficiency with an emerging material called tin selenide and its sister compound, tin sulfide. The sulfide version is not quite as good a thermoelectric yet, but it is being optimized further because it is cheaper to produce and more environmentally friendly.

While scientists know that both of these compounds are excellent thermoelectric materials, they don’t exactly know why. In the new study, Olivier Delaire, associate professor of mechanical engineering and materials science at Duke, and two of his graduate students, Tyson Lanigan-Atkins and Shan Yang, tried to fill in a bit of that knowledge gap.

“We wanted to try to understand why these materials have such low thermal conductivity, which helps enable the strong thermoelectric properties they’re known for,” said Delaire. “Using a powerful combination of neutron scattering measurements and computer simulations, we discovered that it’s related to the material’s atomic vibrations at high temperature, which nobody had seen before.”

Low thermal conductivity is a necessary ingredient of any good thermoelectric material. Because electricity generation requires a heat differential between the material’s two sides, it makes sense that materials that stop heat from spreading across them perform well.

To get a view of tin sulfide’s atomic vibrations in action, Delaire and Lanigan-Atkins took samples to the High Flux Isotope Reactor at Oak Ridge National Laboratory. By ricocheting neutrons off of the tin sulfide’s atoms and detecting where they ended up afterward, the researchers could determine where the atoms were and how they were collectively vibrating in the crystal’s lattice.

The facilities at ORNL were particularly well-suited for the task. Because the atomic vibrations of tin sulfide are relatively slow, the researchers needed low-energy “cold” neutrons delicate enough to see them. And ORNL has some of the best cold-neutron instruments in the world.

“We found that the tin sulfide effectively has certain modes of vibration that are very ‘floppy,'” said Delaire. “And that its properties are connected with inherent instability in its crystal lattice.”

At lower temperatures, tin sulfide is a layered material with distorted grids of tin and sulfide lying on top of one another, corrugated like an accordion. But at temperatures near its phase transition point of 980 degrees Fahrenheit — which is where thermoelectric generators often operate — that distorted environment begins to break down. The two layers, as if by magic, become undistorted again and more symmetric, which is where the “floppiness” comes into play.

Because the material is sloshing between the two structural arrangements at high temperature, its atoms no longer vibrate together like a well-tuned guitar string and instead become anharmonically damped. To understand this better, think of a car with terrible shocks as having a harmonic vibration — it will keep bouncing long after going over the slightest bump. But proper shocks will dampen that vibration, making it anharmonic and stopping it from oscillating for a long time.
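The shock-absorber analogy amounts to comparing decay envelopes of an oscillation: an undamped (harmonic) vibration keeps ringing, while a damped (anharmonic) one dies away quickly. This is a minimal sketch with illustrative decay rates, not material parameters.

```python
import math

# Amplitude envelope of a damped oscillation, exp(-gamma * t).
# gamma = 0 is the 'car with terrible shocks' (keeps bouncing);
# gamma > 0 is the damped, anharmonic case (vibration dies out).
def envelope(t, gamma):
    return math.exp(-gamma * t)

for label, gamma in [("harmonic (no damping)", 0.0), ("anharmonic (damped)", 1.5)]:
    amps = [round(envelope(t, gamma), 3) for t in (0, 1, 2, 3)]
    print(label, amps)
```

The damped envelope falling to a few percent of its starting amplitude within a couple of periods is the qualitative picture of "floppy" modes that neither travel fast nor ring for long.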

“Heat waves travel through atomic vibrations in a material,” said Delaire. “So when the atomic vibrations in tin sulfide become floppy, they don’t transmit vibrations very quickly and they also don’t vibrate for very long. That’s the root cause of its ability to stop heat from traveling within it.”

With these results in hand, Delaire and Yang then sought to confirm and understand them computationally. Using supercomputers at Lawrence Berkeley National Laboratory, Yang was able to reproduce the same anharmonic effects at high temperatures. Besides confirming what they saw in the experiments, Delaire says these updated models will allow researchers to better search for new thermoelectric materials to use in tomorrow’s technologies.

“Researchers in the field have not been accounting for strong temperature dependences on heat propagation velocities, and this modeling shows just how important that variable can be,” said Delaire. “Adopting these results and other theoretical advances will make it easier for materials scientists to predict other good thermoelectric materials.”

Story Source:

Materials provided by Duke University. Original written by Ken Kingery. Note: Content may be edited for style and length.



New anode material could lead to safer fast-charging batteries

Scientists at UC San Diego have discovered a new anode material that enables lithium-ion batteries to be safely recharged within minutes for thousands of cycles. Known as a disordered rocksalt, the new anode is made up of earth-abundant lithium, vanadium and oxygen atoms arranged much like those in ordinary kitchen table salt, but randomly distributed. It is promising for commercial applications where both high energy density and high power are desired, such as electric cars, vacuum cleaners or drills.

The study, jointly led by nanoengineers in the labs of Professors Ping Liu and Shyue Ping Ong, was published in Nature on September 2.

Currently, two materials are used as anodes in most commercially available lithium-ion batteries that power items like cell phones, laptops and electric vehicles. The most common, a graphite anode, is extremely energy dense — a lithium ion battery with a graphite anode can power a car for hundreds of miles without needing to be recharged. However, recharging a graphite anode too quickly can result in fire and explosions due to a process called lithium metal plating. A safer alternative, the lithium titanate anode, can be recharged rapidly but results in a significant decrease in energy density, which means the battery needs to be recharged more frequently.

This new disordered rocksalt anode — Li3V2O5 — sits in an important middle ground: it is safer to use than graphite, yet offers a battery with at least 71% more energy than lithium titanate.

“The capacity and energy will be a little bit lower than graphite, but it’s faster, safer and has a longer life. It has a much lower voltage and therefore much improved energy density over current commercialized fast charging lithium-titanate anodes,” said Haodong Liu, a postdoctoral scholar in Professor Ping Liu’s lab and first author of the paper. “So with this material we can make fast-charging, safe batteries with a long life, without sacrificing too much energy density.”

The researchers formed a company called Tyfast in order to commercialize this discovery. The startup’s first markets will be electric buses and power tools, since the characteristics of the Li3V2O5 disordered rocksalt make it ideal for use in devices where recharging can be easily scheduled.

Researchers in Professor Liu’s lab plan to continue developing this lithium-vanadium oxide anode material, while also optimizing other battery components to develop a commercially viable full cell.

“For a long time, the battery community has been looking for an anode material operating at a potential just above graphite to enable safe, fast charging lithium-ion batteries. This material fills an important knowledge and application gap,” said Ping Liu. “We are excited for its commercial potential since the material can be a drop-in solution for today’s lithium-ion battery manufacturing process.”

Why try this material?

Researchers first experimented with disordered rocksalt as a battery cathode six years ago. Since then, much work has been done to turn the material into an efficient cathode. Haodong Liu said the UC San Diego team decided to test the material as an anode based on a hunch.

“When people use it as a cathode they have to discharge the material to 1.5 volts,” he said. “But when we looked at the structure of the cathode material at 1.5 volts, we thought this material has a special structure that may be able to host more lithium ions — that means it can go to even lower voltage to work as an anode.”

In the study, the team found that their disordered rocksalt anode could reversibly cycle two lithium ions at an average voltage of 0.6 V. That is higher than the 0.1 V of graphite, which eliminates lithium metal plating at high charge rates and makes the battery safer; yet it is lower than the 1.5 V at which lithium titanate intercalates lithium, so the battery stores much more energy.
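The voltage trade-off can be put in rough numbers. Cell voltage is the cathode potential minus the anode potential, so a lower-voltage anode yields a higher-energy cell at the same capacity. The ~3.8 V cathode and equal-capacity assumption below are illustrative, not from the study, and the reported 71% figure also reflects capacity differences that this voltage-only sketch ignores.

```python
# Back-of-envelope look at why anode voltage matters for cell energy.
# Assumed: a generic ~3.8 V cathode and equal anode capacities
# (illustrative assumptions only, not values from the paper).
CATHODE_V = 3.8

anodes = {"graphite": 0.1, "Li3V2O5": 0.6, "lithium titanate": 1.5}

for name, anode_v in anodes.items():
    cell_v = CATHODE_V - anode_v
    print(f"{name}: cell voltage {cell_v:.1f} V")

# Voltage contribution alone of swapping titanate for Li3V2O5:
gain = (CATHODE_V - 0.6) / (CATHODE_V - 1.5) - 1
print(f"energy gain from voltage alone: {gain:.0%}")  # → 39%
```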

The researchers showed that the Li3V2O5 anode can be cycled for over 6,000 cycles with negligible capacity decay, and can charge and discharge energy rapidly, delivering over 40 percent of its capacity in 20 seconds. The low voltage and high rate of energy transfer are due to a unique redistributive lithium intercalation mechanism with low energy barriers.
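The quoted burst performance can be translated into a C-rate, the conventional measure of current relative to discharging the full capacity in one hour:

```python
# Average C-rate implied by "over 40 percent of its capacity in 20 seconds".
fraction = 0.40   # fraction of capacity delivered
seconds = 20      # duration of the burst

c_rate = fraction / (seconds / 3600)  # capacity-fractions per hour
print(f"~{c_rate:.0f}C average rate over that burst")  # → ~72C
```

For comparison, a battery discharged at 1C would take a full hour to empty, so this is an extremely high-power burst.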

Postdoctoral scholar Zhuoying Zhu, from Professor Shyue Ping Ong’s Materials Virtual Lab, performed theoretical calculations to understand why the disordered rocksalt Li3V2O5 anode works as well as it does.

“We discovered that Li3V2O5 operates via a charging mechanism that is different from other electrode materials. The lithium ions rearrange themselves in a way that results in both low voltage as well as fast lithium diffusion,” said Zhuoying Zhu.

“We believe there are other electrode materials waiting to be discovered that operate on a similar mechanism,” added Ong.

The experimental studies at UC San Diego were funded by awards from the UC San Diego startup fund to Ping Liu, while the theoretical studies were funded by the Department of Energy and the National Science Foundation’s Data Infrastructure Building Blocks (DIBBS) Local Spectroscopy Data Infrastructure program, and used resources at the San Diego Supercomputer Center provided under the Extreme Science and Engineering Discovery Environment (XSEDE).



A new method for making a key component of plastics

Scientists have discovered a previously unknown way that some bacteria produce the chemical ethylene — a finding that could lead to new ways to produce plastics without using fossil fuels.

The study, published today (Aug. 27, 2020) in the journal Science, showed that the bacteria created ethylene gas as a byproduct of metabolizing sulfur, which they need to survive.

But the process that the bacteria use to do that could make it very valuable in manufacturing, said Justin North, lead author of the study and a research scientist in microbiology at The Ohio State University.

“We may have cracked a major technological barrier to producing a large amount of ethylene gas that could replace fossil fuel sources in making plastics,” North said.

“There’s still a lot of work to do to develop these strains of bacteria to produce industrially significant quantities of ethylene gas. But this opens the door.”

Researchers from Ohio State worked on the study with colleagues from Colorado State University, Oak Ridge National Laboratory and the Pacific Northwest National Laboratory.

Ethylene is widely used in the chemical industry to make nearly all plastics, North said. It is used more than any other organic compound in manufacturing.

Currently, oil or natural gas is used to create ethylene. Other researchers have discovered bacteria that can also create the chemical, but there had been a technological barrier to using it — the need for oxygen as part of the process, said Robert Tabita, senior author of the study and professor of microbiology at Ohio State.

“Oxygen plus ethylene is explosive, and that is a major hurdle for using it in manufacturing,” said Tabita, who is an Ohio Eminent Scholar.

“But the bacterial system we discovered to produce ethylene works without oxygen and that gives us a significant technological advantage.”

The discovery was made in Tabita’s lab at Ohio State when researchers were studying Rhodospirillum rubrum bacteria. They noticed that the bacteria were acquiring the sulfur they needed to grow from methylthio ethanol.

“We were trying to understand how the bacteria were doing this, because there were no known chemical reactions for how this was occurring,” North said.

That was when he decided to see what gases the bacteria were producing — and discovered ethylene gas was among them.

Working with colleagues from Colorado State and the two national labs, North, Tabita and other Ohio State colleagues were able to identify the previously unknown process that liberated the sulfur the bacteria needed, along with what North called the “happy byproduct” of ethylene.

That wasn’t all: The researchers also discovered the bacteria were using dimethyl sulfide to create methane, a potent greenhouse gas.

All the research was done in the lab, so it remains to be seen exactly how common this process is in the environment, North said.

But the researchers have identified one situation where this newly discovered process of ethylene production may have real-life consequences.

Ethylene is an important natural plant hormone that, in the right amounts, is key to the growth and health of plants. But it is also harmful to plant growth in high quantities, said study co-author Kelly Wrighton, associate professor of soil and crop science at Colorado State University.

“This newly discovered pathway may shed light on many previously unexplained environmental phenomena, including the large amounts of ethylene that accumulates to inhibitory levels in waterlogged soils, causing extensive crop damage,” Wrighton said.

Added North: “Now that we know how it happens, we may be able to circumvent or treat these problems so that ethylene doesn’t accumulate in soils when flooding occurs.”

Tabita said this research is the result of a happy accident.

“This study, involving the collaborative research and expertise of two universities and two national laboratories, is a perfect example of how serendipitous findings often lead to important advances,” Tabita said.

“Initially, our studies involved a totally unrelated research problem that had seemingly no relationship to the findings reported here.”

While studying the role of one particular protein in bacterial sulfur metabolism, the researchers noted that an entirely different group of proteins was unexpectedly involved as well. This led to the discovery of novel metabolic reactions and the unexpected production of large quantities of ethylene.

“It was a result we could not predict in a million years,” Tabita said.

“Recognizing the industrial and environmental significance of ethylene, we embarked on these cooperative studies, and subsequently discovered a completely novel complex enzyme system. Who would have believed it?”

The research was supported by the Department of Energy’s Office of Science, the National Cancer Institute and the National Science Foundation.
