Microfluidic system with cell-separating powers may unravel how novel pathogens attack

To develop effective therapeutics against pathogens, scientists need to first uncover how they attack host cells. An efficient way to conduct these investigations on an extensive scale is through high-speed screening tests called assays.

Researchers at Texas A&M University have invented a high-throughput cell separation method that can be used in conjunction with droplet microfluidics, a technique whereby tiny drops of fluid containing biological or other cargo can be moved precisely and at high speeds. Specifically, the researchers successfully isolated pathogens attached to host cells from those that were unattached within a single fluid droplet using an electric field.

“Other than cell separation, most biochemical assays have been successfully converted into droplet microfluidic systems that allow high-throughput testing,” said Arum Han, professor in the Department of Electrical and Computer Engineering and principal investigator of the project. “We have addressed that gap, and now cell separation can be done in a high-throughput manner within the droplet microfluidic platform. This new system certainly simplifies studying host-pathogen interactions, but it is also very useful for environmental microbiology or drug screening applications.”

The researchers reported their findings in the August issue of the journal Lab on a Chip.

Microfluidic devices consist of networks of micron-sized channels or tubes that allow for controlled movements of fluids. Recently, microfluidics using water-in-oil droplets have gained popularity for a wide range of biotechnological applications. These droplets, which are picoliters (or a million times less than a microliter) in volume, can be used as platforms for carrying out biological reactions or transporting biological materials. Millions of droplets within a single chip facilitate high-throughput experiments, saving not just laboratory space but the cost of chemical reagents and manual labor.

Biological assays can involve different cell types within a single droplet, which eventually need to be separated for subsequent analyses. This task is extremely challenging in a droplet microfluidic system, Han said.

“Getting cell separation within a tiny droplet is extremely difficult because, if you think about it, first, it’s a tiny 100-micron diameter droplet, and second, within this extremely tiny droplet, multiple cell types are all mixed together,” he said.
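As a quick back-of-the-envelope check (the arithmetic here is illustrative and not from the paper), the 100-micron diameter Han mentions corresponds to roughly half a nanolitre per droplet, which is why millions of independent reaction compartments can fit on a single chip:

```python
# Back-of-the-envelope check (illustrative only): volume of a 100-micron droplet
# and an upper bound on how many such droplets fit in a millilitre of aqueous phase.
import math

diameter_um = 100.0                            # droplet diameter quoted above
volume_um3 = math.pi / 6 * diameter_um**3      # sphere volume = (pi/6) * d^3
volume_pL = volume_um3 * 1e-3                  # 1 um^3 = 1e-15 L = 1e-3 pL
droplets_per_mL = 1e9 / volume_pL              # 1 mL = 1e9 pL, ignoring the oil phase

print(f"one droplet ~ {volume_pL:.0f} pL (~{volume_pL/1000:.2f} nL)")
print(f"at most ~{droplets_per_mL:.1e} droplets per mL of aqueous phase")
```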

To develop the technology needed for cell separation, Han and his team chose a host-pathogen model system consisting of the salmonella bacteria and the human macrophage, a type of immune cell. When both these cell types are introduced within a droplet, some of the bacteria adhere to the macrophage cells. The goal of their experiments was to separate the salmonella that attached to the macrophage from the ones that did not.

For cell separation, Han and his team constructed two pairs of electrodes that generated an oscillating electric field in close proximity to the droplet containing the two cell types. Since the bacteria and the host cells have different shapes, sizes and electrical properties, they found that the electric field produced a different force on each cell type. This force moved one cell type at a time, separating the cells into two different locations within the droplet. To split the mother droplet into two daughter droplets, each containing a single cell type, the researchers also added a Y-shaped splitting junction downstream.
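The article does not name the underlying mechanism, but separation by an oscillating, non-uniform electric field acting differently on particles of different size and electrical properties is conventionally described by dielectrophoresis (DEP). The sketch below shows the standard time-averaged DEP force expression for a spherical particle; all numerical parameters are hypothetical placeholders, not values from the study.

```python
# Sketch only (the study's exact mechanism is not named in the article): the
# time-averaged dielectrophoretic force on a spherical particle in a non-uniform
# AC field is commonly written as
#   F = 2 * pi * eps_m * r**3 * Re[K(w)] * grad(|E_rms|**2),
# with Clausius-Mossotti factor K(w) = (eps_p* - eps_m*) / (eps_p* + 2*eps_m*),
# where eps* = eps - j*sigma/w. Different sizes and electrical properties give the
# bacteria and the macrophages different forces. All values below are hypothetical.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def re_clausius_mossotti(eps_p, sigma_p, eps_m, sigma_m, freq_hz):
    """Real part of the Clausius-Mossotti factor."""
    w = 2 * math.pi * freq_hz
    ep = eps_p * EPS0 - 1j * sigma_p / w   # complex permittivity of the particle
    em = eps_m * EPS0 - 1j * sigma_m / w   # complex permittivity of the medium
    return ((ep - em) / (ep + 2 * em)).real

def dep_force(radius_m, re_k, eps_m_rel, grad_E2):
    """Time-averaged DEP force magnitude in newtons."""
    return 2 * math.pi * eps_m_rel * EPS0 * radius_m**3 * re_k * grad_E2

freq = 1e6        # 1 MHz drive frequency (hypothetical)
grad_E2 = 1e13    # gradient of |E_rms|^2 in V^2/m^3 (hypothetical)
for name, radius, eps_p, sigma_p in [("bacterium (~1 um)", 0.5e-6, 60, 0.25),
                                     ("macrophage (~10 um)", 5.0e-6, 75, 0.50)]:
    k = re_clausius_mossotti(eps_p, sigma_p, eps_m=78, sigma_m=0.01, freq_hz=freq)
    print(f"{name}: Re[K] = {k:+.2f}, |F_DEP| ~ {dep_force(radius, k, 78, grad_E2):.2e} N")
```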

Han said that although these experiments were carried out with a host and pathogen whose interaction is well established, their new microfluidic system equipped with in-drop separation is most useful when the pathogenicity of a bacterial species is unknown. He added that their technology enables quick, high-throughput screening in these situations and in other applications where cell separation is required.

“Liquid handling robotic hands can conduct millions of assays but are extremely costly. Droplet microfluidics can do the same in millions of droplets, much faster and much cheaper,” Han said. “We have now integrated cell separation technology into droplet microfluidic systems, allowing the precise manipulation of cells in droplets in a high-throughput manner, which was not possible before.”

Story Source:

Materials provided by Texas A&M University. Original written by Vandana Suresh. Note: Content may be edited for style and length.

Researchers develop new biomaterial that helps bones heal faster

Scientists have developed a new biomaterial that helps bones heal faster by enhancing adults’ stem cell regenerative ability.

The study, led by researchers from RCSI University of Medicine and Health Sciences and CHI at Temple Street, is published in the current edition of Biomaterials, the highest ranked journal in the field of biomaterials science.

The researchers had previously discovered that a molecule called JNK3 is a key reason why children’s stem cells are more sensitive to their environment and regenerate better than adults’ stem cells. This explains, at least partially, why children’s bones are able to heal more quickly. Building on this knowledge, they created a biomaterial that mimics the structure of bone tissue and incorporates nanoparticles that activate JNK3.

When tested in a pre-clinical model, the biomaterial quickly repaired large bone defects and reduced inflammation after a month of use. The biomaterial also proved to be safer than, and as effective as, other drug-loaded biomaterials for bone repair whose use has been controversially associated with dangerous side-effects, including cancer, infection and off-site bone formation.

“While more testing is needed before we can begin clinical trials, these results are very promising,” said Professor Fergal O’Brien, the study’s principal investigator and RCSI’s Director of Research and Innovation.

“This study has shown that understanding stem cell mechanobiology can help identify alternative therapeutic molecules for repairing large defects in bone, and potentially other body tissues. In a broader sense, this project is a great example of how growing our understanding of mechanobiology can identify new treatments that directly benefit patients — a key goal of what we do here at RCSI.”

The work was carried out by researchers from the Tissue Engineering Research Group (TERG) and SFI AMBER Centre based at RCSI in collaboration with a team from Children’s Health Ireland (CHI) at Temple Street Hospital. The CHI at Temple Street team was led by Mr Dylan Murray, a lead consultant craniofacial, plastic and reconstructive surgeon at the National Paediatric Craniofacial Centre (NPCC), who has collaborated with the RCSI team for a number of years.

“It is very exciting to be part of this translational project, in which the participation and consent of the patients of the NPCC at Temple Street, who donated harvested bone cells, have contributed immensely to this success,” said Mr Murray.

The research was funded by the Children’s Health Foundation Temple Street (RPAC-2013-06), Health Research Board of Ireland under the Health Research Awards — Patient-Oriented Research scheme (HRA-POR-2014-569), European Research Council (ERC) under Horizon 2020 (ReCaP project #788753) and Science Foundation Ireland (SFI) through the Advanced Materials and Bioengineering Research (AMBER) Centre (SFI/12/RC/2278).

“We have now proven that identifying mechanobiology-inspired therapeutic targets can be used to engineer smart biomaterials that recreate children’s superior healing capacity in adults’ stem cells,” said Dr Arlyng Gonzalez Vazquez, the study’s first author and a research fellow in TERG.

“We are using the same strategy to develop a novel biomaterial for cartilage repair in adults. A follow-up project recently funded by Children’s Health Foundation Temple Street is also aiming to utilise a similar scientific approach to identify whether the molecular mechanisms found in children diagnosed with craniosynostosis (a condition in which the skull fuses too early, restricting brain growth) could be used to develop a therapeutic biomaterial that accelerates bone formation and bone healing in adults.”

Story Source:

Materials provided by RCSI. Note: Content may be edited for style and length.

Neutrinos yield first experimental evidence of catalyzed fusion dominant in many stars

An international team of about 100 scientists of the Borexino Collaboration, including particle physicist Andrea Pocar at the University of Massachusetts Amherst, report in Nature this week detection of neutrinos from the sun, directly revealing for the first time that the carbon-nitrogen-oxygen (CNO) fusion-cycle is at work in our sun.

The CNO cycle is the dominant energy source powering stars heavier than the sun, but it had so far never been directly detected in any star, Pocar explains.

For much of their life, stars get energy by fusing hydrogen into helium, he adds. In stars like our sun or lighter, this mostly happens through the ‘proton-proton’ chains. However, many stars are heavier and hotter than our sun and include elements heavier than helium in their composition, a quality known as metallicity. Since the 1930s, the prediction has been that the CNO cycle is dominant in heavy stars.

Neutrinos emitted as part of these processes carry a spectral signature that allows scientists to distinguish those produced by the ‘proton-proton’ chains from those produced by the ‘CNO cycle.’ Pocar points out, “Confirmation of CNO burning in our sun, where it operates at only one percent, reinforces our confidence that we understand how stars work.”

Beyond this, CNO neutrinos can help resolve an important open question in stellar physics, he adds: how the sun’s central metallicity, which can only be determined from the CNO neutrino rate in the core, is related to the metallicity elsewhere in the star. Traditional models have run into a difficulty: the surface metallicity measured by spectroscopy does not agree with the sub-surface metallicity inferred from a different method, helioseismology observations.

Pocar says neutrinos are really the only direct probe science has for the core of stars, including the sun, but they are exceedingly difficult to measure. As many as 420 billion of them hit every square inch of the earth’s surface per second, yet virtually all pass through without interacting. Scientists can only detect them using very large detectors with exceptionally low background radiation levels.
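As a sense-of-scale conversion (simple arithmetic, not a figure from the paper), the per-square-inch number quoted above is equivalent to the per-square-centimetre flux more commonly cited for solar neutrinos:

```python
# Unit conversion only: 420 billion neutrinos per square inch per second,
# expressed per square centimetre.
per_in2 = 420e9            # neutrinos per square inch per second, as quoted above
cm2_per_in2 = 2.54 ** 2    # 1 inch = 2.54 cm exactly
per_cm2 = per_in2 / cm2_per_in2
print(f"~{per_cm2:.1e} neutrinos per cm^2 per second")   # ~6.5e10
```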

The Borexino detector lies deep under the Apennine Mountains in central Italy at the INFN’s Laboratori Nazionali del Gran Sasso. It detects neutrinos as flashes of light produced when neutrinos collide with electrons in 300 tons of ultra-pure organic scintillator. Its great depth, size and purity make Borexino a unique detector for this type of science, alone in its class for low-background radiation, Pocar says. The project was initiated in the early 1990s by a group of physicists led by Gianpaolo Bellini at the University of Milan, Frank Calaprice at Princeton and the late Raju Raghavan at Bell Labs.

Until its latest detections, the Borexino collaboration had successfully measured components of the ‘proton-proton’ solar neutrino fluxes, helped refine neutrino flavor-oscillation parameters, and, most impressively, even measured the first step in the chain: the very low-energy ‘pp’ neutrinos, Pocar recalls.

Its researchers dreamed of expanding the science scope to also look for the CNO neutrinos — in a narrow spectral region with particularly low background — but that prize seemed out of reach. However, research groups at Princeton, Virginia Tech and UMass Amherst believed CNO neutrinos might yet be revealed using the additional purification steps and methods they had developed to realize the exquisite detector stability required.

Over the years and thanks to a sequence of moves to identify and stabilize the backgrounds, the U.S. scientists and the entire collaboration were successful. “Beyond revealing the CNO neutrinos which is the subject of this week’s Nature article, there is now even a potential to help resolve the metallicity problem as well,” Pocar says.

Before the CNO neutrino discovery, the lab had scheduled Borexino to end operations at the close of 2020. But because the data used in the analysis for the Nature paper was frozen, scientists have continued collecting data, as the central purity has continued to improve, making a new result focused on the metallicity a real possibility, Pocar says. Data collection could extend into 2021 since the logistics and permitting required, while underway, are non-trivial and time-consuming. “Every extra day helps,” he remarks.

Pocar has been with the project since his graduate school days at Princeton in the group led by Frank Calaprice, where he worked on the design, construction of the nylon vessel and the commissioning of the fluid handling system. He later worked with his students at UMass Amherst on data analysis and, most recently, on techniques to characterize the backgrounds for the CNO neutrino measurement.

This work was supported in the U.S. by the National Science Foundation. Borexino is an international collaboration also funded by the Italian National Institute for Nuclear Physics (INFN), and funding agencies in Germany, Russia and Poland.

Research creates hydrogen-producing living droplets, paving way for alternative future energy source

Scientists have built tiny droplet-based microbial factories that produce hydrogen, instead of oxygen, when exposed to daylight in air.

The findings of the international research team based at the University of Bristol and Harbin Institute of Technology in China, are published today in Nature Communications.

Normally, algal cells fix carbon dioxide and produce oxygen by photosynthesis. The study used sugary droplets packed with living algal cells to generate hydrogen, rather than oxygen, by photosynthesis.

Hydrogen is potentially a climate-neutral fuel, offering many possible uses as a future energy source. A major drawback is that making hydrogen involves using a lot of energy, so green alternatives are being sought and this discovery could provide an important step forward.

The team, comprising Professor Stephen Mann and Dr Mei Li from Bristol’s School of Chemistry together with Professor Xin Huang and colleagues at Harbin Institute of Technology in China, trapped ten thousand or so algal cells in each droplet, which were then crammed together by osmotic compression. By burying the cells deep inside the droplets, oxygen levels fell to a level that switched on special enzymes called hydrogenases that hijacked the normal photosynthetic pathway to produce hydrogen. In this way, around a quarter of a million microbial factories, typically only one-tenth of a millimetre in size, could be prepared in one millilitre of water.
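A quick consistency check of those figures (the numbers are the ones quoted above; the arithmetic is illustrative):

```python
# Illustrative arithmetic using the figures quoted above: ~250,000 droplets of
# roughly 0.1 mm diameter per millilitre, each containing ~10,000 algal cells.
import math

droplet_diameter_cm = 0.1 / 10          # 0.1 mm expressed in cm
droplets_per_mL = 2.5e5
cells_per_droplet = 1e4

droplet_volume_mL = math.pi / 6 * droplet_diameter_cm ** 3    # 1 cm^3 = 1 mL
packing_fraction = droplets_per_mL * droplet_volume_mL        # share of the mL that is droplet
total_cells = droplets_per_mL * cells_per_droplet

print(f"one droplet ~ {droplet_volume_mL * 1e6:.2f} nL")      # ~0.52 nL
print(f"droplets occupy ~{packing_fraction:.0%} of the millilitre")
print(f"~{total_cells:.1e} algal cells per mL in total")
```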

To increase the level of hydrogen evolution, the team coated the living micro-reactors with a thin shell of bacteria, which were able to scavenge for oxygen and therefore increase the number of algal cells geared up for hydrogenase activity.

Although still at an early stage, the work provides a step towards photobiological green energy development under natural aerobic conditions.

Professor Stephen Mann, Co-Director of the Max Planck Bristol Centre for Minimal Biology at Bristol, said: “Using simple droplets as vectors for controlling algal cell organization and photosynthesis in synthetic micro-spaces offers a potentially environmentally benign approach to hydrogen production that we hope to develop in future work.”

Professor Xin Huang at Harbin Institute of Technology added: “Our methodology is facile and should be capable of scale-up without impairing the viability of the living cells. It also seems flexible; for example, we recently captured large numbers of yeast cells in the droplets and used the microbial reactors for ethanol production.”

Story Source:

Materials provided by University of Bristol. Note: Content may be edited for style and length.

Growing interest in Moon resources could cause tension

An international team of scientists led by the Center for Astrophysics | Harvard & Smithsonian has identified a problem with the growing interest in extractable resources on the moon: there aren’t enough of them to go around. With no international policies or agreements to decide “who gets what from where,” scientists believe tensions, overcrowding, and quick exhaustion of resources to be one possible future for moon mining projects. The paper was published today in the Philosophical Transactions of the Royal Society A.

“A lot of people think of space as a place of peace and harmony between nations. The problem is there’s no law to regulate who gets to use the resources, and there are a significant number of space agencies and others in the private sector that aim to land on the moon within the next five years,” said Martin Elvis, astronomer at the Center for Astrophysics | Harvard & Smithsonian and the lead author on the paper. “We looked at all the maps of the Moon we could find and found that not very many places had resources of interest, and those that did were very small. That creates a lot of room for conflict over certain resources.”

Resources like water and iron are important because they will enable future research to be conducted on, and launched from, the moon. “You don’t want to bring resources for mission support from Earth, you’d much rather get them from the Moon. Iron is important if you want to build anything on the moon; it would be absurdly expensive to transport iron to the moon,” said Elvis. “You need water to survive; you need it to grow food — you don’t bring your salad with you from Earth — and to split into oxygen to breathe and hydrogen for fuel.”

Interest in the moon as a location for extracting resources isn’t new. An extensive body of research dating back to the Apollo program has explored the availability of resources such as helium, water, and iron, with more recent research focusing on continuous access to solar power, cold traps and frozen water deposits, and even volatiles that may exist in shaded areas on the surface of the moon. Tony Milligan, a Senior Researcher with the Cosmological Visionaries project at King’s College London, and a co-author on the paper said, “Since lunar rock samples returned by the Apollo program indicated the presence of Helium-3, the moon has been one of several strategic resources which have been targeted.”

Although some treaties do exist, like the 1967 Outer Space Treaty, which prohibits national appropriation, and the 2020 Artemis Accords, which reaffirm the duty to coordinate and notify, neither is meant for robust protection. Much of the discussion surrounding the moon, including current and potential policy for governing missions to the satellite, has centered on scientific versus commercial activity and who should be allowed to tap into the resources locked away in, and on, the moon. According to Milligan, it’s a very 20th century debate that doesn’t tackle the actual problem. “The biggest problem is that everyone is targeting the same sites and resources: states, private companies, everyone. But they are limited sites and resources. We don’t have a second moon to move on to. This is all we have to work with.”

Alanna Krolikowski, assistant professor of science and technology policy at Missouri University of Science and Technology (Missouri S&T) and a co-author on the paper, added that a framework for success already exists and, paired with good old-fashioned business sense, may set policy on the right path. “While a comprehensive international legal regime to manage space resources remains a distant prospect, important conceptual foundations already exist and we can start implementing, or at least deliberating, concrete, local measures to address anticipated problems at specific sites today,” said Krolikowski. “The likely first step will be convening a community of prospective users, made up of those who will be active at a given site within the next decade or so. Their first order of business should be identifying worst-case outcomes, the most pernicious forms of crowding and interference, that they seek to avoid at each site. Loss aversion tends to motivate actors.”

There is still a risk that resource locations will turn out to be more scant than currently believed, and scientists want to go back and get a clearer picture of resource availability before anyone starts digging, drilling, or collecting. “We need to go back and map resource hot spots in better resolution. Right now, we only have a few miles at best. If the resources are all contained in a smaller area, the problem will only get worse,” said Elvis. “If we can map the smallest spaces, that will inform policymaking, allow for info-sharing and help everyone to play nice together so we can avoid conflict.”

While more research on these lunar hot spots is needed to inform policy, the framework for possible solutions to potential crowding is already in view. “Examples of analogs on Earth point to mechanisms for managing these challenges. Common-pool resources on Earth, resources over which no single actor can claim jurisdiction or ownership, offer insights to glean. Some of these are global in scale, like the high seas, while others are local, like fish stocks or lakes to which several small communities share access,” said Krolikowski, adding that one of the first challenges for policymakers will be to characterize the resources at stake at each individual site. “Are these resources, say, areas of real estate at the high-value Peaks of Eternal Light, where the sun shines almost continuously, or are they units of energy to be generated from solar panels installed there? At what level can they realistically be exploited? How should the benefits from those activities be distributed? Developing agreement on those questions is a likely precondition to the successful coordination of activities at these uniquely attractive lunar sites.”

After more than a decade, ChIP-seq may be quantitative after all

For more than a decade, scientists studying epigenetics have used a powerful method called ChIP-seq to map changes in proteins and other critical regulatory factors across the genome. While ChIP-seq provides invaluable insights into the underpinnings of health and disease, it also faces a frustrating challenge: its results are often viewed as qualitative rather than quantitative, making interpretation difficult.

But, it turns out, ChIP-seq may have been quantitative all along, according to a recent report selected as an Editors’ Pick by the Journal of Biological Chemistry and featured on the journal’s cover.

“ChIP-seq is the backbone of epigenetics research. Our findings challenge the belief that additional steps are required to make it quantitative,” said Brad Dickson, Ph.D., a staff scientist at Van Andel Institute and the study’s corresponding author. “Our new approach provides a way to quantify results, thereby making ChIP-seq more precise, while leaving standard protocols untouched.”

Previous attempts to quantify ChIP-seq results have led to additional steps being added to the protocol, including the use of “spike-ins,” which are additives designed to normalize ChIP-seq results and reveal histone changes that otherwise may be obscured. These extra steps increase the complexity of experiments while also adding variables that could interfere with reproducibility. Importantly, the study also identifies a sensitivity issue in spike-in normalization that has not previously been discussed.

Using a predictive physical model, Dickson and his colleagues developed a novel approach called the sans-spike-in method for Quantitative ChIP-sequencing, or siQ-ChIP. It allows researchers to follow the standard ChIP-seq protocol, eliminating the need for spike-ins, and also outlines a set of common measurements that should be reported for all ChIP-seq experiments to ensure reproducibility as well as quantification.

By leveraging the binding reaction at the immunoprecipitation step, siQ-ChIP defines a physical scale for sequencing results that allows comparison between experiments. The quantitative scale is based on the binding isotherm of the immunoprecipitation products.
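To illustrate what a scale based on the binding isotherm can look like (this is a generic Langmuir isotherm, not the siQ-ChIP implementation), the fraction of target captured at the IP step can be written as a simple function of antibody concentration and affinity:

```python
# Generic Langmuir-type binding isotherm for the immunoprecipitation step
# (illustration only; this is not the siQ-ChIP code or its exact model).
def captured_fraction(antibody_nM: float, kd_nM: float) -> float:
    """Equilibrium fraction of epitope-bearing fragments captured by the antibody."""
    return antibody_nM / (antibody_nM + kd_nM)

# Hypothetical example: a 10 nM antibody pull-down against targets of varying affinity.
for kd in (1.0, 10.0, 100.0):
    print(f"Kd = {kd:6.1f} nM -> captured fraction = {captured_fraction(10.0, kd):.2f}")
```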

Story Source:

Materials provided by Van Andel Research Institute. Note: Content may be edited for style and length.

New insights into memristive devices by combining incipient ferroelectrics and graphene

Scientists are working on new materials to create neuromorphic computers, with a design based on the human brain. A crucial component is a memristive device, the resistance of which depends on the history of the device — just like the response of our neurons depends on previous input. Materials scientists from the University of Groningen analysed the behaviour of strontium titanium oxide, a platform material for memristor research, and used the 2D material graphene to probe it. On 11 November 2020, the results were published in the journal ACS Applied Materials and Interfaces.

Computers are giant calculators, full of switches that have a value of either 0 or 1. Using a great many of these binary systems, computers can perform calculations very rapidly. However, in other respects, computers are not very efficient. Our brain uses less energy for recognizing faces or performing other complex tasks than a standard microprocessor. That is because our brain is made up of neurons that can have many values other than 0 and 1 and because the neurons’ output depends on previous input.

Oxygen vacancies

To create memristors, switches with a memory of past events, strontium titanium oxide (STO) is often used. This material is a perovskite, whose crystal structure depends on temperature, and can become an incipient ferroelectric at low temperatures. The ferroelectric behaviour is lost above 105 Kelvin. The domains and domain walls that accompany these phase transitions are the subject of active research. Yet, it is still not entirely clear why the material behaves the way it does. ‘It is in a league of its own,’ says Tamalika Banerjee, Professor of Spintronics of Functional Materials at the Zernike Institute for Advanced Materials, University of Groningen.

The oxygen atoms in the crystal appear to be key to its behaviour. ‘Oxygen vacancies can move through the crystal and these defects are important,’ says Banerjee. ‘Furthermore, domain walls are present in the material and they move when a voltage is applied to it.’ Numerous studies have sought to find out how this happens, but looking inside this material is complicated. However, Banerjee’s team succeeded in using another material that is in a league of its own: graphene, the two-dimensional carbon sheet.

Conductivity

‘The properties of graphene are defined by its purity,’ says Banerjee, ‘whereas the properties of STO arise from imperfections in the crystal structure. We found that combining them leads to new insights and possibilities.’ Much of this work was carried out by Banerjee’s PhD student Si Chen. She placed graphene strips on top of a flake of STO and measured the conductivity at different temperatures by sweeping a gate voltage between positive and negative values. ‘When there is an excess of either electrons or the positive holes, created by the gate voltage, graphene becomes conductive,’ Chen explains. ‘But at the point where there are very small amounts of electrons and holes, the Dirac point, conductivity is limited.’

In normal circumstances, the minimum conductivity position does not change with the sweeping direction of the gate voltage. However, in the graphene strips on top of STO, there is a large separation between the minimum conductivity positions for the forward sweep and the backward sweep. The effect is very clear at 4 Kelvin, but less pronounced at 105 Kelvin or at 150 Kelvin. Analysis of the results, along with theoretical studies carried out at Uppsala University, shows that oxygen vacancies near the surface of the STO are responsible.
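A toy model (not the group’s analysis) helps picture this measurement: graphene’s conductivity is lowest at the Dirac point, and slowly redistributing trapped charge shifts that minimum to different gate voltages on the forward and backward sweeps, producing the hysteresis described above.

```python
# Toy model only, with made-up parameters: graphene sheet conductivity grows roughly
# linearly with |Vg - V_Dirac|, and trapped charge at the STO surface shifts the
# effective Dirac voltage between the forward and backward gate sweeps.
import numpy as np

def sheet_conductivity(vg, v_dirac, sigma_min=2e-4, slope=5e-6):
    """Sheet conductivity (S/sq) vs gate voltage; parameters are illustrative."""
    return sigma_min + slope * np.abs(vg - v_dirac)

vg = np.linspace(-80, 80, 9)
forward = sheet_conductivity(vg, v_dirac=-20.0)   # hypothetical shifted Dirac points
backward = sheet_conductivity(vg, v_dirac=+20.0)
for v, f, b in zip(vg, forward, backward):
    print(f"Vg = {v:+6.1f} V   forward {f:.2e} S/sq   backward {b:.2e} S/sq")
```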

Memory

Banerjee: ‘The phase transitions below 105 Kelvin stretch the crystal structure, creating dipoles. We show that oxygen vacancies accumulate at the domain walls and that these walls offer the channel for the movement of oxygen vacancies. These channels are responsible for memristive behaviour in STO.’ Accumulation of oxygen vacancy channels in the crystal structure of STO explains the shift in the position of the minimum conductivity.

Chen also carried out another experiment: ‘We kept the STO gate voltage at -80 V and measured the resistance in the graphene for almost half an hour. In this period, we observed a change in resistance, indicating a shift from hole to electron conductivity.’ This effect is primarily caused by the accumulation of oxygen vacancies at the STO surface.

All in all, the experiments show that the properties of the combined STO/graphene material change through the movement of both electrons and ions, each at different time scales. Banerjee: ‘By harvesting one or the other, we can use the different response times to create memristive effects, which can be compared to short-term or long-term memory effects.’ The study creates new insights into the behaviour of STO memristors. ‘And the combination with graphene opens up a new path to memristive heterostructures combining ferroelectric materials and 2D materials.’

Story Source:

Materials provided by University of Groningen. Note: Content may be edited for style and length.

Improving quantum dot interactions, one layer at a time

Osaka City University scientists and colleagues in Japan have found a way to control an interaction between quantum dots that could greatly improve charge transport, leading to more efficient solar cells. Their findings were published in the journal Nature Communications.

Nanomaterials engineer DaeGwi Kim led a team of scientists at Osaka City University, RIKEN Center for Emergent Matter Science and Kyoto University to investigate ways to control a property called quantum resonance in layered structures of quantum dots called superlattices.

“Our simple method for fine-tuning quantum resonance is an important contribution to both optical materials and nanoscale material processing,” says Kim.

Quantum dots are nanometer-sized semiconductor particles with interesting optical and electronic properties. When light is shone on them, for example, they emit strong light at room temperature, a property called photoluminescence. When quantum dots are close enough to each other, their electronic states are coupled, a phenomenon called quantum resonance. This greatly improves their ability to transport electrons between them. Scientists have been wanting to manufacture devices using this interaction, including solar cells, display technologies, and thermoelectric devices.

However, they have so far found it difficult to control the distances between quantum dots in 1D, 2D and 3D structures. Current fabrication processes use long ligands to hold quantum dots together, which hinders their interactions.

Kim and his colleagues found they could detect and control quantum resonance by using cadmium telluride quantum dots connected with short N-acetyl-L-cysteine ligands. They controlled the distance between quantum dot layers by placing a spacer layer between them made of oppositely charged polyelectrolytes. Quantum resonance is detected between stacked dots when the spacer layer is thinner than two nanometers. The scientists also controlled the distance between quantum dots in a single layer, and thus quantum resonance, by changing the concentration of quantum dots used in the layering process.
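The two-nanometre threshold reported above is consistent with an inter-dot coupling that decays roughly exponentially with spacer thickness; the following sketch uses made-up parameters purely for illustration.

```python
# Illustration only, with made-up parameters: electronic coupling between adjacent
# quantum dots typically falls off roughly exponentially with the spacer thickness,
# which is consistent with resonance appearing only for spacers thinner than ~2 nm.
import math

def interdot_coupling_meV(spacer_nm: float, j0_meV: float = 50.0, decay_nm: float = 0.5) -> float:
    """Coupling strength vs spacer thickness (exponential decay; values are hypothetical)."""
    return j0_meV * math.exp(-spacer_nm / decay_nm)

for d_nm in (0.5, 1.0, 2.0, 4.0):
    print(f"spacer {d_nm:.1f} nm -> coupling ~ {interdot_coupling_meV(d_nm):.2f} meV")
```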

The team next plans to study the optical properties, especially photoluminescence, of quantum dot superlattices made using their layer-by-layer approach. “This is extremely important for realizing new optical electronic devices made with quantum dot superlattices,” says Kim.

Kim adds that their fabrication method can be used with other types of water-soluble quantum dots and nanoparticles. “Combining different types of semiconductor quantum dots, or combining semiconductor quantum dots with other nanoparticles, will expand the possibilities of new material design,” says Kim.

Story Source:

Materials provided by Osaka City University. Note: Content may be edited for style and length.

Three reasons why COVID-19 can cause silent hypoxia

Scientists are still solving the many puzzling aspects of how the novel coronavirus attacks the lungs and other parts of the body. One of the biggest and most life-threatening mysteries is how the virus causes “silent hypoxia,” a condition when oxygen levels in the body are abnormally low, which can irreparably damage vital organs if gone undetected for too long. Now, thanks to computer models and comparisons with real patient data, Boston University biomedical engineers and collaborators from the University of Vermont have begun to crack the mystery.

Despite experiencing dangerously low levels of oxygen, many people with severe cases of COVID-19 show no symptoms of shortness of breath or difficulty breathing. Hypoxia’s ability to quietly inflict damage is why it has been dubbed “silent.” In coronavirus patients, it’s thought that the infection first damages the lungs, rendering parts of them incapable of functioning properly. Those tissues lose oxygen and stop working, no longer infusing the blood stream with oxygen, causing silent hypoxia. But exactly how that domino effect occurs has not been clear until now.

“We didn’t know [how this] was physiologically possible,” says Bela Suki, a BU College of Engineering professor of biomedical engineering and of materials science and engineering and one of the authors of the study. Some coronavirus patients have experienced what some experts have described as levels of blood oxygen that are “incompatible with life.” Disturbingly, Suki says, many of these patients showed little to no signs of abnormalities when they underwent lung scans.

To help get to the bottom of what causes silent hypoxia, BU biomedical engineers used computer modeling to test out three different scenarios that help explain how and why the lungs stop providing oxygen to the bloodstream. Their research, which has been published in Nature Communications, reveals that silent hypoxia is likely caused by a combination of biological mechanisms that may occur simultaneously in the lungs of COVID-19 patients, according to biomedical engineer Jacob Herrmann, a research postdoctoral associate in Suki’s lab and the lead author of the new study.

Normally, the lungs perform the life-sustaining duty of gas exchange, providing oxygen to every cell in the body as we breathe in and ridding us of carbon dioxide each time we exhale. Healthy lungs keep the blood oxygenated at a level between 95 and 100 percent — if it dips below 92 percent, it’s a cause for concern and a doctor might decide to intervene with supplemental oxygen. (Early in the coronavirus pandemic, when clinicians first started sounding the alarm about silent hypoxia, oximeters flew off store shelves as many people, worried that they or their family members might have to recover from milder cases of coronavirus at home, wanted to be able to monitor their blood oxygen levels.)

The researchers first looked at how COVID-19 impacts the lungs’ ability to regulate where blood is directed. Normally, if areas of the lung aren’t gathering much oxygen due to damage from infection, the blood vessels will constrict in those areas. This is actually a good thing that our lungs have evolved to do, because it forces blood to instead flow through lung tissue replete with oxygen, which is then circulated throughout the rest of the body.

But according to Herrmann, preliminary clinical data have suggested that the lungs of some COVID-19 patients had lost the ability to restrict blood flow to already damaged tissue and, on the contrary, were potentially opening up those blood vessels even more — something that is hard to see or measure on a CT scan.

Using a computational lung model, Herrmann, Suki, and their team tested that theory, revealing that for blood oxygen levels to drop to the levels observed in COVID-19 patients, blood flow would indeed have to be much higher than normal in areas of the lungs that can no longer gather oxygen — contributing to low levels of oxygen throughout the entire body, they say.
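A deliberately simplified two-compartment sketch (not the authors’ computational lung model) captures the gist: arterial oxygen saturation is roughly the perfusion-weighted average of blood leaving healthy regions and blood leaving damaged regions, so routing more flow through damaged tissue drags the overall value down.

```python
# Simplified two-compartment sketch (illustrative numbers, not the published model):
# arterial oxygen saturation approximated as the perfusion-weighted average of blood
# from well-ventilated regions and from damaged, poorly oxygenating regions.
def arterial_saturation(damaged_flow_fraction, sat_healthy=0.98, sat_damaged=0.75):
    """Perfusion-weighted oxygen saturation; all saturation values are illustrative."""
    return (1 - damaged_flow_fraction) * sat_healthy + damaged_flow_fraction * sat_damaged

for frac in (0.05, 0.2, 0.4, 0.6):
    print(f"{frac:.0%} of blood flow through damaged regions -> SaO2 ~ {arterial_saturation(frac):.0%}")
```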

Next, they looked at how blood clotting may impact blood flow in different regions of the lung. When the lining of blood vessels gets inflamed from COVID-19 infection, tiny blood clots too small to be seen on medical scans can form inside the lungs. They found, using computer modeling of the lungs, that this could incite silent hypoxia, but alone it is likely not enough to cause oxygen levels to drop as low as the levels seen in patient data.

Last, the researchers used their computer model to find out if COVID-19 interferes with the normal ratio of air-to-blood flow that the lungs need to function normally. This type of mismatched air-to-blood flow ratio is something that happens in many respiratory illnesses, such as with asthma patients, Suki says, and it can be a possible contributor to the severe, silent hypoxia that has been observed in COVID-19 patients. Their models suggest that for this to be a cause of silent hypoxia, the mismatch must be happening in parts of the lung that don’t appear injured or abnormal on lung scans.

Altogether, their findings suggest that a combination of all three factors are likely to be responsible for the severe cases of low oxygen in some COVID-19 patients. By having a better understanding of these underlying mechanisms, and how the combinations could vary from patient to patient, clinicians can make more informed choices about treating patients using measures like ventilation and supplemental oxygen. A number of interventions are currently being studied, including a low-tech intervention called prone positioning that flips patients over onto their stomachs, allowing for the back part of the lungs to pull in more oxygen and evening out the mismatched air-to-blood ratio.

“Different people respond to this virus so differently,” says Suki. For clinicians, he says it’s critical to understand all the possible reasons why a patient’s blood oxygen might be low, so that they can decide on the proper form of treatment, including medications that could help constrict blood vessels, bust blood clots, or correct a mismatched air-to-blood flow ratio.

Blue Ring Nebula: 16-year-old cosmic mystery solved, revealing stellar missing link

In 2004, scientists with NASA’s space-based Galaxy Evolution Explorer (GALEX) spotted an object unlike any they’d seen before in our Milky Way galaxy: a large, faint blob of gas with a star at its center. The blob doesn’t emit light visible to the human eye, but GALEX captured it in ultraviolet (UV) light, which is rendered as blue in the telescope’s images; subsequent observations also revealed a thick ring structure within it. So the team nicknamed it the Blue Ring Nebula. Over the next 16 years, they studied it with multiple Earth- and space-based telescopes, including W. M. Keck Observatory on Maunakea in Hawaii, but the more they learned, the more mysterious it seemed.

A new study published online on Nov. 18 in the journal Nature may have cracked the case. By applying cutting-edge theoretical models to the slew of data that has been collected on this object, the authors posit the nebula — a cloud of gas in space — is likely composed of debris from two stars that collided and merged into a single star.

While merged star systems are thought to be fairly common, they are nearly impossible to study immediately after they form because they’re obscured by debris kicked up by the collision. Once the debris has cleared — at least hundreds of thousands of years later — they’re challenging to identify because they resemble non-merged stars. The Blue Ring Nebula appears to be the missing link: astronomers are seeing the star system only a few thousand years after the merger, when evidence of the union is still plentiful. It appears to be the first known example of a merged star system at this stage.

Operated between 2003 and 2013 and managed by NASA’s Jet Propulsion Laboratory in Southern California, GALEX was designed to help study the history of star formation by observing young star populations in UV light. Most objects seen by GALEX radiated both near-UV (represented as yellow in GALEX images) and far-UV (represented as blue), but the Blue Ring Nebula stood out because it emitted only far-UV light.

The object’s size was similar to that of a supernova remnant, which forms when a massive star runs out of fuel and explodes, or a planetary nebula, the puffed-up remains of a star the size of our Sun. But the Blue Ring Nebula had a living star at its center. Furthermore, supernova remnants and planetary nebulas radiate in multiple light wavelengths outside the UV range, whereas the Blue Ring Nebula did not.

PHANTOM PLANET

In 2006, the GALEX team looked at the nebula with the 5.1-meter Hale telescope at the Palomar Observatory in San Diego County, California, and then with the even more powerful 10-meter Keck Observatory telescopes. They found evidence of a shockwave in the nebula using Keck Observatory’s Low Resolution Imaging Spectrometer (LRIS), suggesting the gas composing the Blue Ring Nebula had indeed been expelled by some kind of violent event around the central star.

“Keck’s LRIS spectra of the shock front were invaluable for nailing down how the Blue Ring Nebula came to be,” said Keri Hoadley, an astrophysicist at Caltech and lead author of the study. “Its velocity was too fast for a typical planetary nebula yet too slow for a supernova. This unusual, in-between speed gave us a strong clue that something else must have happened to create the nebula.”

Data from Keck Observatory’s High-Resolution Echelle Spectrometer (HIRES) also suggested the star was pulling a large amount of material onto its surface. But where was the material coming from?

“The HIRES observations at Keck gave us the first evidence that the system was accreting material,” said co-author Mark Seibert, an astrophysicist with the Carnegie Institution for Science and a member of the GALEX team at Caltech, which manages JPL. “For quite a long time we thought that maybe there was a planet several times the mass of Jupiter being torn apart by the star, and that was throwing all that gas out of the system. Though the HIRES data appeared to support this theory, it also told us to be wary of that interpretation, suggesting the accretion may have something to do with motions in the atmosphere of the central star.”

To gather more data, in 2012, the GALEX team used NASA’s Wide-field Infrared Survey Explorer (WISE), a space telescope that studied the sky in infrared light, and identified a disk of dust orbiting closely around the star. Archival data from three other infrared observatories also spotted the disk. The finding didn’t rule out the possibility that a planet was also orbiting the star, but eventually the team would show that the disk and the material expelled into space came from something larger than even a giant planet. Then in 2017, the Hobby-Eberly Telescope in Texas confirmed there was no compact object orbiting the star.

More than a decade after discovering the Blue Ring Nebula, the team had gathered data on the system from four space telescopes, four ground-based telescopes, historical observations of the star going back to 1895 (in order to look for changes in its brightness over time), and the help of citizen scientists through the American Association of Variable Star Observers (AAVSO). But an explanation for what had created the nebula still eluded them.

STELLAR SLEUTHING

When Hoadley began working with the GALEX science team in 2017, “the group had kind of hit a wall” with the Blue Ring Nebula, she said. But Hoadley was fascinated by the thus-far unexplainable object and its bizarre features, so she accepted the challenge of trying to solve the mystery. It seemed likely that the solution would not come from more observations of the system, but from cutting-edge theories that could make sense of the existing data. So Chris Martin, principal investigator for GALEX at Caltech, reached out to Brian Metzger of Columbia University for help.

As a theoretical astrophysicist, Metzger makes mathematical and computational models of cosmic phenomena, which can be used to predict how those phenomena will look and behave. He specializes in cosmic mergers — collisions between a variety of objects, whether they be planets and stars or two black holes.

“It wasn’t just that Brian could explain the data we were seeing; he was essentially predicting what we had observed before he saw it,” said Hoadley. “He’d say, ‘If this is a stellar merger, then you should see X,’ and it was like, ‘Yes! We see that!'”

The team concluded the nebula was the product of a relatively fresh stellar merger that likely occurred between a star similar to our Sun and another only about one tenth that size (or about 100 times the mass of Jupiter). Nearing the end of its life, the Sun-like star began to swell, creeping closer to its companion. Eventually, the smaller star fell into a downward spiral toward its larger companion. Along the way, the larger star tore the smaller star apart, wrapping itself in a ring of debris before swallowing the smaller star entirely.

This was the violent event that led to the formation of the Blue Ring Nebula. The merger launched a cloud of hot debris into space that was sliced in two by the gas disk. This created two cone-shaped debris clouds, their bases moving away from the star in opposite directions and getting wider as they travel outward. The base of one cone is coming almost directly toward Earth and the other almost directly away. They are too faint to see alone, but the area where the cones overlap (as seen from Earth) forms the central blue ring GALEX observed.

Millennia passed, and the expanding debris cloud cooled and formed molecules and dust, including hydrogen molecules that collided with the interstellar medium, the sparse collection of atoms and energetic particles that fill the space between stars. The collisions excited the hydrogen molecules, causing them to radiate in a specific wavelength of far-UV light. Over time, the glow became just bright enough for GALEX to see.

Stellar mergers may occur as often as once every 10 years in our Milky Way galaxy, meaning it’s possible that a sizeable population of the stars we see in the sky were once two.

“We see plenty of two-star systems that might merge someday, and we think we’ve identified stars that merged maybe millions of years ago. But we have almost no data on what happens in between,” said Metzger. “We think there are probably plenty of young remnants of stellar mergers in our galaxy, and the Blue Ring Nebula might show us what they look like so we can identify more of them.”

Though this is likely the conclusion of a 16-year-old mystery, it may also be the beginning of a new chapter in the study of stellar mergers.

“It’s amazing that GALEX was able to find this really faint object that we weren’t looking for but that turns out to be something really interesting to astronomers,” said Seibert. “It just reiterates that when you look at the universe in a new wavelength or in a new way, you find things you never imagined you would.”
