Categories
ScienceDaily

Sensing internal organ temperature with shining lights

A cheap, biocompatible white powder that luminesces when heated could be used for non-invasively monitoring the temperature of specific organs within the body. Tohoku University scientists conducted preliminary tests to demonstrate the applicability of this concept and published their findings in the journal Scientific Reports.

Thermometers measure temperature at the body's surface, but clinicians need to monitor and manage core body temperature in some critically ill patients, such as those recovering from head injuries or heart attacks. Until now, this has most often been done by inserting a tiny tube into the heart and blood vessels, so scientists are looking for less invasive means of monitoring temperature from within the body.

Applied physicist Takumi Fujiwara of Tohoku University and colleagues in Japan investigated the potential of a white powder called zirconia for this purpose.

Zirconia is a synthetic powder that is easily accessible, chemically stable, and non-toxic. When heated, its crystals become excited and release electrons, which then recombine with 'holes' in the crystal structure, a process that causes the crystals to emit light, or luminesce. Because of these advantageous properties for use in the human body, the scientists wanted to test whether zirconia's luminescence could be used to monitor temperature.

The team heated zirconia under an ultraviolet lamp, and found that as zirconia’s temperature rose, its luminescence intensified. The same thing happened when a near-infrared laser light was shone on the material. This demonstrated that both heat and light could be used to induce luminescence in zirconia.
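As an illustration of how such a readout could work, thermally enhanced luminescence is often modeled with an Arrhenius-type law, where intensity rises exponentially with temperature. The activation energy and intensity scale below are hypothetical placeholders, not values from the study; a real thermometry system would use an empirically measured calibration curve.

```python
import math

# Illustrative Arrhenius-type model: I(T) = I0 * exp(-E_a / (k_B * T)).
# E_A and I0 are assumed placeholder values, not data from the paper.
K_B = 8.617e-5   # Boltzmann constant, eV/K
E_A = 0.5        # assumed activation energy, eV
I0 = 1.0         # arbitrary intensity scale

def luminescence_intensity(temp_kelvin: float) -> float:
    """Relative luminescence intensity at a given temperature."""
    return I0 * math.exp(-E_A / (K_B * temp_kelvin))

def estimate_temperature(intensity: float) -> float:
    """Invert the model: recover temperature from a measured intensity."""
    return -E_A / (K_B * math.log(intensity / I0))
```

Because the model is monotonic, a calibrated intensity-temperature curve can be inverted to read out the organ's temperature from the measured glow.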

The scientists next showed that zirconia's luminescence was visible to the naked eye when the powder was placed behind a piece of bone and illuminated with a near-infrared laser.

Together, the demonstrations suggest that zirconia could be used to monitor internal body temperature: the powder would be injected into a targeted location, such as the brain, and illuminated with a near-infrared laser. The intensity and duration of the resulting luminescence would depend on the surrounding temperature.

“While this fundamental study leaves some important issues unresolved, this work is a novel and promising application of [synthetic luminescent substances] in the medical field,” the researchers conclude. Going forward, they hope to find a way to shift zirconia's luminescence into the red to near-infrared region, which transmits better through human tissue and would therefore allow clearer information to be obtained.

Story Source:

Materials provided by Tohoku University. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

Compact beam steering studies to revolutionize autonomous navigation, AR, neuroscience

While beam steering systems have been used for many years in applications such as imaging, display, and optical trapping, they require bulky mechanical mirrors and are overly sensitive to vibrations. Compact optical phased arrays (OPAs), which change the angle of an optical beam by changing the beam's phase profile, are a promising new technology for many emerging applications. These include ultra-small solid-state LiDAR on autonomous vehicles, much smaller and lighter AR/VR displays, light delivery to individual ion qubits in large-scale trapped-ion quantum computers, and optogenetics, an emerging research field that uses light and genetic engineering to study the brain.

Long-range, high-performance OPAs require a large beam-emission area densely packed with thousands of actively phase-controlled, power-hungry light-emitting elements. To date, such large-scale phased arrays for LiDAR have been impractical, since the technologies in current use would have to operate at untenable electrical power levels.

Researchers led by Columbia Engineering Professor Michal Lipson have developed a low-power beam steering platform that offers a non-mechanical, robust, and scalable approach to beam steering. The team is one of the first to demonstrate a low-power, large-scale optical phased array at near-infrared wavelengths, and the first to demonstrate on-chip optical phased array technology at blue wavelengths, for autonomous navigation and augmented reality, respectively. In collaboration with Adam Kepecs' group at Washington University in St. Louis, the team has also developed an implantable photonic chip based on an optical switch array at blue wavelengths for precise optogenetic neural stimulation. The research was recently published in three separate papers in Optica, Nature Biomedical Engineering, and Optics Letters.

“This new technology that enables our chip-based devices to point the beam anywhere we want opens the door wide for transforming a broad range of areas,” says Lipson, Eugene Higgins Professor of Electrical Engineering and Professor of Applied Physics. “These include, for instance, the ability to make LiDAR devices as small as a credit card for a self-driving car, or a neural probe that controls micron scale beams to stimulate neurons for optogenetics neuroscience research, or a light delivery method to each individual ion in a system for general quantum manipulations and readout.”

Lipson's team has designed a multi-pass platform that reduces the power consumption of an optical phase shifter while maintaining both its operation speed and broadband low loss, enabling scalable optical systems. They let the light signal recycle through the same phase shifter multiple times, so that the total power consumption is reduced by a factor equal to the number of passes. They demonstrated a silicon photonic phased array containing 512 actively controlled phase shifters and optical antennas, consuming very low power while performing 2D beam steering over a wide field of view. These results are a significant advance towards building scalable phased arrays containing thousands of active elements.
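As a back-of-envelope sketch of the recycling argument: in a thermo-optic phase shifter the accumulated phase is roughly proportional to heater power, so N passes through the same shifter accumulate N times the phase for the same power, cutting the power needed for a given phase shift by about a factor of N. The numbers below are hypothetical, not the paper's measured values.

```python
def power_for_pi_shift(p_pi_single_pass_mw: float, n_passes: int) -> float:
    """Heater power needed for a pi phase shift when light traverses the
    same thermo-optic phase shifter n_passes times. Phase accumulates
    linearly with passes, so the required power scales as 1/n_passes.
    (Illustrative first-order scaling only; the P_pi value passed in
    is a placeholder, not a measured device parameter.)"""
    return p_pi_single_pass_mw / n_passes

# e.g. a hypothetical 20 mW single-pass P_pi drops to 5 mW with four passes
```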

Phased array devices were initially developed at larger electromagnetic wavelengths. By applying different phases at each antenna, researchers can form a very directional beam by designing constructive interference in one direction and destructive in other directions. In order to steer or turn the beam’s direction, they can delay light in one emitter or shift a phase relative to another.
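The steering principle described above can be sketched numerically: in a uniform linear array, applying a progressive phase across the elements cancels the geometric path difference in one chosen direction, so the fields add constructively there and partially cancel elsewhere. This is a generic textbook model of a phased array, not the Columbia chip's specific design.

```python
import cmath
import math

def array_factor(n_elems: int, pitch_wavelengths: float,
                 steer_deg: float, theta_deg: float) -> float:
    """Normalised far-field array factor of a uniform linear phased array.
    Each element n gets a phase -n*k*d*sin(steer) that cancels the path
    difference n*k*d*sin(theta) in the steering direction, producing
    constructive interference there and a reduced response elsewhere."""
    k_d = 2 * math.pi * pitch_wavelengths          # k * d, d in wavelengths
    steer = math.radians(steer_deg)
    theta = math.radians(theta_deg)
    total = sum(
        cmath.exp(1j * n * k_d * (math.sin(theta) - math.sin(steer)))
        for n in range(n_elems)
    )
    return abs(total) / n_elems                    # 1.0 at the steered angle
```

With a half-wavelength pitch (`pitch_wavelengths=0.5`) the main lobe lands exactly at the steering angle with no ambiguous grating lobes; larger pitches introduce extra lobes, which is why sub-wavelength emitter spacing matters so much.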

Current visible light applications for OPAs have been limited by bulky table-top devices that have a limited field of view due to their large pixel width. Previous OPA research done at the near-infrared wavelength, including work from the Lipson Nanophotonics Group, faced fabrication and material challenges in doing similar work at the visible wavelength.

“As the wavelength becomes smaller, the light becomes more sensitive to small changes such as fabrication errors,” says Min Chul Shin, a PhD student in the Lipson group and co-lead author of the Optics Letter paper. “It also scatters more, resulting in higher loss if fabrication is not perfect — and fabrication can never be perfect.”

It was only three years ago that Lipson’s team showed a low-loss material platform by optimizing fabrication recipes with silicon nitride. They leveraged this platform to realize their new beam steering system in the visible wavelength — the first chip-scale phased array operating at blue wavelengths using a silicon nitride platform.

A major challenge for the researchers was working in the blue range, which has the shortest wavelength in the visible spectrum and scatters more than other colors because it travels as shorter, smaller waves. Another hurdle in demonstrating a phased array in blue was that achieving a wide steering angle requires emitters spaced half a wavelength apart, or at the very least closer than a wavelength — 40 nm spacing, 2,500 times smaller than a human hair — which was very difficult to achieve. In addition, making an optical phased array useful for practical applications requires many emitters, and scaling the system up to that size would be extremely difficult.

“Not only is this fabrication really hard, but there would also be a lot of optical crosstalk with waveguides that close together,” says Shin. “We can't have independent phase control, plus we'd see all the light coupled to each other, not forming a directional beam.”

Solving these issues for blue meant that the team could easily do this for red and green, which have longer wavelengths. “This wavelength range enables us to address new applications such as optogenetic neural stimulation,” notes Aseema Mohanty, a postdoctoral research scientist and co-lead author of the Optics Letter and Nature Biomedical Engineering papers. “We used the same chip-scale technology to control an array of micron-scale beams to precisely probe neurons within the brain.”

The team is now collaborating with Applied Physics Professor Nanfang Yu’s group to optimize the electrical power consumption because low-power operation is crucial for lightweight head-mounted AR displays and optogenetics.

“We are very excited because we’ve basically designed a reconfigurable lens on a tiny chip on which we can steer the visible beam and change focus,” explains Lipson. “We have an aperture where we can synthesize any visible pattern we want every few tens of microseconds. This requires no moving parts and could be achieved at chip-scale. Our new approach means that we’ll be able to revolutionize augmented reality, optogenetics and many more technologies of the future.”

Categories
ScienceDaily

Shedding light on optimal materials for harvesting sunlight underwater

There may be many overlooked organic and inorganic materials that could be used to harness sunlight underwater and efficiently power autonomous submersible vehicles, report researchers at New York University. Their research, published March 18 in the journal Joule, develops guidelines for optimal band gap values at a range of watery depths, demonstrating that various wide-band gap semiconductors, rather than the narrow-band gap semiconductors used in traditional silicon solar cells, are best equipped for underwater use.

“So far, the general trend has been to use traditional silicon cells, which we show are far from ideal once you go to a significant depth since silicon absorbs a large amount of red and infrared light, which is also absorbed by water — especially at large depths,” says Jason A. Röhr, a postdoctoral research associate in Prof. André D. Taylor’s Transformative Materials and Devices laboratory at the Tandon School of Engineering at New York University and an author on the study. “With our guidelines, more optimal materials can be developed.”

Underwater vehicles, such as those used to explore the abyssal ocean, are currently limited by onshore power or inefficient on-board batteries, preventing travel over longer distances and periods of time. But while solar cell technology that has already taken off on land and in outer space could give these submersibles more freedom to roam, the watery world presents unique challenges. Water scatters and absorbs much of the visible light spectrum, soaking up red solar wavelengths even at shallow depths before silicon-based solar cells would have a chance to capture them.

Most previous attempts to develop underwater solar cells have been constructed from silicon or amorphous silicon, which each have narrow band gaps best suited for absorbing light on land. However, blue and yellow light manages to penetrate deep into the water column even as other wavelengths diminish, suggesting semiconductors with wider band gaps not found in traditional solar cells may provide a better fit for supplying energy underwater.
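The wavelength-dependent absorption described above follows the Beer-Lambert law. A quick sketch with rough absorption coefficients for clear water (approximate textbook magnitudes, not the study's measured values) shows why red light is lost first while blue-green light survives to depth.

```python
import math

# Approximate clear-water absorption coefficients (1/m); rough
# illustrative magnitudes, not values taken from the study.
ALPHA = {"blue_450nm": 0.01, "green_550nm": 0.06, "red_650nm": 0.34}

def transmitted_fraction(color: str, depth_m: float) -> float:
    """Beer-Lambert law: fraction of surface irradiance surviving to a
    given depth, I(z)/I(0) = exp(-alpha * z)."""
    return math.exp(-ALPHA[color] * depth_m)
```

With these coefficients, at 50 m depth blue light retains about exp(-0.5) ≈ 61% of its surface intensity while red light is attenuated to essentially zero, which is why a silicon cell tuned to red and infrared light has little to harvest down there.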

To better understand the potential of underwater solar cells, Röhr and colleagues assessed bodies of water ranging from the clearest regions of the Atlantic and Pacific oceans to a turbid Finnish lake, using a detailed-balance model to measure the efficiency limits for solar cells at each location. Solar cells were shown to harvest energy from the sun down to depths of 50 meters in Earth’s clearest bodies of water, with chilly waters further boosting the cells’ efficiency.

The researchers’ calculations revealed that solar cell absorbers would function best with an optimum band gap of about 1.8 electronvolts at a depth of two meters and about 2.4 electronvolts at a depth of 50 meters. These values remained consistent across all water sources studied, suggesting the solar cells could be tailored to specific operating depths rather than water locations.
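The reported band gaps can be related to absorption cutoff wavelengths via E = hc/λ, using the standard value hc ≈ 1239.84 eV·nm. The one-line conversion below shows that the optimal band gaps correspond to cutoffs in the red-to-green range, consistent with the blue-green light that penetrates deepest.

```python
PLANCK_EV_NM = 1239.84  # h * c expressed in eV·nm

def bandgap_to_cutoff_nm(e_gap_ev: float) -> float:
    """Longest wavelength (nm) that a semiconductor with the given
    band gap (eV) can absorb: lambda = hc / E_gap."""
    return PLANCK_EV_NM / e_gap_ev
```

A 1.8 eV gap gives a cutoff near 689 nm (deep red), suitable at 2 m depth, while the 2.4 eV optimum at 50 m corresponds to roughly 517 nm, matching the green-blue portion of the spectrum that survives at depth.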

Röhr notes that cheaply produced solar cells made from organic materials, which are known to perform well under low-light conditions, as well as alloys made with elements from groups three and five on the periodic table could be ideal in deep waters. And while the substance of the semiconductors would differ from solar cells used on land, the overall design would remain relatively similar.

“While the sun-harvesting materials would have to change, the general design would not necessarily have to change all that much,” says Röhr. “Traditional silicon solar panels, like the ones you can find on your roof, are encapsulated to prohibit damage from the environment. Studies have shown that these panels can be immersed and operated in water for months without sustaining significant damage to the panels. Similar encapsulation methods could be employed for new solar panels made from optimal materials.” Now that they have uncovered what makes effective underwater solar cells tick, the researchers plan to begin developing optimal materials.

“This is where the fun begins!” says Röhr. “We have already investigated unencapsulated organic solar cells which are highly stable in water, but we still need to show that these cells can be made more efficient than traditional cells. Given how capable our colleagues around the world are, we are sure that we will see these new and exciting solar cells on the market in the near future.”

Story Source:

Materials provided by Cell Press. Note: Content may be edited for style and length.

Categories
ScienceDaily

Drones could still be a threat to public safety — New research improves drone detection

Unmanned aerial vehicles (UAVs), commonly known as drones, are widely used in mapping, aerial photography, rescue operations, shipping, law enforcement, and agriculture, among other fields. Despite their great potential for improving public safety, drones can also lead to very undesirable situations, such as privacy and safety violations or property damage. There is also the highly concerning prospect of drones being harnessed to carry out terrorist attacks, which poses a threat to public safety and national security.

Radar technology is one solution for monitoring the presence of drones and preventing possible threats, but drones can be challenging to detect due to their varying sizes, shapes, and composite materials.

Researchers from Aalto University (Finland), UCLouvain (Belgium), and New York University (USA) have gathered extensive radar measurement data, aiming to improve the detection and identification of drones. Researchers measured various commercially available and custom-built drone models’ Radar Cross Section (RCS), which indicates how the target reflects radio signals. The RCS signature can help to identify the size, shape and the material of the drone.

‘We measured drones’ RCS at multiple millimetre-wave frequencies between 26 and 40 GHz to better understand how drones can be detected, and to investigate the differences between drone models and materials in terms of scattering radio signals. We believe that our results will be a starting point for a future uniform drone database. Therefore, all results are publicly available along with our research paper,’ says D.Sc. Vasilii Semkin, one of the authors.
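Detection range depends on RCS through the standard monostatic radar equation. The minimal sketch below (with hypothetical system parameters, not values from the measurement campaign) shows the scaling: received power grows linearly with RCS but falls as the fourth power of range, so for a fixed detection threshold the maximum range scales only as sigma^(1/4).

```python
import math

def received_power_w(p_t_w: float, gain: float, wavelength_m: float,
                     rcs_m2: float, range_m: float) -> float:
    """Monostatic radar equation:
        P_r = P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4)
    All arguments used for testing below are hypothetical."""
    return (p_t_w * gain ** 2 * wavelength_m ** 2 * rcs_m2) / (
        (4 * math.pi) ** 3 * range_m ** 4
    )
```

One consequence of the R^4 law: a drone whose RCS is ten times smaller can be detected at only about 10^(1/4) ≈ 1.8 times shorter range, which is why accurate RCS signatures matter for predicting radar coverage.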

The publicly accessible measurement data can be used in the development of radar systems, as well as in machine-learning algorithms for more complex identification. This would increase the probability of detecting drones and reduce false detections.

‘There is an urgent need to find better ways to monitor drone use. We aim to continue this work and extend the measurement campaign to other frequency bands, as well as for a larger variety of drones and different real-life environments,’ Vasilii Semkin says.

The researchers suggest that, in the future, 5G base stations could also be used for drone surveillance.

‘We are developing millimetre-wave wireless communication technology, which could also be used to sense the environment like a radar. With this technology, 5G base stations could detect drones, among other things,’ says Professor Ville Viikari from Aalto University.

Story Source:

Materials provided by Aalto University. Note: Content may be edited for style and length.

Categories
ProgrammableWeb

Health Gorilla API Now Enables Electronic Ordering for COVID-19 Testing

Health Gorilla, a provider of clinical data interoperability, announced that its API can now be used to place COVID-19 test orders and receive test results from a number of diagnostic vendors. The lab network API is used by thousands of providers and health organizations worldwide to enable electronic lab ordering and results.

The Health Gorilla Diagnostic Network API allows providers to submit orders for laboratory or radiology tests and receive the results electronically from vendors such as Labcorp, Quest Diagnostics, and BioReference.

“In order to control the spread of COVID-19, we urgently need to increase access to testing,” said Steve Yaskin, Co-founder and CEO of Health Gorilla. “Starting today, we’re making COVID-19 test ordering available immediately via API to all of our developer and EMR clients, as well as through our web application for any physician user.”

Orders placed using the API are free and Health Gorilla plans to expand electronic COVID-19 test ordering to other national and regional labs in the coming weeks. Interested developers can learn more about the API at the Health Gorilla API reference page.
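For illustration, an electronic lab-order body might be assembled as below. The field names and structure here are hypothetical placeholders invented for this sketch, not Health Gorilla's actual schema, which is defined on its API reference page; 94500-6 is a real LOINC code commonly used for SARS-CoV-2 RNA testing, included only as an example.

```python
import json

def build_covid19_test_order(patient_id: str, provider_npi: str,
                             lab: str) -> str:
    """Assemble an illustrative JSON body for an electronic COVID-19
    lab order. Field names are hypothetical placeholders; consult the
    Health Gorilla API reference for the real request schema."""
    order = {
        "patient": {"id": patient_id},
        "provider": {"npi": provider_npi},
        "performingLab": lab,
        # LOINC 94500-6: SARS-CoV-2 RNA detection (example test code)
        "tests": [{"code": "94500-6", "name": "SARS-CoV-2 RNA panel"}],
    }
    return json.dumps(order)
```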

Author: wsantos

Categories
3D Printing Industry

Sigma Labs signs new contracts with Northwestern University, Materialise

Two pieces of news show how Sigma Labs’ PrintRite3D software will be used by Materialise and Northwestern University. Santa Fe-based Sigma Labs has signed a contract to implement its quality assurance software for metal additive manufacturing at Northwestern University for the first time. Furthermore, Sigma Labs has signed a joint sales agreement with Belgium-headquartered Materialise, advancing an earlier MoU to develop an integrated in-situ […]

Author: Kubi Sertoglu

Categories
3D Printing Industry

Copper3D develops an HIV-inactivating 3D printed device for breastfeeding

Antimicrobial material developer Copper3D has developed a new 3D printed device to be used during breastfeeding. The device, made of Copper3D’s trademarked PLACTIVE material, aims to act as an interface between mother and child during breastfeeding, inactivating HIV found in breastmilk. The novel material uses nano-copper-based additives to give standard PLA antimicrobial properties. Previous […]

Author: Kubi Sertoglu

Categories
ScienceDaily

Psychiatry: Five clearly defined patterns

Psychiatrists led by Nikolaos Koutsouleris from Ludwig-Maximilians-Universitaet (LMU) in Munich have used a computer-based approach to assign psychotic patients diagnosed as bipolar or schizophrenic to five different subgroups. The method could lead to better therapies for psychoses.

Diagnostic methods capable of discriminating between the various types of psychoses recognized by psychiatrists remain inadequate. Up to now, doctors have assigned psychotic patients to one of two broad classes — bipolar disorder or schizophrenia — essentially on a symptomatic basis, focusing on shared elements of their psychiatric history, the range of symptoms displayed and the overall pattern of disease progression. This categorization remains a fundamental feature of both clinical practice and psychiatric research, although detailed observations indicate that psychotic illnesses, and the underlying genetic risk factors, are more heterogeneous than the conventional diagnostic dichotomy suggests. Now researchers led by LMU psychiatrist Nikolaos Koutsouleris have carried out a longitudinal cohort study on a sample of 1223 patients over a period of 18 months. The results enabled the team to divide patients into five well-defined subgroups, providing a more differentiated picture of the pathology of psychoses, with implications for therapeutic interventions.

Data from 756 of the 1223 patients enrolled in the study were used to establish the new classification scheme, which was then independently validated for the remaining subset of participants. All 1223 patients had been diagnosed with classical psychoses, based on assessment of a total of 188 clinical variables relating to the trajectory of the individual’s condition, symptoms, ability to cope with the challenges of everyday life (‘functioning’), and cognitive performance. The study set out to determine whether their high-dimensional clinical dataset covering a wide spectrum of psychoses could be decomposed into defined subgroups based on clustering of statistically correlated variables. The data-driven analytical strategy adopted is based on machine learning, which can discover patterns that reveal ‘hidden structure’ in large collections of multifactorial data. These patterns may in turn point to differences in causal relationships that are of diagnostic relevance. “Our study shows that computer-based analyses can indeed help us to re-evaluate how persons with proven symptoms of psychoses can be differentiated diagnostically,” says LMU psychologist Dominic Dwyer, first author of the study, which appears in the journal JAMA Psychiatry.

The analysis ultimately led to the recognition of five clearly defined subgroups among the experimental population. “In addition to differences in their symptomatic and functional course, patients assigned to the different subgroups could also be distinguished on the basis of defined clinical fingerprints,” says Nikolaos Koutsouleris, who led the study. Members of one of the subgroups were also differentiated from all the others on the basis of their low scores for educational attainment, which is known to be a potential risk factor for psychotic illness.

The researchers used a mathematical approach known as non-negative matrix factorization to detect patterns in their statistical data. Using this procedure, they were able to reduce the starting dataset, comprising 188 variables, to a small set of core factors that define five subgroups. These factors encode hitherto unrecognized relationships between variables and uncover the functional links that connect them. “By evaluating the relative significance of these factors in individual cases, it is possible to assign patients to different groups on the basis of their overall scores,” Dwyer explains. In this way, the authors of the study were able to define the following five subgroups of psychoses: affective psychosis, suicidal psychosis, depressive psychosis, high-functioning psychosis and severe psychosis.
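As a sketch of the technique, non-negative matrix factorization can be implemented with the classic Lee-Seung multiplicative updates: a non-negative data matrix X is approximated by the product of two smaller non-negative matrices, with one holding the "core factors" and the other each individual's scores on them. The toy dimensions and random data below are illustrative only, not the study's clinical dataset.

```python
import numpy as np

def nmf(X, k, iters=200, seed=0):
    """Minimal non-negative matrix factorization, X ≈ W @ H with
    W, H >= 0, via Lee-Seung multiplicative updates. Rows of X are
    individuals, columns are variables; the k columns of W score each
    individual against k latent 'core factors' encoded in H."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)   # update factor loadings
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)   # update individual scores
    return W, H

# An individual's subgroup is the factor with the highest score:
# labels = W.argmax(axis=1)
```

The multiplicative updates preserve non-negativity at every step, which is what makes the recovered factors interpretable as additive clinical fingerprints rather than signed components.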

“Each of these subgroups can be clearly delimited from all the others on the basis of the clinical data,” says Koutsouleris. For instance, patients assigned to Group 5 are characterized by the core factors of a schizophrenia diagnosis, significantly lower levels of educational attainment and low verbal intelligence. Most of the patients in this category were male and displayed marked symptoms of psychosis, but no indications of depression or mania. In Group 2, on the other hand, suicidal tendencies were clearly present. The classification results for this experimental population, which provided the underlying data for the construction of the statistical model, were confirmed in an independent group of 458 subjects.

The analyses suggested that unbiased, data-driven clustering may be used to stratify individuals into groups that have different clinical signatures, illness trajectories, and genetic underpinnings. In the future, such computer-assisted categorisations may be integrated into clinical routine through online tools. Koutsouleris and his team have developed a prototype of such a tool, which can be used to stratify new individuals into the same groups and predict outcomes; it can be tested at http://www.proniapredictors.eu.

Story Source:

Materials provided by Ludwig-Maximilians-Universität München. Note: Content may be edited for style and length.

Categories
ScienceDaily

How newborn stars prepare for the birth of planets

An international team of astronomers used two of the most powerful radio telescopes in the world to create more than three hundred images of planet-forming disks around very young stars in the Orion Clouds. These images reveal new details about the birthplaces of planets and the earliest stages of star formation.

Most of the stars in the universe are accompanied by planets. These planets are born in rings of dust and gas, called protoplanetary disks. Even very young stars are surrounded by these disks. Astronomers want to know exactly when these disks start to form, and what they look like. But young stars are very faint, and there are dense clouds of dust and gas surrounding them in stellar nurseries. Only highly sensitive radio telescope arrays can spot the tiny disks around these infant stars amidst the densely packed material in these clouds.

For this new research, astronomers pointed both the National Science Foundation’s Karl G. Jansky Very Large Array (VLA) and the Atacama Large Millimeter/submillimeter Array (ALMA) to a region in space where many stars are born: the Orion Molecular Clouds. This survey, called VLA/ALMA Nascent Disk and Multiplicity (VANDAM), is the largest survey of young stars and their disks to date.

Very young stars, also called protostars, form in clouds of gas and dust in space. The first step in the formation of a star is when these dense clouds collapse due to gravity. As the cloud collapses, it begins to spin — forming a flattened disk around the protostar. Material from the disk continues to feed the star and make it grow. Eventually, the left-over material in the disk is expected to form planets.

Many aspects of these first stages of star formation, and of how the disk forms, are still unclear. But this new survey provides some missing clues, as the VLA and ALMA peered through the dense clouds and observed hundreds of protostars and their disks in various stages of formation.

Young planet-forming disks

“This survey revealed the average mass and size of these very young protoplanetary disks,” said John Tobin of the National Radio Astronomy Observatory (NRAO) in Charlottesville, Virginia, and leader of the survey team. “We can now compare them to older disks that have been studied intensively with ALMA as well.”

What Tobin and his team found is that very young disks can be similar in size to older disks, but are on average much more massive. “When a star grows, it eats away more and more material from the disk. This means that younger disks have a lot more raw material from which planets could form. Possibly bigger planets already start to form around very young stars.”

Four special protostars

Among hundreds of survey images, four protostars looked different than the rest and caught the scientists’ attention. “These newborn stars looked very irregular and blobby,” said team member Nicole Karnath of the University of Toledo, Ohio (now at SOFIA Science Center). “We think that they are in one of the earliest stages of star formation and some may not even have formed into protostars yet.”

Finding four of these objects in a single survey is unusual. “We rarely find more than one such irregular object in one observation,” added Karnath, who used these four infant stars to propose a schematic pathway for the earliest stages of star formation. “We are not entirely sure how old they are, but they are probably younger than ten thousand years.”

To be defined as a typical (class 0) protostar, a star should not only have a flattened rotating disk surrounding it, but also an outflow — spewing away material in opposite directions — that clears the dense cloud surrounding the star and makes it optically visible. This outflow is important because it prevents the star from spinning out of control while it grows. But exactly when these outflows start is an open question in astronomy.

One of the infant stars in this study, called HOPS 404, has an outflow of only two kilometers (1.2 miles) per second, compared with a typical protostar outflow of 10-100 km/s (6-62 miles/s). “It is a big puffy sun that is still gathering a lot of mass, but just started its outflow to lose angular momentum so it can keep growing,” explained Karnath. “This is one of the smallest outflows that we have seen, and it supports our theory of what the first step in forming a protostar looks like.”

Combining ALMA and VLA

The exquisite resolution and sensitivity provided by both ALMA and the VLA were crucial to understand both the outer and inner regions of protostars and their disks in this survey. While ALMA can examine the dense dusty material around protostars in great detail, the images from the VLA made at longer wavelengths were essential to understand the inner structures of the youngest protostars at scales smaller than our solar system.

“The combined use of ALMA and the VLA has given us the best of both worlds,” said Tobin. “Thanks to these telescopes, we start to understand how planet formation begins.”

The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.

Categories
ScienceDaily

Water-conducting membrane allows carbon dioxide to transform into fuel more efficiently

Methanol is a versatile and efficient chemical used as a fuel and in the production of countless products. Carbon dioxide (CO2), on the other hand, is a greenhouse gas that is the unwanted byproduct of many industrial processes.

Converting CO2 to methanol is one way to put CO2 to good use. In research published today in Science, chemical engineers from Rensselaer Polytechnic Institute demonstrated how to make that conversion process from CO2 to methanol more efficient by using a highly effective separation membrane they produced. This breakthrough, the researchers said, could improve a number of industry processes that depend on chemical reactions where water is a byproduct.

For example, the chemical reaction responsible for the transformation of CO2 into methanol also produces water, which severely restricts the continued reaction. The Rensselaer team set out to find a way to filter out the water as the reaction is happening, without losing other essential gas molecules.

The researchers assembled a membrane, made up of sodium ions and zeolite crystals, that was able to selectively and quickly pass water through small pores — known as water-conduction nanochannels — without losing gas molecules.

“The sodium can actually regulate, or tune, gas permeation,” said Miao Yu, an endowed chair professor of chemical and biological engineering and a member of the Center for Biotechnology and Interdisciplinary Studies (CBIS) at Rensselaer, who led this research. “It’s like the sodium ions are standing at the gate and only allow water to go through. When the inert gas comes in, the ions will block the gas.”

In the past, Yu said, this type of membrane was susceptible to defects that would allow other gas molecules to leak out. His team developed a new strategy to optimize the assembly of the crystals, which eliminated those defects.

When water was effectively removed from the process, Yu said, the team found that the chemical reaction was able to happen very quickly.

“When we can remove the water, the equilibrium shifts, which means more CO2 will be converted and more methanol will be produced,” said Huazheng Li, a postdoctoral researcher at Rensselaer and first author on the paper.
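The effect Li describes is Le Chatelier's principle at work, and a toy equilibrium calculation for CO2 + 3 H2 ⇌ CH3OH + H2O illustrates it. The equilibrium constant and residual-water fraction below are arbitrary illustrative values, and the ideal-mixture, stoichiometric-feed model is a deliberate simplification, not the Rensselaer team's process model.

```python
def _bisect(f, lo, hi, iters=100):
    """Root of an increasing function f on (lo, hi) by bisection."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def equilibrium_conversion(k_eq: float, water_removed: bool,
                           residual_water: float = 0.01) -> float:
    """Equilibrium CO2 conversion x for CO2 + 3 H2 <=> CH3OH + H2O,
    with a feed of 1 mol CO2 and 3 mol H2 and a lumped dimensionless
    k_eq (illustrative value). If a membrane removes water, its mole
    fraction is pinned at a small residual, pushing the reaction
    toward more methanol (Le Chatelier's principle)."""
    def q_minus_k(x):
        if water_removed:
            total = 4 - 3 * x          # H2O leaves through the membrane
            y_meoh = x / total
            y_co2, y_h2 = (1 - x) / total, (3 - 3 * x) / total
            q = y_meoh * residual_water / (y_co2 * y_h2 ** 3)
        else:
            total = 4 - 2 * x          # H2O stays in the reactor
            y_meoh = y_h2o = x / total
            y_co2, y_h2 = (1 - x) / total, (3 - 3 * x) / total
            q = y_meoh * y_h2o / (y_co2 * y_h2 ** 3)
        return q - k_eq                # increasing in x: bisect for the root
    return _bisect(q_minus_k, 1e-9, 1 - 1e-9)
```

Running the model with any fixed k_eq, the conversion with water removal comes out substantially higher than without it, mirroring the team's observation that draining the water byproduct lets more CO2 become methanol.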

“This research is a prime example of the significant contributions Professor Yu and his team are making to address interdisciplinary challenges in the area of water, energy, and the environment,” said Deepak Vashishth, director of CBIS. “Development and deployment of such tailored membranes by Professor Yu’s group promise to be highly effective and practical.”

The team is now working to develop a scalable process and a startup company that would allow this membrane to be used commercially to produce high purity methanol.

Yu said this membrane could also be used to improve a number of other reactions.

“In industry there are so many reactions limited by water,” Yu said. “This is the only membrane that can work highly efficiently under the harsh reaction conditions.”

Story Source:

Materials provided by Rensselaer Polytechnic Institute. Original written by Torie Wells. Note: Content may be edited for style and length.
