Categories
ProgrammableWeb

Google Cloud’s Enhanced Interactive Docs Earn Editor’s Choice Award for DX

There are many facets that go into a leading API portal, and ProgrammableWeb has created a series of articles to help you understand the best practices used by real-world API providers. The series kicked off with a comprehensive checklist of the criteria needed to build a world-class API developer portal. Subsequent Editor’s Choice articles, including this one, take a more in-depth look at how individual providers have executed on those criteria.

Google recently announced a feature update to its Google Cloud Storage documentation: placeholder variables within the code samples. The new feature lets a developer replace a placeholder variable in a code sample with a custom value of their own.

Figure 1 Placeholder variables allow you to use custom parameters in the code sample

This feature allows a developer to use parameters specific to their own instance, giving them a better understanding of how well the API will meet their needs. It takes the idea of interactive samples one step further by allowing for nearly bespoke code samples.

Another neat feature can be seen when a documentation page has multiple code samples that each have the same placeholder variable.

Figure 2 Changing the placeholder variable once will change the variable for all instances across multiple samples on a page

As you can see in the image above, if you change the variable in one place, it is replaced anywhere else it appears on the page, including within other code samples. Not only is this a time saver, it also ensures consistency for developers when they run the samples.
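
To picture the mechanics, here is a minimal sketch of page-wide placeholder substitution. It is an illustration only, not Google’s actual implementation; the BUCKET_NAME placeholder and the gsutil commands are simply examples of what a documentation page might contain.

```python
# Hypothetical sketch of page-wide placeholder substitution (not Google's code).
SAMPLES = [
    "gsutil mb gs://BUCKET_NAME",
    "gsutil cp ./photo.png gs://BUCKET_NAME/photo.png",
    "gsutil ls gs://BUCKET_NAME",
]

def render_samples(samples, placeholders):
    """Replace every placeholder token in every code sample on the 'page'."""
    rendered = []
    for sample in samples:
        for token, value in placeholders.items():
            sample = sample.replace(token, value)
        rendered.append(sample)
    return rendered

# Setting the variable once updates all three samples at the same time.
for line in render_samples(SAMPLES, {"BUCKET_NAME": "my-project-logs"}):
    print(line)
```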

Google’s use of placeholder variables is a step forward that makes its documentation more relevant and easier to understand for anyone new to its APIs. For this reason, Google has earned a ProgrammableWeb Editor’s Choice award for Developer Experience.

Go to Source
Author: wsantos

Categories
ScienceDaily

Modern theory from ancient impacts

Around 4 billion years ago, the solar system was far less hospitable than we find it now. Many of the large bodies we know and love were present, but probably looked considerably different, especially the Earth. We know from a range of sources, including ancient meteorites and planetary geology, that around this time there were vastly more collisions between, and impacts from, asteroids originating in the Mars-Jupiter asteroid belt.

Knowledge of these events is especially important to us, as the time period in question is not only when the surface of our planet was taking on a more recognizable form, but also when life was just getting started. More accurate details of Earth’s rocky history could help researchers answer some long-standing questions concerning the mechanisms responsible for life, as well as provide information for other areas of life science.

“Meteorites provide us with the earliest history of ourselves,” said Professor Yuji Sano from the Atmosphere and Ocean Research Institute at the University of Tokyo. “This is what fascinated me about them. By studying properties, such as radioactive decay products, of meteorites that fell to Earth, we can deduce when they came and where they came from. For this study we examined meteorites that came from Vesta, the second-largest asteroid after the dwarf planet Ceres.”
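
The dating Sano describes rests on standard radiometric relationships. As a rough illustration, the age of a sample follows from the measured daughter-to-parent isotope ratio and the decay constant; the isotope system and numbers below are hypothetical, not the ones used in the study.

```python
import math

# Generic radiometric age estimate: t = (1 / lambda) * ln(1 + D/P), where D/P is
# the measured daughter-to-parent isotope ratio and lambda = ln(2) / half-life.
# The numbers below are illustrative only, not data from the Vesta meteorites.
half_life_yr = 1.25e9                      # hypothetical half-life (order of K-40)
decay_const = math.log(2) / half_life_yr

daughter_to_parent = 10.0                  # hypothetical measured ratio
age_yr = math.log(1 + daughter_to_parent) / decay_const

print(f"Estimated age: {age_yr / 1e9:.2f} billion years")   # ~4.3 billion years
```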

Sano and his team found evidence that Vesta was hit by multiple impacting bodies around 4.4 billion to 4.15 billion years ago. This is earlier than 3.9 billion years ago, which is when the late heavy bombardment (LHB) is thought to have occurred. Current evidence for the LHB comes from lunar rocks collected during the Apollo moon missions of the 1970s, as well as other sources. But these new studies are improving upon previous models and will pave the way for an up-to-date database of early solar impact records.

“That Vesta-origin meteorites clearly show us impacts earlier than the LHB raises the question, ‘Did the late heavy bombardment truly occur?’” said Sano. “It seems to us that early solar system impacts peaked sooner than the LHB and reduced smoothly with time. It may not have been the cataclysmic period of chaos that current models describe.”

Story Source:

Materials provided by University of Tokyo. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

Bioactive nano-capsules to hijack cell behavior

Many diseases are caused by defects in the signaling pathways of body cells. In the future, bioactive nanocapsules could become a valuable tool for medicine to control these pathways. Researchers from the University of Basel have taken an important step in this direction: they have succeeded in getting several different nanocapsules to work in tandem to amplify a natural signaling cascade and influence cell behavior.

Cells constantly communicate with each other and have ways to pick up signals and process them — similar to humans who need ears to hear sounds and knowledge of language to process their meaning. Controlling the cell’s own signaling pathways is of great interest for medicine in order to treat various diseases.

A research team from the Department of Chemistry at the University of Basel and the NCCR Molecular Systems Engineering develops bioactive materials that could be suitable for this purpose. To achieve this, the researchers, led by Professor Cornelia Palivan, combine nanomaterials with natural molecules and cells.

In the journal ACS Nano, they now report how enzyme-loaded nanocapsules can enter cells and be integrated into their native signaling processes. By functionally coupling several nanocapsules, they are able to amplify a natural signaling pathway.

Protecting the cargo

In order to protect the enzymes from degradation in the cellular environment, the research team loaded them into small polymeric capsules. Molecules can enter the compartment through biological pores specifically inserted into its synthetic wall and react with the enzymes inside.

The researchers conducted experiments with nanocapsules harboring different enzymes that worked in tandem: the product of the first enzymatic reaction entered a second capsule and started the second reaction inside. These nanocapsules stayed operative for days and actively participated in natural reactions in mammalian cells.
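
As a rough sketch of how such tandem coupling works (a toy model with invented rate parameters, not the kinetics measured in the paper), the output of the first capsule can be fed in as the substrate of the second:

```python
# Toy two-step cascade: capsule 1 converts substrate S into intermediate I,
# capsule 2 converts I into product P. Simple Michaelis-Menten-like rates;
# all parameter values are made up for illustration.
def simulate_cascade(S=100.0, steps=1000, dt=0.01,
                     vmax1=5.0, km1=20.0, vmax2=8.0, km2=10.0):
    I = P = 0.0
    for _ in range(steps):
        r1 = vmax1 * S / (km1 + S)   # capsule 1: S -> I
        r2 = vmax2 * I / (km2 + I)   # capsule 2: I -> P
        S -= r1 * dt
        I += (r1 - r2) * dt
        P += r2 * dt
    return S, I, P

S, I, P = simulate_cascade()
print(f"substrate left: {S:.1f}, intermediate: {I:.1f}, product: {P:.1f}")
```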

Tiny loudspeakers and ears

One of the many signals that cells receive and process is nitric oxide (NO). It is a well-studied cellular mechanism, since defects in the NO signaling pathway are involved in the emergence of cardiovascular diseases as well as muscular and retinal dystrophies. The pathway encompasses the production of NO by an enzyme family called nitric oxide synthases (NOS). The NO can then diffuse to other cells, where it is sensed by another enzyme named soluble guanylate cyclase (sGC). The activation of sGC starts a cascade reaction, regulating a plethora of different processes such as smooth muscle relaxation and the processing of light by sensory cells.

The researchers led by Palivan produced capsules harboring NOS and sGC, which are naturally present in cells, but at much lower concentrations: the NOS capsules, producing NO, act like loudspeakers, “shouting” their signal loud and clear; the sGC capsules act as “ears,” sensing and processing the signal to amplify the response.

Using the intracellular concentration of calcium, which depends on the action of sGC, as an indicator, the scientists showed that the combination of NOS- and sGC-loaded capsules makes the cells much more reactive, with an 8-fold increase in the intracellular calcium level.

A new strategy for enzyme replacement therapy

“It’s a new strategy to stimulate such changes in cellular physiology by combining nanoscience with biomolecules,” comments Dr. Andrea Belluati, the first author of the study. “We just had to incubate our enzyme-loaded capsules with the cells, and they were ready to act at a moment’s notice.”

“This proof of concept is an important step in the field of enzyme replacement therapy for diseases where biochemical pathways malfunction, such as cardiovascular diseases or several dystrophies,” adds Cornelia Palivan.

Story Source:

Materials provided by University of Basel. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

The widespread footprint of blue jean microfibers

With many people working from home during the COVID-19 pandemic, blue jeans are a more popular wardrobe choice than ever. But most people don’t think about microscopic remnants of their comfy jeans and other clothing that are shed during laundering. Now, researchers reporting in ACS’ Environmental Science & Technology Letters have detected indigo denim microfibers not only in wastewater effluent, but also in lakes and remote Arctic marine sediments.

Over the past 100 years, the popularity of denim blue jeans has grown immensely, with many people wearing this type of clothing almost every day. Studies have shown that washing denim and other fabrics releases microfibers — tiny, elongated particles — to wastewater. Although most microfibers are removed by wastewater treatment plants, some could still enter the environment through wastewater discharge, also known as effluent. Blue jean denim is composed of natural cotton cellulose fibers, processed with synthetic indigo dye and other chemical additives to improve performance and durability. Miriam Diamond, Samantha Athey and colleagues wondered whether blue jeans were a major source of anthropogenic cellulose microfibers to the aquatic environment.

The researchers used a combination of microscopy and Raman spectroscopy to identify and count indigo denim microfibers in various water samples collected in Canada. Indigo denim made up 23%, 12% and 20% of all microfibers in sediments from the Great Lakes, shallow suburban lakes near Toronto, Canada, and the Canadian Arctic Archipelago, respectively. Despite the high abundance of denim microfibers in Great Lakes sediments, the team detected only a single denim microfiber in the digestive tract of a type of fish called rainbow smelt. Based on the levels of microfibers found in wastewater effluent, the researchers estimated that the wastewater treatment plants in the study discharged about 1 billion indigo denim microfibers per day. In laundering experiments, the researchers found that a single pair of used jeans could release about 50,000 microfibers per wash cycle. Although the team doesn’t know the effects, if any, that the microfibers have on aquatic life, a practical way to reduce denim microfiber pollution would be for consumers to wash their jeans less frequently, they say. Moreover, finding microfibers from blue jeans in the Arctic is a potent indicator of humans’ impact on the environment, the researchers add.
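
A quick back-of-envelope check, using only the rounded figures quoted above, shows how many jean-wash cycles the estimated daily discharge corresponds to:

```python
# Back-of-envelope check using the rounded figures quoted in the article.
fibers_per_day = 1_000_000_000   # denim microfibers discharged per day (study estimate)
fibers_per_wash = 50_000         # microfibers released by one pair of used jeans per wash

equivalent_washes = fibers_per_day / fibers_per_wash
print(f"~{equivalent_washes:,.0f} jean-wash cycles per day")   # ~20,000
```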

Story Source:

Materials provided by American Chemical Society. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

Giant leap for molecular measurements

Spectroscopy is an important tool of observation in many areas of science and industry. Infrared spectroscopy is especially important in the world of chemistry where it is used to analyze and identify different molecules. The current state-of-the-art method can make approximately 1 million observations per second. UTokyo researchers have greatly surpassed this figure with a new method about 100 times faster.

From climate science to safety systems, manufacture to quality control of foodstuffs, infrared spectroscopy is used in so many academic and industrial fields that it’s a ubiquitous, albeit invisible, part of everyday life. In essence, infrared spectroscopy is a way to identify what molecules are present in a sample of a substance with a high degree of accuracy. The basic idea has been around for decades and has undergone improvements along the way.

In general, infrared spectroscopy works by measuring infrared light transmitted or reflected from molecules in a sample. The sample’s inherent vibrations alter the characteristics of the light in very specific ways, essentially providing a chemical fingerprint, or spectrum, which is read by a detector and analyzer circuit or computer. Fifty years ago the best tools could measure one spectrum per second, and for many applications this was more than adequate.

More recently, a technique called dual-comb spectroscopy achieved a measurement rate of 1 million spectra per second. However, in many instances, more rapid observations are required in order to produce fine-grained data. For example, some researchers wish to explore the stages of certain chemical reactions that happen on very short time scales. This drive prompted Associate Professor Takuro Ideguchi from the Institute for Photon Science and Technology at the University of Tokyo and his team to create the fastest infrared spectroscopy system to date.

“We developed the world’s fastest infrared spectrometer, which runs at 80 million spectra per second,” said Ideguchi. “This method, time-stretch infrared spectroscopy, is about 100 times faster than dual-comb spectroscopy, which had reached an upper speed limit due to issues of sensitivity.” Given there are around 30 million seconds in a year, this new method can achieve in one second what 50 years ago would have taken over two years.
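
The comparison in the quote can be checked with simple arithmetic on the quoted rates:

```python
# Checking the comparison in the quote using the quoted figures.
new_rate = 80_000_000       # spectra per second, time-stretch infrared spectroscopy
old_rate = 1                # spectra per second, state of the art ~50 years ago
dual_comb_rate = 1_000_000  # spectra per second, dual-comb spectroscopy

seconds_per_year = 60 * 60 * 24 * 365
print(f"speed-up over dual-comb: {new_rate / dual_comb_rate:.0f}x")
print(f"years at 1 spectrum/s to match 1 s of output: "
      f"{new_rate / old_rate / seconds_per_year:.1f}")   # roughly 2.5 years
```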

Time-stretch infrared spectroscopy works by stretching a very short pulse of laser light transmitted through a sample. As the transmitted pulse is stretched, it becomes easier for a detector and the accompanying electronic circuitry to analyze it accurately. A key high-speed component that makes this possible is a quantum cascade detector, developed by one of the paper’s authors, Tatsuo Dougakiuchi from Hamamatsu Photonics.

“Natural science is based on experimental observations. Therefore, new measurement techniques can open up new scientific fields,” said Ideguchi. “Researchers in many fields can build on what we’ve done here and use our work to enhance their own understanding and powers of observation.”

Story Source:

Materials provided by University of Tokyo. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

Intelligent software tackles plant cell jigsaw puzzle

Imagine working on a jigsaw puzzle with so many pieces that even the edges seem indistinguishable from others at the puzzle’s centre. The solution seems nearly impossible. And, to make matters worse, this puzzle is in a futuristic setting where the pieces are not only numerous, but ever-changing. In fact, you not only must solve the puzzle, but “un-solve” it to parse out how each piece brings the picture wholly into focus.

That’s the challenge molecular and cellular biologists face in sorting through cells to study an organism’s structural origin and the way it develops, known as morphogenesis. If only there were a tool that could help. An eLife paper out this week shows there now is.

An EMBL research group led by Anna Kreshuk, a computer scientist and expert in machine learning, joined the DFG-funded FOR2581 consortium of plant biologists and computer scientists to develop a tool that could solve this cellular jigsaw puzzle. Starting with computer code and moving on to a more user-friendly graphical interface called PlantSeg, the team built a simple open-access method to provide the most accurate and versatile analysis of plant tissue development to date. The group included expertise from EMBL, Heidelberg University, the Technical University of Munich, and the Max Planck Institute for Plant Breeding Research in Cologne.

“Building something like PlantSeg that can take a 3D perspective of cells and actually separate them all is surprisingly hard to do, considering how easy it is for humans,” Kreshuk says. “Computers aren’t as good as humans when it comes to most vision-related tasks, as a rule. With all the recent development in deep learning and artificial intelligence at large, we are closer to solving this now, but it’s still not solved — not for all conditions. This paper is the presentation of our current approach, which took some years to build.”

If researchers want to look at morphogenesis of tissues at the cellular level, they need to image individual cells. Lots of cells means they also have to separate or “segment” them to see each cell individually and analyse the changes over time.

“In plants, you have cells that look extremely regular, which in a cross-section look like rectangles or cylinders,” Kreshuk says. “But you also have cells with so-called ‘high lobeness’ that have protrusions, making them look more like puzzle pieces. These are more difficult to segment because of their irregularity.”

Kreshuk’s team trained PlantSeg on 3D microscope images of reproductive organs and developing lateral roots of a common plant model, Arabidopsis thaliana, also known as thale cress. The algorithm needed to factor in the inconsistencies in cell size and shape. Sometimes cells were more regular, sometimes less. As Kreshuk points out, this is the nature of tissue.

A beautiful side of this research came from the microscopy images provided to the algorithm. The results manifested themselves in colourful renderings that delineate the cellular structures, making it easier to truly “see” the segmentation.

“We have giant puzzle boards with thousands of cells and then we’re essentially colouring each one of these puzzle pieces with a different colour,” Kreshuk says.

Plant biologists have long needed this kind of tool, as morphogenesis is at the crux of many developmental biology questions. This kind of algorithm allows for all kinds of shape-related analysis, for example, analysis of shape changes through development or under a change in environmental conditions, or between species. The paper gives some examples, such as characterising developmental changes in ovules, studying the first asymmetric cell division which initiates the formation of the lateral root, and comparing and contrasting the shape of leaf cells between two different plant species.

While this tool currently targets plants specifically, Kreshuk points out that it could be tweaked to be used for other living organisms as well.

Machine learning-based algorithms, like the ones used at the core of PlantSeg, are trained from correct segmentation examples. The group has trained PlantSeg on many plant tissue volumes, so that now it generalises quite well to unseen plant data. The underlying method is, however, applicable to any tissue with cell boundary staining and one could easily retrain it for animal tissue.

“If you have tissue where you have a boundary staining, like cell walls in plants or cell membranes in animals, this tool can be used,” Kreshuk says. “With this staining and at high enough resolution, plant cells look very similar to our cells, but they are not quite the same. The tool right now is really optimised for plants. For animals, we would probably have to retrain parts of it, but it would work.”
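
The boundary-based segmentation idea Kreshuk describes can be sketched in a few lines. The example below is a simplified stand-in that uses a synthetic boundary image and a plain watershed transform; it is not PlantSeg’s actual pipeline, which is built around machine-learning models trained on correctly segmented examples.

```python
# Minimal sketch of the boundary-map -> segmentation idea (illustrative only;
# not PlantSeg's trained pipeline). Cells are recovered as watershed basins
# separated by the high-valued boundary signal.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# Synthetic "boundary probability" map: high values along a grid of cell walls.
boundary = np.zeros((100, 100))
boundary[::20, :] = 1.0   # horizontal walls every 20 pixels
boundary[:, ::20] = 1.0   # vertical walls every 20 pixels

# Seed one marker inside each wall-enclosed region, then flood with watershed
# so basins stop where the boundary signal is high.
markers, _ = ndi.label(boundary < 0.5)
labels = watershed(boundary, markers)

print(f"segmented {labels.max()} cells")   # one label per enclosed region
```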

Currently, PlantSeg is an independent tool, but one that Kreshuk’s team will eventually merge into another tool her lab is working on, the ilastik Multicut workflow.

Go to Source
Author:

Categories
ScienceDaily

Researchers develop a yeast-based platform to boost production of rare natural molecules

Many modern medicines, including analgesics and opioids, are derived from rare molecules found in plants and bacteria. While they are effective against a host of ailments, a number of these molecules have proven to be difficult to produce in large quantities. Some are so labour intensive that it is uneconomical for pharmaceutical companies to produce them in sufficient amounts to bring them to market.

In a new study published in Nature Communications, Vincent Martin outlines a method to synthesize complex bioactive molecules much more quickly and efficiently.

One of the principal ingredients in this new technique developed by the biology professor and Concordia University Research Chair in Microbial Engineering and Synthetic Biology is simple baker’s yeast.

The single-cell organism has cellular processes that are similar to those of humans, giving biologists an effective substitute in drug development research. Using cutting-edge synthetic biology approaches, Martin and his colleagues in Berkeley, California were able to produce a large amount of benzylisoquinoline alkaloid (BIA) to synthesize an array of natural and new-to-nature chemical structures in a yeast-based platform.

This, he says, can provide a blueprint for the large-scale production of thousands of products, including the opioid analgesics morphine and codeine. The same is true for opioid antagonists naloxone and naltrexone, used to treat overdose and dependence.

A long journey from gene to market

Martin has been working toward this outcome for most of the past two decades. He began by researching the genetic code plants use to produce the molecules used as drugs by the pharmaceutical industry. Then came transplanting their genes and enzymes into yeast to see if production was possible outside a natural setting. The next step is industrial production.

“We showed in previous papers that we can get milligrams of these molecules fairly easily, but you’re only going to be able to commercialize the process if you get grams of it,” Martin explains. “In principle, we now have a technology platform where we can produce them on that scale.”

This, he says, can have huge implications for a country like Canada, which has to import most of the rare molecules used in drugs from overseas. That’s especially relevant now, in the midst of a global pandemic, when fragile supply chains are at risk of being disrupted.

“To me, this really highlights the importance of finding alternative biotech-type processes that can be developed into a homemade, Canadian pharmaceutical industry,” he adds. “Many of the ingredients we use today are not very difficult to make. But if we don’t have a reliable supply process in Canada, we have a problem.”

Healthy savings

Martin admits he is curious to see where the technology leads us. He believes researchers can and will use the new platform for the commercialization and discovery of new drugs.

“We demonstrate that by using this platform, we can start building what is called new-to-nature molecules,” he says. “By experimenting with enzymes and genes and the way we grow things, we can begin making these into tools that can be used in the drug discovery process. We can access a whole new structural space.”

This study was financially supported by a Natural Sciences and Engineering Research Council of Canada (NSERC) Industrial Biocatalysis Grant, an NSERC Discovery Grant and by River Stone Biotech ApS.

Story Source:

Materials provided by Concordia University. Original written by Patrick Lejtenyi. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

Mathematical tool helps calculate properties of quantum materials more quickly

Many quantum materials have been nearly impossible to simulate mathematically because the computing time required is too long. Now engineers have demonstrated a way to considerably reduce the computing time. This could accelerate the development of materials for energy-efficient IT technologies of the future.

Supercomputers around the world work around the clock on research problems. In principle, even novel materials can be simulated in computers in order to calculate their magnetic and thermal properties as well as their phase transitions. The gold standard for this kind of modelling is known as the quantum Monte Carlo method.

Wave-Particle Dualism

However, this method has an intrinsic problem: due to the wave-particle dualism of quantum systems, each particle in a solid-state compound not only possesses particle-like properties such as mass and momentum, but also wave-like properties such as phase. Interference causes the “waves” to be superposed on each other, so that they either amplify (add) or cancel (subtract) each other locally. This makes the calculations extremely complex and is referred to as the sign problem of the quantum Monte Carlo method.
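
The practical consequence of those cancelling signs can be illustrated with a generic toy calculation (not the Berlin team’s new procedure): observables are estimated as a ratio of sign-weighted averages, and the scatter of that ratio grows sharply as the average sign approaches zero.

```python
# Toy illustration of the Monte Carlo sign problem (generic; not the new method).
# When configuration weights can be negative, one samples from |w| and estimates
# <O> = <O * sign> / <sign>; as the average sign shrinks, the statistical
# scatter of that ratio blows up.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_trials = 5_000, 200

for p_negative in (0.0, 0.40, 0.49):   # fraction of negative-sign configurations
    estimates = []
    for _ in range(n_trials):
        obs = rng.normal(1.0, 0.5, n_samples)   # observable measured per configuration
        sign = np.where(rng.random(n_samples) < p_negative, -1.0, 1.0)
        estimates.append((obs * sign).mean() / sign.mean())
    avg_sign = 1.0 - 2.0 * p_negative
    # The true value stays near 1.0, but the estimator gets far noisier
    # as the average sign approaches zero.
    print(f"<sign> ~ {avg_sign:.2f}: scatter of estimate over trials = "
          f"{np.std(estimates):.3f}")
```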

Minimisation of the problem

“The calculation of quantum material characteristics costs about one million CPU hours on mainframe computers every day,” says Prof. Jens Eisert, who heads the joint research group at Freie Universität Berlin and the HZB. “This is a very considerable proportion of the total available computing time.” Together with his team, the theoretical physicist has now developed a mathematical procedure by which the computational cost of the sign problem can be greatly reduced. “We show that solid-state systems can be viewed from very different perspectives. The sign problem plays a different role in these different perspectives. It is then a matter of dealing with the solid-state system in such a way that the sign problem is minimised,” explains Dominik Hangleiter, first author of the study, which has now been published in Science Advances.

From simple spin systems to more complex ones

For simple solid-state systems with spins, which form what are known as Heisenberg ladders, this approach has enabled the team to considerably reduce the computational time for the sign problem. However, the mathematical tool can also be applied to more complex spin systems and promises faster calculation of their properties.

“This provides us with a new method for accelerated development of materials with special spin properties,” says Eisert. These types of materials could find application in future IT technologies for which data must be processed and stored with considerably less expenditure of energy.

Story Source:

Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

An AI algorithm to help identify homeless youth at risk of substance abuse

While many programs and initiatives have been implemented to address the prevalence of substance abuse among homeless youth in the United States, they don’t always include data-driven insights about environmental and psychological factors that could contribute to an individual’s likelihood of developing a substance use disorder.

Now, an artificial intelligence (AI) algorithm developed by researchers at the College of Information Sciences and Technology at Penn State could help predict susceptibility to substance use disorder among young homeless individuals, and suggest personalized rehabilitation programs for highly susceptible homeless youth.

“Proactive prevention of substance use disorder among homeless youth is much more desirable than reactive mitigation strategies such as medical treatments for the disorder and other related interventions,” said Amulya Yadav, assistant professor of information sciences and technology and principal investigator on the project. “Unfortunately, most previous attempts at proactive prevention have been ad-hoc in their implementation.”

“To assist policymakers in devising effective programs and policies in a principled manner, it would be beneficial to develop AI and machine learning solutions which can automatically uncover a comprehensive set of factors associated with substance use disorder among homeless youth,” added Maryam Tabar, a doctoral student in informatics and lead author on the project paper that will be presented at the Knowledge Discovery in Databases (KDD) conference in late August.

In that project, the research team built the model using a dataset collected from approximately 1,400 homeless youth, ages 18 to 26, in six U.S. states. The dataset was collected by the Research, Education and Advocacy Co-Lab for Youth Stability and Thriving (REALYST), which includes Anamika Barman-Adhikari, assistant professor of social work at the University of Denver and co-author of the paper.

The researchers then identified environmental, psychological and behavioral factors associated with substance use disorder among them — such as criminal history, victimization experiences and mental health characteristics. They found that adverse childhood experiences and physical street victimization were more strongly associated with substance use disorder than other types of victimization (such as sexual victimization) among homeless youth. Additionally, PTSD and depression were found to be more strongly associated with substance use disorder than other mental health disorders among this population, according to the researchers.

Next, the researchers divided their dataset into six smaller datasets to analyze geographical differences. The team trained a separate model to predict substance abuse disorder among homeless youth in each of the six states — which have varying environmental conditions, drug legalization policies and gang associations. The team observed several location-specific variations in the association level of some factors, according to Tabar.

“By looking at what the model has learned, we can effectively find out factors which may play a correlational role with people suffering from substance abuse disorder,” said Yadav. “And once we know these factors, we are much more accurately able to predict whether somebody suffers from substance use disorder.”

He added, “So if a policy planner or interventionist were to develop programs that aim to reduce the prevalence of substance abuse disorder, this could provide useful guidelines.”
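
The workflow Yadav describes, fitting a predictive model and then reading off which factors it weights most heavily, can be sketched as follows. The data and feature names below are synthetic placeholders, not the REALYST dataset or the team’s actual model.

```python
# Hedged sketch of the general workflow: fit a classifier, then inspect which
# factors it associates with the outcome. Data and feature names are synthetic;
# this is not the Penn State team's model or the REALYST dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
features = ["adverse_childhood_experiences", "physical_street_victimization",
            "ptsd_symptoms", "depression_score", "months_homeless"]

X = rng.normal(size=(1400, len(features)))           # ~1,400 synthetic records
true_weights = np.array([1.2, 1.0, 0.8, 0.7, 0.1])   # made-up ground truth
y = (X @ true_weights + rng.normal(scale=1.0, size=1400)) > 0

model = LogisticRegression().fit(StandardScaler().fit_transform(X), y)

# Larger absolute coefficients indicate stronger association with the outcome.
for name, coef in sorted(zip(features, model.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"{name:35s} {coef:+.2f}")
```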

Other authors on the KDD paper include Dongwon Lee, associate professor, and Stephanie Winkler, doctoral student, both in the Penn State College of Information Sciences and Technology; and Heesoo Park of Sungkyunkwan University.

Yadav and Barman-Adhikari are collaborating on a similar project through which they have developed a software agent that designs personalized rehabilitation programs for homeless youth suffering from opioid addiction. Their simulation results show that the software agent — called CORTA (Comprehensive Opioid Response Tool Driven by Artificial Intelligence) — outperforms baselines by approximately 110% in minimizing the number of homeless youth suffering from opioid addiction.

“We wanted to understand what the causative issues are behind people developing opiate addiction,” said Yadav. “And then we wanted to assign these homeless youth to the appropriate rehabilitation program.”

Yadav explained that data collected from more than 1,400 homeless youth in the U.S. was used to build AI models to predict the likelihood of opioid addiction among this population. After examining issues that could be the underlying cause of opioid addiction — such as foster care history or exposure to street violence — CORTA solves novel optimization formulations to assign personalized rehabilitation programs.

“For example, if a person developed an opioid addiction because they were isolated or didn’t have a social circle, then perhaps as part of their rehabilitation program they should talk to a counselor,” explained Yadav. “On the other hand, if someone developed an addiction because they were depressed because they couldn’t find a job or pay their bills, then a career counselor should be a part of the rehabilitation plan.”

Yadav added, “If you just treat the condition medically, once they go back into the real world, since the causative issue still remains, they’re likely to relapse.”
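
The assignment idea behind such a system can be pictured with a toy optimization. This is illustrative only, not CORTA’s actual formulation: each person/program pair gets an estimated cost, each program has a limited number of slots, and a solver picks the cheapest overall assignment.

```python
# Toy illustration of program assignment as an optimization problem
# (not CORTA's actual formulation). Each program has limited slots; each
# person/program pair has an estimated "cost" (e.g., predicted relapse risk).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n_people = 8
programs = {"counseling": 3, "career_support": 3, "medical": 2}

# Expand each program into its individual slots so the solver can give every
# person a distinct slot while respecting program capacities.
slot_names = [name for name, cap in programs.items() for _ in range(cap)]
cost = rng.random((n_people, len(slot_names)))   # made-up per-pair costs

rows, cols = linear_sum_assignment(cost)          # minimizes total cost
for person, slot in zip(rows, cols):
    print(f"person {person} -> {slot_names[slot]}")
```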

Yadav and Barman-Adhikari will present their paper on CORTA, “Optimal and Non-Discriminative Rehabilitation Program Design for Opioid Addiction Among Homeless Youth,” at the International Joint Conference on Artificial Intelligence-Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI), which was to be held in July 2020 but is being rescheduled due to the novel coronavirus pandemic.

Other collaborators on the CORTA project include Penn State doctoral students Roopali Singh (statistics), Nikolas Siapoutis (statistics) and Yu Liang (informatics).

Go to Source
Author:

Categories
ScienceDaily

Turning carbon dioxide into liquid fuel

Catalysts speed up chemical reactions and form the backbone of many industrial processes. For example, they are essential in transforming heavy oil into gasoline or jet fuel. Today, catalysts are involved in over 80 percent of all manufactured products.

A research team, led by the U.S. Department of Energy’s (DOE) Argonne National Laboratory in collaboration with Northern Illinois University, has discovered a new electrocatalyst that converts carbon dioxide (CO2) and water into ethanol with very high energy efficiency, high selectivity for the desired final product and low cost. Ethanol is a particularly desirable commodity because it is an ingredient in nearly all U.S. gasoline and is widely used as an intermediate product in the chemical, pharmaceutical and cosmetics industries.

“The process resulting from our catalyst would contribute to the circular carbon economy, which entails the reuse of carbon dioxide,” said Di-Jia Liu, senior chemist in Argonne’s Chemical Sciences and Engineering division and a UChicago CASE scientist in the Pritzker School of Molecular Engineering, University of Chicago. This process would do so by electrochemically converting the CO2 emitted from industrial processes, such as fossil fuel power plants or alcohol fermentation plants, into valuable commodities at reasonable cost.

The team’s catalyst consists of atomically dispersed copper on a carbon-powder support. By an electrochemical reaction, this catalyst breaks down CO2 and water molecules and selectively reassembles the broken molecules into ethanol under an external electric field. The electrocatalytic selectivity, or “Faradaic efficiency,” of the process is over 90 percent, much higher than that of any other reported process. What is more, the catalyst operates stably over extended periods at low voltage.
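
For readers unfamiliar with the term, Faradaic efficiency is the fraction of the electric charge passed that ends up in the desired product. A worked example with made-up quantities (not measurements from the Argonne study) shows the calculation for ethanol, whose formation from CO2 requires 12 electrons per molecule.

```python
# Worked example of a Faradaic efficiency calculation. The amounts below are
# made up for illustration; they are not measurements from the Argonne study.
F = 96485.0          # Faraday constant, coulombs per mole of electrons
n_electrons = 12     # CO2 reduction to ethanol: 2 CO2 + 12 H+ + 12 e- -> C2H5OH + 3 H2O

ethanol_mol = 1.0e-5      # hypothetical moles of ethanol produced
total_charge_c = 12.5     # hypothetical total charge passed, in coulombs

faradaic_efficiency = n_electrons * F * ethanol_mol / total_charge_c
print(f"Faradaic efficiency: {faradaic_efficiency:.0%}")   # ~93% in this example
```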

“With this research, we’ve discovered a new catalytic mechanism for converting carbon dioxide and water into ethanol,” said Tao Xu, a professor in physical chemistry and nanotechnology from Northern Illinois University. “The mechanism should also provide a foundation for development of highly efficient electrocatalysts for carbon dioxide conversion to a vast array of value-added chemicals.”

Because CO2 is a stable molecule, transforming it into a different molecule is normally energy intensive and costly. However, according to Liu, “We could couple the electrochemical process of CO2-to-ethanol conversion using our catalyst to the electric grid and take advantage of the low-cost electricity available from renewable sources like solar and wind during off-peak hours.” Because the process runs at low temperature and pressure, it can start and stop rapidly in response to the intermittent supply of the renewable electricity.

The team’s research benefited from two DOE Office of Science User Facilities at Argonne — the Advanced Photon Source (APS) and the Center for Nanoscale Materials (CNM) — as well as Argonne’s Laboratory Computing Resource Center (LCRC). “Thanks to the high photon flux of the X-ray beams at the APS, we have captured the structural changes of the catalyst during the electrochemical reaction,” said Tao Li, an assistant professor in the Department of Chemistry and Biochemistry at Northern Illinois University and an assistant scientist in Argonne’s X-ray Science division. These data, along with high-resolution electron microscopy at the CNM and computational modeling using the LCRC, revealed a reversible transformation from atomically dispersed copper to clusters of three copper atoms each upon application of a low voltage. The CO2-to-ethanol catalysis occurs on these tiny copper clusters. This finding sheds light on ways to further improve the catalyst through rational design.

“We have prepared several new catalysts using this approach and found that they are all highly efficient in converting CO2 to other hydrocarbons,” said Liu. “We plan to continue this research in collaboration with industry to advance this promising technology.”

Story Source:

Materials provided by DOE/Argonne National Laboratory. Original written by Joseph E. Harmon. Note: Content may be edited for style and length.

Go to Source
Author: