Shape matters for light-activated nanocatalysts

Points matter when designing nanoparticles that drive important chemical reactions using the power of light.

Researchers at Rice University’s Laboratory for Nanophotonics (LANP) have long known that a nanoparticle’s shape affects how it interacts with light, and their latest study shows how shape affects a particle’s ability to use light to catalyze important chemical reactions.

In a comparative study, LANP graduate students Lin Yuan and Minhan Lou and their colleagues studied aluminum nanoparticles with identical optical properties but different shapes. The most rounded had 14 sides and 24 blunt points. Another was cube-shaped, with six sides and eight 90-degree corners. The third, which the team dubbed “octopod,” also had six sides, but each of its eight corners ended in a pointed tip.

All three varieties can capture energy from light and release it periodically in the form of super-energetic hot electrons that can speed up catalytic reactions. Yuan, a chemist in the research group of LANP director Naomi Halas, conducted experiments to see how well each of the particles performed as a photocatalyst for the hydrogen dissociation reaction. The tests showed that the octopods’ reaction rate was 10 times higher than that of the 14-sided nanocrystals and five times higher than that of the nanocubes. The octopods also had a lower apparent activation energy: about 45% lower than the nanocubes’ and 49% lower than the nanocrystals’.
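As a rough back-of-the-envelope illustration (not a calculation from the paper), the Arrhenius relation links a barrier reduction to a rate increase. The sketch below assumes equal pre-exponential factors and an illustrative temperature of 300 K, and asks how much of a barrier reduction would by itself account for the reported tenfold rate enhancement.

```python
import math

# A minimal Arrhenius-type estimate (not from the paper): if two catalysts
# share the same pre-exponential factor A, their rate constants relate as
#   k1 / k2 = exp(-(Ea1 - Ea2) / (R * T)).
# Here we ask what barrier reduction would, on its own, account for a
# 10-fold rate increase at an assumed temperature of 300 K.

R = 8.314           # gas constant, J/(mol K)
T = 300.0           # assumed temperature in kelvin, for illustration only
rate_ratio = 10.0   # octopod vs. 14-sided nanocrystal, as reported above

delta_Ea = R * T * math.log(rate_ratio)   # barrier difference in J/mol
print(f"Equivalent barrier reduction: {delta_Ea / 1000:.1f} kJ/mol")
# -> about 5.7 kJ/mol; the real comparison also involves differing
#    hot-electron generation at sharp tips, so this is only a rough bound.
```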

“The experiments demonstrated that sharper corners increased efficiencies,” said Yuan, co-lead author of the study, which is published in the American Chemical Society journal ACS Nano. “For the octopods, the angle of the corners is about 60 degrees, compared to 90 degrees for the cubes and more rounded points on the nanocrystals. So the smaller the angle, the greater the increase in reaction efficiencies. But how small the angle can be is limited by chemical synthesis. These are single crystals that prefer certain structures. You cannot make infinitely more sharpness.”

Lou, a physicist and study co-lead author in the research group of LANP’s Peter Nordlander, verified the results of the catalytic experiments by developing a theoretical model of the hot electron energy transfer process between the light-activated aluminum nanoparticles and hydrogen molecules.

“We input the wavelength of light and particle shape,” Lou said. “Using these two aspects, we can accurately predict which shape will produce the best catalyst.”

The work is part of an ongoing green chemistry effort by LANP to develop commercially viable light-activated nanocatalysts that can insert energy into chemical reactions with surgical precision. LANP has previously demonstrated catalysts for ethylene and syngas production, the splitting of ammonia to produce hydrogen fuel and for breaking apart “forever chemicals.”

“This study shows that photocatalyst shape is another design element engineers can use to create photocatalysts with higher reaction rates and lower activation barriers,” said Halas, Rice’s Stanley C. Moore Professor of Electrical and Computer Engineering, director of Rice’s Smalley-Curl Institute and a professor of chemistry, bioengineering, physics and astronomy, and materials science and nanoengineering.

Story Source:

Materials provided by Rice University. Note: Content may be edited for style and length.

Physicists develop basic principles for mini-labs on chips

Colloidal particles have become increasingly important for research as vehicles of biochemical agents. In future, it will be possible to study their behaviour much more efficiently than before by placing them on a magnetised chip. A research team from the University of Bayreuth reports on these new findings in the journal Nature Communications. The scientists have discovered that colloidal rods can be moved on a chip quickly, precisely, and in different directions, almost like chess pieces. A pre-programmed magnetic field even enables these controlled movements to occur simultaneously.

For the recently published study, the research team, led by Prof. Dr. Thomas Fischer, Professor of Experimental Physics at the University of Bayreuth, worked closely with partners at the University of Poznań and the University of Kassel. To begin with, individual spherical colloidal particles constituted the building blocks for rods of different lengths. These particles were assembled in such a way as to allow the rods to move in different directions on a magnetised chip like upright chess figures — as if by magic, but in fact determined by the characteristics of the magnetic field.

In a further step, the scientists succeeded in eliciting individual movements in various directions simultaneously. The critical factor here was the “programming” of the magnetic field with the aid of a mathematical code which, in encoded form, specifies all the movements to be performed by the figures. When these movements are carried out simultaneously, they take only about one tenth of the time needed when they are carried out one after the other, like moves on a chessboard.
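The scheduling idea can be pictured with a toy example (the rod names and move lists below are hypothetical, not from the study): when every rod executes one move per field cycle, the total run time is set by the longest move list rather than by the sum of all of them.

```python
# Toy illustration (not the authors' code): each rod has a list of moves,
# and a pre-programmed field executes one move per rod in every cycle,
# so the total number of field cycles is set by the longest list rather
# than by the sum of all lists.

moves = {
    "rod_A": ["N", "NE", "E"],          # hypothetical move sequences
    "rod_B": ["S", "S", "SW", "W"],
    "rod_C": ["E"],
}

sequential_cycles = sum(len(m) for m in moves.values())   # one rod at a time
parallel_cycles = max(len(m) for m in moves.values())     # all rods per cycle

print(f"sequential: {sequential_cycles} cycles, parallel: {parallel_cycles} cycles")
# With many rods of similar list length, the parallel schedule approaches the
# roughly tenfold saving mentioned above.
```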

“The simultaneity of differently directed movements makes research into colloidal particles and their dynamics much more efficient,” says Adrian Ernst, doctoral student in the Bayreuth research team and co-author of the publication.

“Miniaturised laboratories on small chips measuring just a few centimetres in size are being used more and more in basic physics research to gain insights into the properties and dynamics of materials. Our new research results reinforce this trend. Because colloidal particles are in many cases very well suited as vehicles for active substances, our research results could be of particular benefit to biomedicine and biotechnology,” says Mahla Mirzaee-Kakhki, first author and Bayreuth doctoral student.

Story Source:

Materials provided by Universität Bayreuth. Note: Content may be edited for style and length.

Giant leap for molecular measurements

Spectroscopy is an important tool of observation in many areas of science and industry. Infrared spectroscopy is especially important in the world of chemistry where it is used to analyze and identify different molecules. The current state-of-the-art method can make approximately 1 million observations per second. UTokyo researchers have greatly surpassed this figure with a new method about 100 times faster.

From climate science to safety systems, manufacture to quality control of foodstuffs, infrared spectroscopy is used in so many academic and industrial fields that it’s a ubiquitous, albeit invisible, part of everyday life. In essence, infrared spectroscopy is a way to identify what molecules are present in a sample of a substance with a high degree of accuracy. The basic idea has been around for decades and has undergone improvements along the way.

In general, infrared spectroscopy works by measuring infrared light transmitted or reflected from molecules in a sample. The molecules’ inherent vibrations alter the characteristics of the light in very specific ways, essentially providing a chemical fingerprint, or spectrum, which is read by a detector and an analyzer circuit or computer. Fifty years ago the best tools could measure one spectrum per second, and for many applications this was more than adequate.

More recently, a technique called dual-comb spectroscopy achieved a measurement rate of 1 million spectra per second. However, in many instances, more rapid observations are required in order to produce fine-grained data. For example, some researchers wish to explore the stages of certain chemical reactions that happen on very short time scales. This need prompted Associate Professor Takuro Ideguchi from the Institute for Photon Science and Technology at the University of Tokyo and his team to create the fastest infrared spectroscopy system to date.

“We developed the world’s fastest infrared spectrometer, which runs at 80 million spectra per second,” said Ideguchi. “This method, time-stretch infrared spectroscopy, is about 100 times faster than dual-comb spectroscopy, which had reached an upper speed limit due to issues of sensitivity.” Given there are around 30 million seconds in a year, this new method can achieve in one second what 50 years ago would have taken over two years.
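The comparison is easy to verify from the quoted figures alone; the short sketch below simply redoes that arithmetic.

```python
# Quick sanity check of the comparison made above, using only the numbers
# quoted in the text.

spectra_per_second_new = 80_000_000     # time-stretch infrared spectroscopy
spectra_per_second_1970 = 1             # roughly one spectrum per second 50 years ago
seconds_per_year = 30_000_000           # "around 30 million seconds in a year"

years_needed_then = spectra_per_second_new / spectra_per_second_1970 / seconds_per_year
print(f"{years_needed_then:.1f} years")   # about 2.7 years, i.e. "over two years"
```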

Time-stretch infrared spectroscopy works by stretching a very short pulse of laser light transmitted through a sample. As the transmitted pulse is stretched, it becomes easier for a detector and the accompanying electronic circuitry to analyze accurately. A key high-speed component that makes this possible is a quantum cascade detector, developed by one of the paper’s authors, Tatsuo Dougakiuchi from Hamamatsu Photonics.

“Natural science is based on experimental observations. Therefore, new measurement techniques can open up new scientific fields,” said Ideguchi. “Researchers in many fields can build on what we’ve done here and use our work to enhance their own understanding and powers of observation.”

Story Source:

Materials provided by University of Tokyo. Note: Content may be edited for style and length.

World record: Plasma accelerator operates right around the clock

A team of researchers at DESY has reached an important milestone on the road to the particle accelerator of the future. For the first time, a so-called laser plasma accelerator has run for more than a day while continuously producing electron beams. The LUX beamline, jointly developed and operated by DESY and the University of Hamburg, achieved a run time of 30 hours. “This brings us a big step closer to the steady operation of this innovative particle accelerator technology,” says DESY’s Andreas R. Maier, the leader of the group. The scientists are reporting on their record in the journal Physical Review X. “The time is ripe to move laser plasma acceleration from the laboratory to practical applications,” adds the director of DESY’s Accelerator Division, Wim Leemans.

Physicists hope that the technique of laser plasma acceleration will lead to a new generation of powerful and compact particle accelerators offering unique properties for a wide range of applications. In this technique, a laser or energetic particle beam creates a plasma wave inside a fine capillary. A plasma is a gas in which the gas molecules have been stripped of their electrons. LUX uses hydrogen as the gas.

“The laser pulses plough their way through the gas in the form of narrow discs, stripping the electrons from the hydrogen molecules and sweeping them aside like a snow plough,” explains Maier, who works at the Centre for Free-Electron Laser Science (CFEL), a joint enterprise between DESY, the University of Hamburg and the Max Planck Society. “Electrons in the wake of the pulse are accelerated by the positively charged plasma wave in front of them — much like a wakeboarder rides the wave behind the stern of a boat.”

This phenomenon allows laser plasma accelerators to achieve acceleration strengths that are up to a thousand times greater than what could be provided by today’s most powerful machines. Plasma accelerators will enable more compact and powerful systems for a wide range of applications, from fundamental research to medicine. A number of technical challenges still need to be overcome before these devices can be put to practical use. “Now that we are able to operate our beamline for extended periods of time, we will be in a better position to tackle these challenges,” explains Maier.

During the record-breaking nonstop operation, the physicists accelerated more than 100,000 electron bunches, one every second. Thanks to this large dataset, the properties of the accelerator, the laser and the bunches can be correlated and analysed much more precisely. “Unwanted variations in the electron beam can be traced back to specific points in the laser, for example, so that we now know exactly where we need to start in order to produce an even better particle beam,” says Maier. “This approach lays the foundations for an active stabilisation of the beams, such as is deployed on every high-performance accelerator in the world,” explains Leemans.
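A quick arithmetic check, using only the numbers quoted above, shows how the bunch count and the run time fit together.

```python
# Consistency check using only the figures quoted above: one electron bunch
# per second, more than 100,000 bunches in total.

bunches = 100_000
seconds_per_hour = 3600
print(f"{bunches / seconds_per_hour:.1f} hours")   # ~27.8 hours of nonstop running,
# which fits comfortably within the 30-hour record run described above.
```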

According to Maier, the key to success was combining expertise from two different fields: plasma acceleration and know-how in stable accelerator operation. “Both are available at DESY, which is unparalleled in the world in this respect,” Maier emphasises. According to him, numerous factors contributed to the accelerator’s stable long-term operation, from vacuum technology and laser expertise to a comprehensive and sophisticated control system. “In principle, the system could have kept running for even longer, but we stopped it after 30 hours,” reports Maier. “Since then, we have repeated such runs three more times.”

“This work demonstrates that laser plasma accelerators can generate a reproducible and controllable output. This provides a concrete basis for developing this technology further, in order to build future accelerator-based light sources at DESY and elsewhere,” Leemans summarises.

Story Source:

Materials provided by Deutsches Elektronen-Synchrotron DESY. Note: Content may be edited for style and length.

Stack and twist: Physicists accelerate the hunt for revolutionary new materials

Scientists at the University of Bath have taken an important step towards understanding the interaction between layers of atomically thin materials arranged in stacks. They hope their research will speed up the discovery of new, artificial materials, leading to the design of electronic components that are far tinier and more efficient than anything known today.

Smaller is always better in the world of electronic circuitry, but there’s a limit to how far you can shrink a silicon component without it overheating and falling apart, and we’re close to reaching it. The researchers are investigating a group of atomically thin materials that can be assembled into stacks. The properties of any final material depend both on the choice of raw materials and on the angle at which one layer is arranged on top of another.

Dr Marcin Mucha-Kruczynski, who led the research from the Department of Physics, said: “We’ve found a way to determine how strongly atoms in different layers of a stack are coupled to each other, and we’ve demonstrated the application of our idea to a structure made of graphene layers.”

The Bath research, published in Nature Communications, is based on earlier work into graphene — a crystal made up of atomically thin sheets of carbon atoms arranged in a honeycomb pattern. In 2018, scientists at the Massachusetts Institute of Technology (MIT) found that when two layers of graphene are stacked and then twisted relative to each other by the ‘magic’ angle of 1.1°, they produce a material with superconducting properties. This was the first time scientists had created a superconducting material made purely from carbon. However, these properties disappeared with the smallest change of angle between the two layers of graphene.
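One way to see why such a tiny twist matters is the standard moiré-period relation for two identical twisted lattices; it is textbook geometry rather than a result of the Bath paper, and the sketch below applies it to the 1.1° angle quoted above.

```python
import math

# Standard moire-period relation for two identical hexagonal lattices
# twisted by a small angle theta (not specific to the Bath paper):
#   L = a / (2 * sin(theta / 2))
a = 0.246          # graphene lattice constant in nanometres
theta_deg = 1.1    # the 'magic' angle quoted above

theta = math.radians(theta_deg)
L = a / (2 * math.sin(theta / 2))
print(f"moire period ~ {L:.1f} nm")   # roughly 13 nm, about 50x the lattice constant
# A shift of even 0.1 degrees changes this period noticeably, which is one
# reason the superconducting behaviour is so sensitive to the twist angle.
```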

Since the MIT discovery, scientists around the world have been attempting to apply this ‘stacking and twisting’ phenomenon to other ultra-thin materials, placing two or more different atomically thin structures together in the hope of forming entirely new materials with special qualities.

“In nature, you can’t find materials where each atomic layer is different,” said Dr Mucha-Kruczynski. “What’s more, two materials can normally only be put together in one specific fashion because chemical bonds need to form between layers. But for materials like graphene, only the chemical bonds between atoms on the same plane are strong. The forces between planes — known as van der Waals interactions — are weak, and this allows for layers of material to be twisted with respect to each other.”

The challenge for scientists now is to make the process of discovering new, layered materials as efficient as possible. By finding a formula that allows them to predict the outcome when two or more materials are stacked, they will be able to streamline their research enormously.

It is in this area that Dr Mucha-Kruczynski and his collaborators at the University of Oxford, Peking University and ELETTRA Synchrotron in Italy expect to make a difference.

“The number of combinations of materials and the number of angles at which they can be twisted is too large to try out in the lab, so what we can predict is important,” said Dr Mucha-Kruczynski.

The researchers have shown that the interaction between two layers can be determined by studying a three-layer structure where two layers are assembled as you might find in nature, while the third is twisted. They used angle-resolved photoemission spectroscopy — a process in which powerful light ejects electrons from the sample so that the energy and momentum of the electrons can be measured, providing insight into the properties of the material — to determine how strongly two carbon atoms at a given distance from each other are coupled. They also demonstrated that their result can be used to predict properties of other stacks made of the same layers, even if the twists between layers are different.

The list of known atomically thin materials like graphene is growing all the time. It already includes dozens of entries displaying a vast range of properties, from insulation to superconductivity, transparency to optical activity, brittleness to flexibility. The latest discovery provides a method for experimentally determining the interaction between layers of any of these materials. This is essential for predicting the properties of more complicated stacks and for the efficient design of new devices.

Dr Mucha-Kruczynski believes it could be 10 years before new stacked and twisted materials find a practical, everyday application. “It took a decade for graphene to move from the laboratory to something useful in the usual sense, so with a hint of optimism, I expect a similar timeline to apply to new materials,” he said.

Building on the results of his latest study, Dr Mucha-Kruczynski and his team are now focusing on twisted stacks made from layers of transition metal dichalcogenides (a large group of materials featuring two very different types of atoms — a metal and a chalcogen, such as sulphur). Some of these stacks have shown fascinating electronic behaviour which the scientists are not yet able to explain.

“Because we’re dealing with two radically different materials, studying these stacks is complicated,” explained Dr Mucha-Kruczynski. “However, we’re hopeful that in time we’ll be able to predict the properties of various stacks, and design new multifunctional materials.”

Recovering data: Neural network model finds small objects in dense images

In efforts to automatically capture important data from scientific papers, computer scientists at the National Institute of Standards and Technology (NIST) have developed a method that can accurately detect small, geometric objects such as triangles within dense, low-quality plots contained in image data. Employing a neural network approach designed to detect patterns, the NIST model has many possible applications in modern life.

NIST’s neural network model captured 97% of objects in a defined set of test images, locating the objects’ centers to within a few pixels of manually selected locations.

“The purpose of the project was to recover the lost data in journal articles,” NIST computer scientist Adele Peskin explained. “But the study of small, dense object detection has a lot of other applications. Object detection is used in a wide range of image analyses, self-driving cars, machine inspections, and so on, for which small, dense objects are particularly hard to locate and separate.”

The researchers took the data from journal articles dating as far back as the early 1900s in a database of metallic properties at NIST’s Thermodynamics Research Center (TRC). Often the results were presented only in graphical format, sometimes drawn by hand and degraded by scanning or photocopying. The researchers wanted to extract the locations of data points to recover the original, raw data for additional analysis. Until now such data have been extracted manually.

The images present data points with a variety of different markers, mainly circles, triangles, and squares, both filled and open, of varying size and clarity. Such geometrical markers are often used to label data in a scientific graph. Text, numbers and other symbols, which can falsely appear to be data points, were manually removed from a subset of the figures with graphics editing software before training the neural networks.

Accurately detecting and localizing the data markers was a challenge for several reasons. The markers are inconsistent in clarity and exact shape; they may be open or filled and are sometimes fuzzy or distorted. Some circles appear extremely circular, for example, whereas others do not have enough pixels to fully define their shape. In addition, many images contain very dense patches of overlapping circles, squares, and triangles.

The researchers sought to create a network model that identified plot points at least as accurately as manual detection — within 5 pixels of the actual location on a plot size of several thousand pixels per side.

As described in a new journal paper, NIST researchers adopted a network architecture originally developed by German researchers for analyzing biomedical images, called U-Net. First the image dimensions are contracted to reduce spatial information, and then layers of feature and context information are added to build up precise, high-resolution results.
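The following sketch shows the general U-Net idea in PyTorch: contract the image to gather context, then upsample and reuse the high-resolution features through a skip connection. It is a minimal illustration, not the NIST network; the layer widths and the two output channels (a shape map and a centre map) are assumptions.

```python
# A minimal U-Net-style encoder/decoder sketch in PyTorch, to illustrate the
# "contract, then rebuild at high resolution with skip connections" idea
# described above. It is not the NIST model; layer sizes and the two output
# channels are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)        # full-resolution features
        self.enc2 = conv_block(16, 32)           # half-resolution context
        self.pool = nn.MaxPool2d(2)              # contracts spatial dimensions
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)           # 32 = upsampled 16 + skip 16
        self.head = nn.Conv2d(16, out_ch, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                        # keep for the skip connection
        e2 = self.enc2(self.pool(e1))            # context at lower resolution
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)                     # per-pixel marker/centre maps

model = TinyUNet()
dummy_plot = torch.zeros(1, 1, 128, 128)         # a fake 128x128 grayscale plot
print(model(dummy_plot).shape)                   # -> torch.Size([1, 2, 128, 128])
```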

To help train the network to classify marker shapes and locate their centers, the researchers experimented with four ways of marking the training data with masks, using different-sized center markings and outlines for each geometric object.

The researchers found that adding more information to the masks, such as thicker outlines, increased the accuracy of classifying object shapes but reduced the accuracy of pinpointing their locations on the plots. In the end, the researchers combined the best aspects of several models to get the best classification and smallest location errors. Altering the masks turned out to be the best way to improve network performance, more effective than other approaches such as small changes at the end of the network.
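A hedged sketch of the masking step: each marker can be rasterized into an outline mask and a small centre mask, and the sizes of those two regions are exactly the knobs described above. The shapes and radii below are invented for illustration.

```python
# Hedged sketch of the masking idea: for each marker we rasterize both an
# "outline" mask and a small "centre" mask, whose sizes are the parameters the
# researchers varied. The shape and radii here are made up for illustration.
import numpy as np

def disk(shape, center, radius):
    """Boolean mask of a filled disk on a grid of the given shape."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

plot = np.zeros((64, 64), dtype=np.uint8)
marker_centre = (32, 40)                       # a hypothetical data point

outline_mask = disk(plot.shape, marker_centre, radius=6) & ~disk(plot.shape, marker_centre, radius=4)
centre_mask  = disk(plot.shape, marker_centre, radius=2)

print(outline_mask.sum(), centre_mask.sum())   # pixel counts of the two masks
# Thicker outlines give the network more evidence about the marker's shape;
# smaller centre masks sharpen its estimate of where the marker actually sits.
```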

The network’s best performance — an accuracy of 97% in locating object centers — was possible only for a subset of images in which plot points were originally represented by very clear circles, triangles, and squares. The performance is good enough for the TRC to use the neural network to recover data from plots in newer journal papers.

Although NIST researchers currently have no plans for follow-up studies, the neural network model “absolutely” could be applied to other image analysis problems, Peskin said.

Methanol synthesis: Insights into the structure of an enigmatic catalyst

Methanol is one of the most important basic chemicals used, for example, to produce plastics or building materials. To render the production process even more efficient, it would be helpful to know more about the copper/zinc oxide/aluminium oxide catalyst deployed in methanol production. To date, however, it hasn’t been possible to analyse the structure of its surface under reaction conditions. A team from Ruhr-Universität Bochum (RUB) and the Max Planck Institute for Chemical Energy Conversion (MPI CEC) has now succeeded in gaining insights into the structure of its active site. The researchers describe their findings in the journal Nature Communications from 4 August 2020.

In a first, the team showed that the zinc component of the active site is positively charged and that the catalyst in fact has two copper-based active sites. “The state of the zinc component at the active site has been the subject of controversial discussion since the catalyst was introduced in the 1960s. Based on our findings, we can now derive numerous ideas on how to optimise the catalyst in the future,” outlines Professor Martin Muhler, Head of the Department of Industrial Chemistry at RUB and Max Planck Fellow at MPI CEC. For the project, he collaborated with Bochum-based researcher Dr. Daniel Laudenschleger and Mülheim-based researcher Dr. Holger Ruland.

Sustainable methanol production

The study was embedded in the Carbon-2-Chem project, the aim of which is to reduce CO2 emissions by utilising metallurgical gases produced during steel production for the manufacture of chemicals. In combination with electrolytically produced hydrogen, metallurgical gases could also serve as a starting material for sustainable methanol synthesis. As part of the Carbon-2-Chem project, the research team recently examined how impurities in metallurgical gases, such as are produced in coking plants or blast furnaces, affect the catalyst. This research ultimately paved the way for insights into the structure of the active site.

Active site deactivated for analysis

The researchers had identified nitrogen-containing molecules, namely ammonia and amines, as impurities that act as catalyst poisons. They deactivate the catalyst, but not permanently: if the impurities disappear, the catalyst recovers by itself. Using a unique research apparatus that was developed in-house, a continuously operated flow apparatus with an integrated high-pressure pulse unit, the researchers passed ammonia and amines over the catalyst surface, temporarily deactivating the active site that contains the zinc component. Even with this site deactivated, another reaction still took place on the catalyst: the conversion of ethene to ethane. The researchers thus detected a second active site operating in parallel, one that contains metallic copper but no zinc component.

Since ammonia and the amines are bound to positively charged metal ions on the surface, it was evident that zinc, as part of the active site, carries a positive charge.

Story Source:

Materials provided by Ruhr-University Bochum. Note: Content may be edited for style and length.

Scientists discover new class of semiconducting entropy-stabilized materials

Semiconductors are important materials in numerous functional applications such as digital and analog electronics, solar cells, LEDs, and lasers. Semiconducting alloys are particularly useful for these applications since their properties can be engineered by tuning the mixing ratio or the alloy ingredients. However, the synthesis of multicomponent semiconductor alloys has been a big challenge due to thermodynamic phase segregation of the alloy into separate phases. Recently, University of Michigan researchers Emmanouil (Manos) Kioupakis and Pierre F. P. Poudeu, both in the Materials Science and Engineering Department, utilized entropy to stabilize a new class of semiconducting materials, based on GeSnPbSSeTe high-entropy chalcogenide alloys, a discovery that paves the way for wider adoption of entropy-stabilized semiconductors in functional applications.

Entropy, a thermodynamic quantity that quantifies the degree of disorder in a material, has been exploited to synthesize a vast array of novel materials by mixing each component in an equimolar fashion, from high-entropy metallic alloys to entropy-stabilized ceramics. Despite having a large enthalpy of mixing, these materials can surprisingly crystallize in a single crystal structure, enabled by the large configurational entropy of the lattice. Kioupakis and Poudeu hypothesized that this principle of entropy stabilization can be applied to overcome the synthesis challenges of semiconducting alloys that prefer to segregate into thermodynamically more stable compounds. They tested their hypothesis on a 6-component II-VI chalcogenide alloy derived from the PbTe structure by mixing Ge, Sn, and Pb on the cation site, and S, Se, and Te on the anion site.
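The entropy term they rely on can be written down directly: for ideal mixing, each sublattice contributes -R Σ x ln x, so equimolar mixing of three cations and three anions gives 2R ln 3 per formula unit. The short calculation below (a textbook relation, not a number taken from the paper) puts a figure on it.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def config_entropy(fractions):
    """Ideal configurational mixing entropy, -R * sum(x ln x), for one sublattice."""
    return -R * sum(x * math.log(x) for x in fractions if x > 0)

# Equimolar mixing of three species on the cation site (Ge, Sn, Pb) and
# three on the anion site (S, Se, Te), as in the alloy described above.
cation = [1/3, 1/3, 1/3]
anion  = [1/3, 1/3, 1/3]

S_total = config_entropy(cation) + config_entropy(anion)
print(f"{S_total:.1f} J/(mol K)")   # = 2 R ln 3, about 18.3 J/(mol K)
# The -T*S term this contributes to the free energy at the growth temperature
# is what can outweigh a positive mixing enthalpy and stabilize a single phase.
```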

Using high-throughput first-principles calculations, Kioupakis uncovered the complex interplay between enthalpy and entropy in GeSnPbSSeTe high-entropy chalcogenide alloys. He found that the large configurational entropy from both the anion and cation sublattices stabilizes the alloys into single-phase rocksalt solid solutions at the growth temperature. Despite being metastable at room temperature, these solid solutions can be preserved by fast cooling under ambient conditions. Poudeu later verified the theoretical predictions by synthesizing the equimolar composition (Ge1/3Sn1/3Pb1/3S1/3Se1/3Te1/3) via a two-step solid-state reaction followed by fast quenching in liquid nitrogen. The synthesized powder showed well-defined XRD patterns corresponding to a pure rocksalt structure. Furthermore, they observed a reversible phase transition between the single-phase solid solution and multiple-phase segregation in DSC analysis and temperature-dependent XRD, which is a key signature of entropy stabilization.

What makes high-entropy chalcogenides intriguing is their functional properties. Previously discovered high-entropy materials are either conducting metals or insulating ceramics, with a clear dearth in the semiconducting regime. Kioupakis and Poudeu found that the equimolar GeSnPbSSeTe is an ambipolarly dopable semiconductor, with evidence from a calculated band gap of 0.86 eV and a sign reversal of the measured Seebeck coefficient upon p-type doping with Na acceptors and n-type doping with Bi donors. The alloy also exhibits an ultralow thermal conductivity that is nearly independent of temperature. These fascinating functional properties make GeSnPbSSeTe a promising new material for electronic, optoelectronic, photovoltaic, and thermoelectric devices.

Entropy stabilization is a general and powerful method to realize a vast array of materials compositions. The discovery of entropy stabilization in semiconducting chalcogenide alloys by the team at UM is only the tip of the iceberg that can pave the way for novel functional applications of entropy-stabilized materials.

Story Source:

Materials provided by University of Michigan College of Engineering. Note: Content may be edited for style and length.

Portable DNA device can detect tree pests in under two hours

Asian gypsy moths feed on a wide range of important plants and trees. White pine blister rust can kill young trees in only a couple of years. But it’s not always easy to detect the presence of these destructive species just by looking at spots and bumps on a tree, or on the exterior of a cargo ship.

Now a new rapid DNA detection method developed at the University of British Columbia can identify these pests and pathogens in less than two hours, without using complicated processes or chemicals — a substantial time savings compared to the several days it currently takes to send samples to a lab for testing.

“Sometimes, a spot is just a spot,” explains forestry professor Richard Hamelin, who designed the system with collaborators from UBC, Natural Resources Canada and the Canadian Food Inspection Agency. “Other times, it’s a deadly fungus or an exotic bug that has hitched a ride on a shipping container and has the potential to decimate local parks, forests and farms. So you want to know as soon as possible what you’re looking at, so that you can collect more samples to assess the extent of the invasion or begin to formulate a plan of action.”

Hamelin’s research focuses on using genomics to design better detection and monitoring methods for invasive pests and pathogens that threaten forests. For almost 25 years, he’s been looking for a fast, accurate, inexpensive DNA test that can be performed even in places like forests, without fast Internet or a steady power supply.

He may have found it. The method, demonstrated in a preview last year for forestry policymakers in Ottawa, is straightforward. Tiny samples like parts of leaves or branches, or insect parts like wings and antennae, are dropped into a tube and popped into a small, battery-powered device (the Franklin thermocycler, made by Philadelphia-based Biomeme). The device checks to see if these DNA fragments match the genomic material of the target species and generates a signal that can be visualized on a paired smartphone.

“With this system, we can tell with nearly 100 per cent accuracy if it is a match or not, if we’re looking at a threatening invasive species or one that’s benign,” said Hamelin. “We can analyze up to nine samples from the same or different species at a time, and it’s all lightweight enough — the thermocycler weighs only 1.3 kilos — to fit into your backpack with room to spare.”

The method relies on PCR testing, the same technique that currently serves as the gold standard for COVID-19 testing. PCR effectively analyzes even tiny amounts of DNA by amplifying a portion of the genetic material, through repeated heating and cooling cycles, to a level where it can be detected.
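A toy model of that amplification step (illustrative numbers only, not the parameters of the UBC assay) shows why cycling matters: the copy number roughly doubles each cycle until it crosses a detection threshold.

```python
# Toy model of PCR amplification (not the UBC assay): each heating/cooling
# cycle roughly doubles the number of target DNA copies, and the sample is
# called positive once the copy number crosses a detection threshold.

def cycles_to_detect(start_copies, threshold=1e9, efficiency=1.0):
    copies, cycles = float(start_copies), 0
    while copies < threshold and cycles < 45:      # 40-45 cycles is a typical cap
        copies *= (1.0 + efficiency)               # perfect doubling when efficiency = 1
        cycles += 1
    return cycles if copies >= threshold else None

for n in (10, 1_000, 100_000):                     # illustrative starting amounts
    print(f"{n:>7} starting copies -> detected after ~{cycles_to_detect(n)} cycles")
# Fewer starting copies need more cycles, which is why even trace amounts of
# target DNA from an insect wing or leaf fragment can be identified.
```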

Hamelin’s research was supported by Genome Canada, Genome BC and Genome Quebec and published in PLOS One. The UBC team, including lead author Arnaud Capron, tested this approach on species such as the Asian gypsy moth, white pine blister rust and sudden oak death pathogen, which are listed among the most destructive invasive pests worldwide.

“Our forestry, agriculture and horticulture are vital industries contributing billions of dollars to Canada’s economy so it’s essential that we protect them from their enemies,” added Hamelin. “With early detection and steady surveillance, we can ensure that potential problems are nipped, so to speak, in the bud.”

Story Source:

Materials provided by University of British Columbia. Note: Content may be edited for style and length.

How colliding neutron stars could shed light on universal mysteries

An important breakthrough in how we can understand dead star collisions and the expansion of the Universe has been made by an international team, led by the University of East Anglia.

They have discovered an unusual pulsar — one of deep space’s magnetized spinning neutron-star ‘lighthouses’ that emits highly focused radio waves from its magnetic poles.

The newly discovered pulsar (known as PSR J1913+1102) is part of a binary system — which means that it is locked in a fiercely tight orbit with another neutron star.

Neutron stars are the dead stellar remnants of a supernova. They are made up of the most dense matter known — packing hundreds of thousands of times the Earth’s mass into a sphere the size of a city.

In around half a billion years the two neutron stars will collide, releasing astonishing amounts of energy in the form of gravitational waves and light.

But the newly discovered pulsar is unusual because the masses of its two neutron stars are quite different — with one far larger than the other.

This asymmetric system gives scientists confidence that double neutron star mergers will provide vital clues about unsolved mysteries in astrophysics — including a more accurate determination of the expansion rate of the Universe, known as the Hubble constant.

The discovery, published today in the journal Nature, was made using the Arecibo radio telescope in Puerto Rico.

Lead researcher Dr Robert Ferdman, from UEA’s School of Physics, said: “Back in 2017, scientists at the Laser Interferometer Gravitational-Wave Observatory (LIGO) first detected the merger of two neutron stars.

“The event caused gravitational-wave ripples through the fabric of space time, as predicted by Albert Einstein over a century ago.”

Known as GW170817, this spectacular event was also seen with traditional telescopes at observatories around the world, which identified its location in a distant galaxy, 130 million light years from our own Milky Way.

Dr Ferdman said: “It confirmed that the phenomenon of short gamma-ray bursts was due to the merger of two neutron stars. And these are now thought to be the factories that produce most of the heaviest elements in the Universe, such as gold.”

The power released during the fraction of a second when two neutron stars merge is enormous — estimated to be tens of times larger than all stars in the Universe combined.

So the GW170817 event itself was not surprising. But the enormous amount of matter ejected from the merger, and its brightness, were an unexpected mystery.

Dr Ferdman said: “Most theories about this event assumed that neutron stars locked in binary systems are very similar in mass.

“Our new discovery changes these assumptions. We have uncovered a binary system containing two neutron stars with very different masses.

“These stars will collide and merge in around 470 million years, which seems like a long time, but it is only a small fraction of the age of the Universe.

“Because one neutron star is significantly larger, its gravitational influence will distort the shape of its companion star — stripping away large amounts of matter just before they actually merge, and potentially disrupting it altogether.

“This ‘tidal disruption’ ejects a larger amount of hot material than expected for equal-mass binary systems, resulting in a more powerful emission.

“Although GW170817 can be explained by other theories, we can confirm that a parent system of neutron stars with significantly different masses, similar to the PSR J1913+1102 system, is a very plausible explanation.

“Perhaps more importantly, the discovery highlights that there are many more of these systems out there — making up more than one in 10 merging double neutron star binaries.”

Co-author Dr Paulo Freire from the Max Planck Institute for Radio Astronomy in Bonn, Germany, said: “Such a disruption would allow astrophysicists to gain important new clues about the exotic matter that makes up the interiors of these extreme, dense objects.

“This matter is still a major mystery — it’s so dense that scientists still don’t know what it is actually made of. These densities are far beyond what we can reproduce in Earth-based laboratories.”

The disruption of the lighter neutron star would also enhance the brightness of the material ejected by the merger. This means that along with gravitational-wave detectors such as the US-based LIGO and the Europe-based Virgo detector, scientists will also be able to observe them with conventional telescopes.

Dr Ferdman said: “Excitingly, this may also allow for a completely independent measurement of the Hubble constant — the rate at which the Universe is expanding. The two main methods for doing this are currently at odds with each other, so this is a crucial way to break the deadlock and understand in more detail how the Universe evolved.”
