New possibilities for working with quantum information

Small particles can have an angular momentum that points in a certain direction — the spin. This spin can be manipulated by a magnetic field; this principle, for example, underlies magnetic resonance imaging as used in hospitals. An international research team has now discovered a surprising effect in a system that is particularly well suited for processing quantum information: the spins of phosphorus atoms in a piece of silicon, coupled to a microwave resonator. If these spins are cleverly excited with microwave pulses, a so-called spin echo signal can be detected after a certain time — the injected pulse signal is re-emitted as a quantum echo. Surprisingly, this spin echo does not occur only once; instead, a whole series of echoes can be detected. This opens up new possibilities for how information can be processed with quantum systems.

The experiments were carried out at the Walther-Meissner-Institute in Garching by researchers from the Bavarian Academy of Sciences and Humanities and the Technical University of Munich, while the theoretical explanation was developed at TU Wien (Vienna). The joint work has now been published in the journal Physical Review Letters.

The echo of quantum spins

“Spin echoes have been known for a long time; this is nothing unusual,” says Prof. Stefan Rotter from TU Wien (Vienna). First, a magnetic field is used to make sure that the spins of many atoms point in the same direction. Then the atoms are irradiated with an electromagnetic pulse, and suddenly their spins begin to change direction.

However, the atoms are embedded in slightly different environments. It is therefore possible that slightly different forces act on their spins. “As a result, the spin does not change at the same speed for all atoms,” explains Dr. Hans Hübl from the Bavarian Academy of Sciences and Humanities. “Some particles change their spin direction faster than others, and soon you have a wild jumble of spins with completely different orientations.”

But it is possible to rewind this apparent chaos — with the help of another electromagnetic pulse. A suitable pulse can reverse the previous spin rotation so that the spins all come together again. “You can imagine it’s a bit like running a marathon,” says Stefan Rotter. “At the start signal, all the runners are still together. As some runners are faster than others, the field of runners is pulled further and further apart over time. However, if all runners were now given the signal to return to the start, all runners would return to the start at about the same time, although faster runners have to cover a longer distance back than slower ones.”
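The marathon picture is easy to reproduce in a toy numerical model (a sketch only, not the actual resonator experiment): give each spin a random precession rate, invert the accumulated phase with a pulse at time tau, and watch the net signal vanish and then revive at exactly twice tau.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: 1000 spins, each precessing at a slightly different rate
# because of its slightly different local environment.
detunings = rng.normal(0.0, 1.0, 1000)  # arbitrary frequency units

def net_signal(t, tau):
    """|mean transverse magnetisation| at time t, with a
    phase-inverting (pi) pulse applied at time tau."""
    phase = np.where(t <= tau,
                     detunings * t,                             # free dephasing
                     -detunings * tau + detunings * (t - tau))  # rewound
    return np.abs(np.mean(np.exp(1j * phase)))

tau = 5.0
print(net_signal(0.0, tau))      # 1.0 — all spins start aligned
print(net_signal(tau, tau))      # near 0 — a "wild jumble" of directions
print(net_signal(2 * tau, tau))  # 1.0 — the echo: every spin realigns
```

Each spin's phase at twice tau is exactly zero regardless of its precession rate, which is why the echo is perfect in this idealised model.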

In the case of spins, this means that at a certain point in time all particles have exactly the same spin direction again — and this is called the “spin echo.” “Based on our experience in this field, we had already expected to be able to measure a spin echo in our experiments,” says Hans Hübl. “The remarkable thing is that we were not only able to measure a single echo, but a series of several echoes.”

The spin that influences itself

At first, it was unclear how this novel effect comes about. But a detailed theoretical analysis now made it possible to understand the phenomenon: It is due to the strong coupling between the two components of the experiment — the spins and the photons in a microwave resonator, an electrical circuit in which microwaves can only exist at certain wavelengths. “This coupling is the essence of our experiment: You can store information in the spins, and with the help of the microwave photons in the resonator you can modify it or read it out,” says Hans Hübl.

The strong coupling between the atomic spins and the microwave resonator is also responsible for the multiple echoes: If the spins of the atoms all point in the same direction in the first echo, this produces an electromagnetic signal. “Thanks to the coupling to the microwave resonator, this signal acts back on the spins, and this leads to another echo — and on and on,” explains Stefan Rotter. “The spins themselves cause the electromagnetic pulse, which is responsible for the next echo.”

The physics of the spin echo has great significance for technical applications — it is an important basic principle behind magnetic resonance imaging. The new possibilities offered by multiple echoes, such as the processing of quantum information, will now be examined in more detail. “For sure, multiple echoes in spin ensembles coupled strongly to the photons of a resonator are an exciting new tool. It will not only find useful applications in quantum information technology, but also in spin-based spectroscopy methods,” says Rudolf Gross, co-author and director of the Walther-Meissner-Institute.

Story Source:

Materials provided by Vienna University of Technology. Original written by Florian Aigner. Note: Content may be edited for style and length.



A cheaper, faster COVID-19 test

Researchers at Karolinska Institutet have developed a method for fast, cheap, yet accurate testing for COVID-19 infection. The method simplifies testing by removing expensive reaction steps, making it possible to scale up the diagnostics. This makes the method particularly attractive for places and situations with limited resources, as well as for repeated testing and for shifting resources from expensive diagnostics to other parts of the care chain. The study is published in Nature Communications.

“We started working on the issue of developing a readily available testing method as soon as we saw the developments in Asia and southern Europe, and before the situation reached crisis point in Sweden,” says principal investigator Bjorn Reinius, research leader at the Department of Medical Biochemistry and Biophysics at Karolinska Institutet. “Our method was effectively finished by the end of April, and we then made all the data freely available online.”

The spread of the new coronavirus at the end of 2019 in China’s Wuhan region quickly escalated into a global pandemic. The relatively high transmission rate and the large number of asymptomatic infections led to a huge, world-wide need for fast, affordable and effective diagnostic tests that could be performed in clinical as well as non-clinical settings.

Established diagnostic tests for COVID-19 are based on the detection of viral RNA in patient samples, such as nasal and throat swabs, from which RNA molecules must then be extracted and purified. RNA purification constitutes a major bottleneck for the testing process, requiring a great deal of equipment and logistics as well as expensive chemical compounds.

Making the current methods simpler without markedly compromising their accuracy means that more and faster testing can be carried out, which would help to reduce the rate of transmission and facilitate earlier-stage care.

The cross-departmental research group at Karolinska Institutet has now developed methods that completely circumvent the RNA-extraction procedure: once the patient sample has been inactivated by heating, which renders the virus particles non-infectious, it can pass straight to the diagnostic reaction that detects the presence of the virus.

According to the researchers, the keys to the method’s success are the virus-inactivation procedure described above and a new formulation of the solution used to collect and transport the sample material taken from patients.

“By replacing the collection buffer with simple and inexpensive buffer formulations, we can enable viral detection with high sensitivity directly from the original clinical sample, without any intermediate steps,” says Dr Reinius.

Institutions and research groups around the world have shown great interest in the method since a first version of the scientific article was published on the preprint server medRxiv. The article was read more than 15,000 times even before it was peer-reviewed by other researchers in the field and officially published in Nature Communications.

“Thanks to the low cost and the simplicity of the method, it becomes a particularly attractive option at sites and in situations with limited resources but a pressing need to test for COVID-19,” he says and adds: “I would certainly like to see this test used in Sweden too, for example for cheap periodic testing of asymptomatic people to eliminate the spread of infection.”

The study was supported by grants from the Wallenberg Foundations via the SciLifeLab/KAW National COVID-19 Research Program and from the Ragnar Soderberg Foundation.

Story Source:

Materials provided by Karolinska Institutet. Note: Content may be edited for style and length.



Parylene photonics enable future optical biointerfaces

Carnegie Mellon University’s Maysam Chamanzar and his team have invented an optical platform that could become the new standard in optical biointerfaces. He has labeled this new field of optical technology “Parylene photonics,” demonstrated in a recent paper in Microsystems & Nanoengineering.

There is a growing and unfulfilled demand for optical systems for biomedical applications. Miniaturized and flexible optical tools are needed to enable reliable ambulatory and on-demand imaging and manipulation of biological events in the body. Integrated photonic technology has mainly evolved around developing devices for optical communications. The advent of silicon photonics was a turning point in bringing optical functionalities to the small form-factor of a chip.

Research in this field has boomed in the past couple of decades. However, silicon is far too rigid a material for interacting with soft tissue in biomedical applications. Its stiffness increases the risk of tissue damage and scarring, especially as soft tissue undulates against the inflexible device during respiration and other bodily processes.

Chamanzar, an Assistant Professor of Electrical and Computer Engineering (ECE) and Biomedical Engineering, saw the pressing need for an optical platform tailored to biointerfaces with both optical capability and flexibility. His solution, Parylene photonics, is the first biocompatible and fully flexible integrated photonic platform ever made.

To create this new photonic material class, Chamanzar’s lab designed ultracompact optical waveguides by fabricating silicone (PDMS), an organic polymer with a low refractive index, around a core of Parylene C, a polymer with a much higher refractive index. The contrast in refractive index allows the waveguide to pipe light effectively, while the materials themselves remain extremely pliant. The result is a platform that is flexible, can operate over a broad spectrum of light, and is just 10 microns thick — about 1/10 the thickness of a human hair.
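As a rough illustration of why the index contrast matters, the numerical aperture of a step-index waveguide follows directly from the two refractive indices. The values below are typical literature figures for Parylene C and PDMS, not numbers quoted in the article:

```python
import math

# Assumed, typical literature refractive indices (not quoted in the
# article): Parylene C core and PDMS (silicone) cladding.
n_core = 1.64  # Parylene C
n_clad = 1.41  # PDMS

# The numerical aperture of a step-index waveguide measures how
# strongly the index contrast captures and confines light:
na = math.sqrt(n_core**2 - n_clad**2)
print(f"NA = {na:.2f}")  # ~0.84, far higher than a telecom fiber (~0.14)
```

A high numerical aperture is also what allows the tight bends mentioned later in the article without excessive light leakage.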

“We were using Parylene C as a biocompatible insulation coating for electrical implantable devices, when I noticed that this polymer is optically transparent. I became curious about its optical properties and did some basic measurements,” said Chamanzar. “I found that Parylene C has exceptional optical properties. This was the onset of thinking about Parylene photonics as a new research direction.”

Chamanzar’s design was created with neural stimulation in mind, allowing for targeted stimulation and monitoring of specific neurons within the brain. Crucial to this is the creation of 45-degree embedded micromirrors. While prior optical biointerfaces have stimulated a large swath of brain tissue beyond what could be measured, these micromirrors create a tight overlap between the volume being stimulated and the volume being recorded. They also enable integration of external light sources with the Parylene waveguides.

ECE alumna Maya Lassiter (MS, ’19), who was involved in the project, said, “Optical packaging is an interesting problem to solve because the best solutions need to be practical. We were able to package our Parylene photonic waveguides with discrete light sources using accessible packaging methods, to realize a compact device.”

The applications for Parylene photonics range far beyond optical neural stimulation, and could one day replace current technologies in virtually every area of optical biointerfaces. These tiny flexible optical devices can be inserted into the tissue for short-term imaging or manipulation. They can also be used as permanent implantable devices for long-term monitoring and therapeutic interventions.

Additionally, Chamanzar and his team are considering possible uses in wearables. Parylene photonic devices placed on the skin could be used to conform to difficult areas of the body and measure pulse rate, oxygen saturation, blood flow, cancer biomarkers, and other biometrics. As further options for optical therapeutics are explored, such as laser treatment for cancer cells, the applications for a more versatile optical biointerface will only continue to grow.

“The high index contrast between Parylene C and PDMS enables a low bend loss,” said ECE Ph.D. candidate Jay Reddy, who has been working on this project. “These devices retain 90% efficiency as they are tightly bent down to a radius of almost half a millimeter, conforming tightly to anatomical features such as the cochlea and nerve bundles.”

Another unconventional possibility for Parylene photonics is actually in communication links, bringing Chamanzar’s whole pursuit full circle. Current chip-to-chip interconnects usually use rather inflexible optical fibers, and any area in which flexibility is needed requires transferring the signals to the electrical domain, which significantly limits bandwidth. Flexible Parylene photonic cables, however, provide a promising high bandwidth solution that could replace both types of optical interconnects and enable advances in optical interconnect design.

“So far, we have demonstrated low-loss, fully flexible Parylene photonic waveguides with embedded micromirrors that enable input/output light coupling over a broad range of optical wavelengths,” said Chamanzar. “In the future, other optical devices such as microresonators and interferometers can also be implemented in this platform to enable a whole gamut of new applications.”

With Chamanzar’s recent publication marking the debut of Parylene photonics, it’s impossible to say just how far reaching the effects of this technology could be. However, the implications of this work are more than likely to mark a new chapter in the development of optical biointerfaces, similar to what silicon photonics enabled in optical communications and processing.



Chemists make cellular forces visible at the molecular scale

Scientists have developed a new technique using tools made of luminescent DNA, lit up like fireflies, to visualize the mechanical forces of cells at the molecular level. Nature Methods published the work, led by chemists at Emory University, who demonstrated their technique on human blood platelets in laboratory experiments.

“Normally, an optical microscope cannot produce images that resolve objects smaller than the length of a light wave, which is about 500 nanometers,” says Khalid Salaita, Emory professor of chemistry and senior author of the study. “We found a way to leverage recent advances in optical imaging along with our molecular DNA sensors to capture forces at 25 nanometers. That resolution is akin to being on the moon and seeing the ripples caused by raindrops hitting the surface of a lake on the Earth.”

Almost every biological process involves a mechanical component, from cell division to blood clotting to mounting an immune response. “Understanding how cells apply forces and sense forces may help in the development of new therapies for many different disorders,” says Salaita, whose lab is a leader in devising ways to image and map bio-mechanical forces.

The first authors of the paper, Joshua Brockman and Hanquan Su, did the work as Emory graduate students in the Salaita lab. Both recently received their PhDs.

The researchers turned strands of synthetic DNA into molecular tension probes that contain hidden pockets. The probes are attached to receptors on a cell’s surface. Free-floating pieces of DNA tagged with fluorescence serve as imagers. As the unanchored pieces of DNA whizz about they create streaks of light in microscopy videos.

When the cell applies force at a particular receptor site, the attached probes stretch out, causing their hidden pockets to open and release tendrils of DNA stored inside. The free-floating pieces of DNA are engineered to dock onto these DNA tendrils. When the fluorescent DNA pieces dock, they are briefly immobilized, showing up as still points of light in the microscopy videos.

Hours of microscopy video are taken of the process, then sped up to show how the points of light change over time, providing a molecular-level view of the mechanical forces of the cell.

The researchers use a firefly analogy to describe the process.

“Imagine you’re in a field on a moonless night and there is a tree that you can’t see because it’s pitch black out,” says Brockman, who graduated from the Wallace H. Coulter Department of Biomedical Engineering, a joint program of Georgia Tech and Emory, and is now a post-doctoral fellow at Harvard. “For some reason, fireflies really like that tree. As they land on all the branches and along the trunk of the tree, you could slowly build up an image of the outline of the tree. And if you were really patient, you could even detect the branches of the tree waving in the wind by recording how the fireflies change their landing spots over time.”
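The firefly picture can be sketched numerically: each transient docking event contributes a single localised point, and accumulating many such points into a histogram reconstructs the hidden shape. This is a hypothetical illustration, not the authors' analysis code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illustration: the "tree" is a ring, and each "firefly
# landing" is one transient docking event, localised with small error.
n_events = 20_000
theta = rng.uniform(0, 2 * np.pi, n_events)
r = 1.0 + rng.normal(0, 0.02, n_events)   # localisation error
x, y = r * np.cos(theta), r * np.sin(theta)

# Accumulating the landing spots into a 2D histogram builds up the
# image: each event is a single dot, but together they trace the shape.
img, _, _ = np.histogram2d(x, y, bins=64, range=[[-1.5, 1.5], [-1.5, 1.5]])

print(img[32, 32], img.max())  # the centre stays dark, the rim lights up
```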

“It’s extremely challenging to image the forces of a living cell at a high resolution,” says Su, who graduated from Emory’s Department of Chemistry and is now a post-doctoral fellow in the Salaita lab. “A big advantage of our technique is that it doesn’t interfere with the normal behavior or health of a cell.”

Another advantage, he adds, is that the DNA bases A, G, T and C, which naturally bind to one another in particular ways, can be engineered within the probe-and-imaging system to control specificity and map multiple forces at one time within a cell.

“Ultimately, we may be able to link various mechanical activities of a cell to specific proteins or to other parts of cellular machinery,” Brockman says. “That may allow us to determine how to alter the cell to change and control its forces.”

By using the technique to image and map the mechanical forces of platelets, the cells that control blood clotting at the site of a wound, the researchers discovered that platelets have a concentrated core of mechanical tension and a thin rim that continuously contracts. “We couldn’t see this pattern before but now we have a crisp image of it,” Salaita says. “How do these mechanical forces control thrombosis and coagulation? We’d like to study them more to see if they could serve as a way to predict a clotting disorder.”

Just as increasingly high-powered telescopes allow us to discover planets, stars and the forces of the universe, higher-powered microscopy allows us to make discoveries about our own biology.

“I hope this new technique leads to better ways to visualize not just the activity of single cells in a laboratory dish, but to learn about cell-to-cell interactions in actual physiological conditions,” Su says. “It’s like opening a new door onto a largely unexplored realm — the forces inside of us.”

Co-authors of the study include researchers from Children’s Healthcare of Atlanta, Ludwig Maximilian University in Munich, the Max Planck Institute and the University of Alabama at Birmingham. The work was funded by grants from the National Institutes of Health, the National Science Foundation, the Naito Foundation and the Uehara Memorial Foundation.



Water on exoplanet cloud tops could be found with hi-tech instrumentation

University of Warwick astronomers have shown that water vapour can potentially be detected in the atmospheres of exoplanets by peering literally over the tops of their impenetrable clouds.

By applying the technique to models based upon known exoplanets with clouds, the team has demonstrated in principle that high resolution spectroscopy can be used to examine the atmospheres of exoplanets that were previously too difficult to characterise because their clouds are too dense for sufficient light to pass through.

Their technique is described in a paper for the Monthly Notices of the Royal Astronomical Society and provides another method for detecting the presence of water vapour in an exoplanet’s atmosphere — as well as other chemical species that could be used in future to assess potential signs of life. The research received funding from the Science and Technology Facilities Council (STFC), part of UK Research and Innovation (UKRI).

Astronomers use light from a planet’s host star to learn what its atmosphere is composed of. As the planet passes in front of the star they observe the transmission of the stellar light as it skims through the upper atmosphere and alters its spectrum. They can then analyse this spectrum to look at wavelengths that have spectral signatures for specific chemicals. These chemicals, such as water vapour, methane and ammonia, are only present in trace quantities in these hydrogen and helium rich planets.
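The size of such a signature can be sketched with a simple area argument: a transit dims the star by the area ratio (Rp/Rs)^2, and at a wavelength where a trace gas absorbs, the planet's effective radius grows slightly, deepening the dip. All numbers below are illustrative assumptions, not values from the study:

```python
# Illustrative numbers only (assumptions, not values from the study):
r_star   = 175_000.0  # km — a cool dwarf, about a quarter the Sun's radius
r_planet = 24_600.0   # km — a Neptune-sized planet
h_band   = 500.0      # km — extra effective radius at a water-vapour line

# A transit dims the star by the area ratio (Rp/Rs)^2.
depth_continuum = (r_planet / r_star) ** 2
depth_in_band   = ((r_planet + h_band) / r_star) ** 2

print(f"continuum: {depth_continuum:.4%}")
print(f"in band:   {depth_in_band:.4%}")
print(f"extra dip: {depth_in_band - depth_continuum:.1e}")
```

The extra dip is a few hundred parts per million at most, which is why a small star and high spectral resolution both help so much.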

However, dense clouds can block that light from passing through the atmosphere, leaving astronomers with a featureless spectrum. High resolution spectroscopy is a relatively recent technique that is being used in ground-based observatories to observe exoplanets in greater detail, and the Warwick researchers wanted to explore whether this technology could be used to detect the trace chemicals present in the thin atmospheric layer right above those clouds.

While astronomers have been able to characterise the atmospheres of many larger and hotter exoplanets that orbit close to their stars, termed ‘hot Jupiters’, smaller exoplanets are now being discovered at cooler temperatures (less than 700°C). Many of these planets, which are the size of Neptune or smaller, have shown much thicker cloud.

The team modelled two previously known ‘warm Neptunes’ and simulated how the light from their stars would be detected by a high resolution spectrograph. GJ3470b is a cloudy planet that astronomers had previously been able to characterise, while GJ436b has been harder to characterise due to a much thicker cloud layer. Both simulations demonstrated that, at high resolution, chemicals such as water vapour, ammonia and methane can be detected easily with just a few nights of observations on a ground-based telescope.

The technique works differently from the method recently used to detect phosphine on Venus, but could potentially be used to search for any type of molecule in the clouds of a planet outside of our solar system, including phosphine.

Lead author Dr Siddharth Gandhi of the Department of Physics at the University of Warwick said: “We have been investigating whether ground-based high resolution spectroscopy can help us to constrain the altitude in the atmosphere where we have clouds, and constrain chemical abundances despite those clouds.

“What we are seeing is that a lot of these planets have got water vapour on them, and we’re starting to see other chemicals as well, but the clouds are preventing us from seeing these molecules clearly. We need a way to detect these species and high resolution spectroscopy is a potential way of doing that, even if there is a cloudy atmosphere.

“The chemical abundances can tell you quite a lot about how the planet may have formed because it leaves its chemical fingerprint on the molecules in the atmosphere. Because these are gas giants, detecting the molecules at the top of the atmosphere also offers a window into the internal structure as the gases mix with the deeper layers.”

The majority of observations of exoplanets have been done using space-based telescopes such as Hubble or Spitzer, and their resolution is too low to detect sufficient signal from above the clouds. High resolution spectroscopy’s advantage is that it is capable of probing a wider range of altitudes.

Dr Gandhi adds: “Quite a lot of these cooler planets are far too cloudy to get any meaningful constraints with the current generation of space telescopes. Presumably as we find more and more planets there’s going to be more cloudy planets, so it’s becoming really important to detect what’s on them. Ground based high resolution spectroscopy as well as the next generation of space telescopes will be able to detect these trace species on cloudy planets, offering exciting potential for biosignatures in the future.”

Story Source:

Materials provided by University of Warwick. Note: Content may be edited for style and length.



Researchers combine photoacoustic and fluorescence imaging in tiny package

Researchers have demonstrated a new endoscope that uniquely combines photoacoustic and fluorescent imaging in a device about the thickness of a human hair. The device could one day provide new insights into the brain by enabling blood dynamics to be measured at the same time as neuronal activity.

“Combining these imaging modalities could improve our understanding of the brain’s structure and behavior in specific conditions such as after treatment with a targeted drug,” said research team leader Emmanuel Bossy from the CNRS/ Université Grenoble Alpes Laboratoire Interdisciplinaire de Physique. “The endoscope’s small size helps minimize damage to tissue when inserting it into the brains of small animals for imaging.”

In The Optical Society (OSA) journal Biomedical Optics Express, Bossy’s research team, in collaboration with Paul C. Beard’s team from University College London, describe their new multi-modality endoscope and show that it can acquire photoacoustic and fluorescent images of red blood cells and fluorescent beads.

Two images are better than one

Acquiring fluorescence and photoacoustic images with the same device provides automatically co-registered images with complementary information. Fluorescent signals, which are created when a fluorescent marker absorbs light and re-emits it with a different wavelength, are most useful for labeling specific regions of tissue. On the other hand, photoacoustic images, which capture an acoustic wave generated after the absorption of light, do not require labels and thus can be used to image blood dynamics, for example.

The new endoscope uses a technique called optical wavefront shaping to create a focused spot of light at the imaging tip of a very small multi-mode optical fiber. “Light propagating into a multi-mode fiber is scrambled, making it impossible to see through the fiber,” said Bossy. “However, this type of fiber is advantageous for endoscopy because it is extremely small compared to the bundles of imaging fibers used for many medical endoscopic devices.”

To see through the multi-mode optical fiber, the researchers used a spatial light modulator to send specific light patterns through the fiber and create a focus spot at the imaging end. When the focus spot hits the sample, it creates a signal that can be used to build up an image point by point by raster scanning the spot over the sample. Although other researchers have used multimode fibers for fluorescence endoscopy, the new work represents the first time that photoacoustic imaging has been incorporated into this type of endoscope design.

Adding sound sensitivity

The researchers added photoacoustic imaging by incorporating an additional, very thin optical fiber with a special sensor tip that is sensitive to sound. Because commercially available fiber optic acoustic sensors are not sensitive or small enough for this application, the researchers used a very sensitive fiber optic sensor recently developed by Beard’s research team.

“The focused spot of light allows us to build the image pixel by pixel while also increasing the strength of fluorescence and photoacoustic signals because it concentrates the light at the focal spot,” explained Bossy. “This concentrated light combined with a sensitive detector made it possible to obtain images using only one laser pulse per pixel, whereas commercial fiber optic acoustic sensors would have required many laser pulses.”

The researchers fabricated a prototype microendoscope with a cross-section of just 250 by 125 microns and used it to image fluorescent beads and blood cells with both imaging modalities. They successfully detected multiple 1-micron fluorescent beads and individual 6-micron red blood cells.

Because fluorescence endoscopy in rodent brains has been performed by other scientists, the researchers are confident that their dual-modality device will work under similar conditions. They are now working to increase the device’s acquisition speed, with a goal of acquiring a few images per second.

Story Source:

Materials provided by The Optical Society. Note: Content may be edited for style and length.



Astronomers discover an Earth-sized ‘pi planet’ with a 3.14-day orbit

In a delightful alignment of astronomy and mathematics, scientists at MIT and elsewhere have discovered a “pi Earth” — an Earth-sized planet that zips around its star every 3.14 days, in an orbit reminiscent of the universal mathematics constant.

The researchers discovered signals of the planet in data taken in 2017 by the NASA Kepler Space Telescope’s K2 mission. By zeroing in on the system earlier this year with SPECULOOS, a network of ground-based telescopes, the team confirmed that the signals were of a planet orbiting its star. And indeed, the planet appears to still be circling its star today, with a pi-like period, every 3.14 days.

“The planet moves like clockwork,” says Prajwal Niraula, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), who is the lead author of a paper published today in the Astronomical Journal.

“Everyone needs a bit of fun these days,” says co-author Julien de Wit, of both the paper title and the discovery of the pi planet itself.

Planet extraction

The new planet is labeled K2-315b; it’s the 315th planetary system discovered within K2 data — just one system shy of an even more serendipitous place on the list.

The researchers estimate that K2-315b has a radius 0.95 times that of Earth, making it just about Earth-sized. It orbits a cool, low-mass star that is about one-fifth the size of the sun. The planet circles its star every 3.14 days, moving at a blistering 81 kilometers per second, or about 181,000 miles per hour.
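These quoted figures are easy to cross-check: converting 81 km/s to miles per hour reproduces the article's number, and, assuming a roughly circular orbit, v = 2πa/P implies an orbit only a few hundredths of an astronomical unit across.

```python
import math

v_kms = 81.0              # orbital speed quoted in the article
period_s = 3.14 * 86_400  # the 3.14-day period, in seconds

# 81 km/s in miles per hour:
mph = v_kms * 1000 / 1609.344 * 3600
print(f"{mph:,.0f} mph")  # ≈ 181,000 mph, matching the article

# Assuming a roughly circular orbit, v = 2*pi*a / P gives the radius:
a_km = v_kms * period_s / (2 * math.pi)
print(f"a ≈ {a_km / 1.496e8:.3f} AU")  # a few hundredths of an AU
```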

While its mass is yet to be determined, scientists suspect that K2-315b is terrestrial, like the Earth. But the pi planet is likely not habitable, as its tight orbit brings the planet close enough to its star to heat its surface up to 450 kelvins, or around 350 degrees Fahrenheit — perfect, as it turns out, for baking actual pie.

“This would be too hot to be habitable in the common understanding of the phrase,” says Niraula, who adds that the excitement around this particular planet, aside from its associations with the mathematical constant pi, is that it may prove a promising candidate for studying the characteristics of its atmosphere.

“We now know we can mine and extract planets from archival data, and hopefully there will be no planets left behind, especially these really important ones that have a high impact,” says de Wit, who is an assistant professor in EAPS, and a member of MIT’s Kavli Institute for Astrophysics and Space Research.

Niraula and de Wit’s MIT co-authors include Benjamin Rackham and Artem Burdanov, along with a team of international collaborators.

Dips in the data

The researchers are members of SPECULOOS, an acronym for The Search for habitable Planets EClipsing ULtra-cOOl Stars, and named for a network of four 1-meter telescopes in Chile’s Atacama Desert, which scan the sky across the southern hemisphere. Most recently, the network added a fifth telescope, which is the first to be located in the northern hemisphere, named Artemis — a project that was spearheaded by researchers at MIT.

The SPECULOOS telescopes are designed to search for Earth-like planets around nearby, ultracool dwarfs — small, dim stars that offer astronomers a better chance of spotting an orbiting planet and characterizing its atmosphere, as these stars lack the glare of much larger, brighter stars.

“These ultracool dwarfs are scattered all across the sky,” Burdanov says. “Targeted ground-based surveys like SPECULOOS are helpful because we can look at these ultracool dwarfs one by one.”

In particular, astronomers look at individual stars for signs of transits, or periodic dips in a star’s light, that signal a possible planet crossing in front of the star, and briefly blocking its light.

Earlier this year, Niraula came upon a cool dwarf, slightly warmer than the commonly accepted threshold for an ultracool dwarf, in data collected by the K2 campaign — the Kepler Space Telescope’s second observing mission, which monitored slivers of the sky as the spacecraft orbited around the sun.

Over several months in 2017, the Kepler telescope observed a part of the sky that included the cool dwarf, labeled in the K2 data as EPIC 249631677. Niraula combed through this period and found around 20 dips in the star’s light that seemed to repeat every 3.14 days.
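The basic idea behind finding a repeating dip can be illustrated with a toy phase-folding search. This is not the team’s actual pipeline; the synthetic light curve, noise level, binning, and trial-period grid below are all assumptions for illustration:

```python
import random

random.seed(1)

# Synthetic K2-like light curve: flat flux with a 0.5%-deep, 1-hour transit
# every 3.14 days, sampled every 30 minutes for 80 days, plus Gaussian noise.
TRUE_PERIOD = 3.14          # days (the value reported for K2-315b)
DEPTH, DURATION = 0.005, 1.0 / 24
CADENCE, SPAN = 0.5 / 24, 80.0

lightcurve = []
t = 0.0
while t < SPAN:
    dip = DEPTH if (t % TRUE_PERIOD) < DURATION else 0.0
    lightcurve.append((t, 1.0 - dip + random.gauss(0, 0.001)))
    t += CADENCE

def folded_depth(period, n_bins=50):
    """Fold the light curve on a trial period; return how far the deepest
    phase bin falls below the average bin level (large = transits line up)."""
    bins = [[] for _ in range(n_bins)]
    for time, flux in lightcurve:
        bins[int((time % period) / period * n_bins)].append(flux)
    means = [sum(b) / len(b) for b in bins if b]
    return sum(means) / len(means) - min(means)

# Scan trial periods between 2.5 and 4 days; the period that stacks all the
# dips on top of each other gives the deepest folded transit.
trials = [2.5 + 0.002 * i for i in range(750)]
best = max(trials, key=folded_depth)
print(round(best, 2))  # recovers a period near 3.14 days
```

Real period searches (e.g. box-least-squares methods) are more sophisticated, but the principle is the same: only the correct period makes the roughly 20 individual dips add up coherently.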

The team analyzed the signals, testing different potential astrophysical scenarios for their origin, and confirmed that they likely came from a transiting planet rather than some other phenomenon, such as a binary system of two spiraling stars.

The researchers then planned to get a closer look at the star and its orbiting planet with SPECULOOS. But first, they had to identify a window of time when they would be sure to catch a transit.

“Nailing down the best night to follow up from the ground is a little bit tricky,” says Rackham, who developed a forecasting algorithm to predict when a transit might next occur. “Even when you see this 3.14 day signal in the K2 data, there’s an uncertainty to that, which adds up with every orbit.”
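Rackham’s actual algorithm is not described here, but the way the uncertainty “adds up with every orbit” follows from simple error propagation: if the n-th transit occurs at T_n = T0 + n·P, then independent errors on the reference epoch T0 and on the period P combine in quadrature. A sketch with made-up error values:

```python
import math

def transit_time_uncertainty(n_orbits, sigma_t0_min, sigma_p_min):
    """1-sigma uncertainty (in minutes) on the time of the n-th transit,
    assuming independent errors on the reference epoch and the period:
    T_n = T0 + n * P  =>  sigma_Tn = sqrt(sigma_T0^2 + (n * sigma_P)^2)."""
    return math.sqrt(sigma_t0_min ** 2 + (n_orbits * sigma_p_min) ** 2)

# Illustrative (invented) errors: 10 minutes on the epoch, 0.5 minutes on the
# period. K2 observed in 2017; by February 2020 roughly three years of
# 3.14-day orbits had elapsed.
n = round(3 * 365 / 3.14)
print(n, round(transit_time_uncertainty(n, 10.0, 0.5)))  # -> 349 175
```

Even a half-minute period error balloons to about three hours of timing uncertainty after ~350 orbits, which is why pinning down the follow-up window required a dedicated forecast.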

With Rackham’s forecasting algorithm, the group narrowed in on several nights in February 2020 during which they were likely to see the planet crossing in front of its star. They then pointed SPECULOOS’ telescopes in the direction of the star and were able to see three clear transits: two with the network’s Southern Hemisphere telescopes, and the third from Artemis, in the Northern Hemisphere.

The researchers say the new pi planet may be a promising candidate to follow up with the James Webb Space Telescope (JWST), to see details of the planet’s atmosphere. For now, the team is looking through other datasets, such as those from NASA’s TESS mission, and is also directly observing the skies with Artemis and the rest of the SPECULOOS network for signs of Earth-like planets.

“There will be more interesting planets in the future, just in time for JWST, a telescope designed to probe the atmosphere of these alien worlds,” says Niraula. “With better algorithms, hopefully one day, we can look for smaller planets, even as small as Mars.”

This research was supported in part by the Heising-Simons Foundation, and the European Research Council.



A computer predicts your thoughts, creating images based on them

Researchers at the University of Helsinki have developed a technique in which a computer models visual perception by monitoring human brain signals. In a way, it is as if the computer tries to imagine what a human is thinking about. As a result of this imagining, the computer is able to produce entirely new information, such as fictional images that were never before seen.

The technique is based on a novel brain-computer interface. Previous brain-computer interfaces have typically performed one-way communication from brain to computer, such as spelling individual letters or moving a cursor.

As far as is known, the new study is the first where both the computer’s presentation of the information and brain signals were modelled simultaneously using artificial intelligence methods. Images that matched the visual characteristics that participants were focusing on were generated through interaction between human brain responses and a generative neural network.

The study was published in the Scientific Reports journal in September. Scientific Reports is an online multidisciplinary, open-access journal from the publishers of Nature.

Neuroadaptive generative modelling

The researchers call this method neuroadaptive generative modelling. A total of 31 volunteers participated in a study that evaluated the effectiveness of the technique. Participants were shown hundreds of AI-generated images of diverse-looking people while their EEG was recorded.

The subjects were asked to concentrate on certain features, such as faces that looked old or were smiling. While the subjects looked at a rapidly presented series of face images, their EEG signals were fed to a neural network, which inferred whether the brain registered any given image as matching what the subjects were looking for.

Based on this information, the neural network adapted its estimate of the kind of faces people were thinking of. Finally, the participants evaluated the images generated by the computer: they matched the features the participants were thinking of with 83 per cent accuracy.
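The closed loop described above can be caricatured in a few lines. In this toy simulation the “EEG classifier” is replaced by a scoring function against a hidden target vector, and the generative network is reduced to the latent vector itself; every name and number here is an illustrative assumption, not part of the published method:

```python
import random

random.seed(0)
DIM = 8

# Hidden latent code of the face the participant is "thinking of" (in the
# real study: a point in the latent space of a generative neural network).
target = [random.uniform(-1, 1) for _ in range(DIM)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def eeg_scores(candidates):
    """Stand-in for the EEG classifier: scores each shown image by how
    strongly the (simulated) brain response flags it as a match."""
    return [-distance(c, target) for c in candidates]

estimate = [0.0] * DIM
sigma = 1.0
for _ in range(5):  # closed-loop iterations
    # 1. "Show" 200 candidate faces sampled around the current estimate.
    candidates = [[e + random.gauss(0, sigma) for e in estimate]
                  for _ in range(200)]
    # 2. Keep the 20 candidates with the strongest simulated brain response.
    scores = eeg_scores(candidates)
    elite = [c for _, c in sorted(zip(scores, candidates), reverse=True)[:20]]
    # 3. Adapt: move the estimate to the mean of the flagged candidates.
    estimate = [sum(c[i] for c in elite) / 20 for i in range(DIM)]
    sigma *= 0.5  # narrow the search as the estimate sharpens

print(round(distance(estimate, target), 2))  # residual distance, far below the start
```

The point of the sketch is the feedback structure: the computer never reads the target directly; it only sees which of its own proposals triggered a matching response, and that is enough to home in on the imagined features.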

“The technique combines natural human responses with the computer’s ability to create new information. In the experiment, the participants were only asked to look at the computer-generated images. The computer, in turn, modelled the images displayed and the human reaction toward the images by using human brain responses. From this, the computer can create an entirely new image that matches the user’s intention,” says Tuukka Ruotsalo, Academy of Finland Research Fellow at the University of Helsinki, Finland and Associate Professor at the University of Copenhagen, Denmark.

Unconscious attitudes may be exposed

Generating images of the human face is only one example of the technique’s potential uses. One practical benefit of the study may be that computers can augment human creativity.

“If you want to draw or illustrate something but are unable to do so, the computer may help you to achieve your goal. It could just observe the focus of attention and predict what you would like to create,” Ruotsalo says. Beyond that, the researchers believe the technique may be used to gain an understanding of perception and the underlying processes in our mind.

“The technique does not recognise thoughts but rather responds to the associations we have with mental categories. Thus, while we are not able to find out the identity of a specific ‘old person’ a participant was thinking of, we may gain an understanding of what they associate with old age. We, therefore, believe it may provide a new way of gaining insight into social, cognitive and emotional processes,” says Senior Researcher Michiel Spapé.

According to Spapé, this is also interesting from a psychological perspective.

“One person’s idea of an elderly person may be very different from another’s. We are currently investigating whether our technique might expose unconscious associations, for example by looking at whether the computer always renders old people as, say, smiling men.”

Story Source:

Materials provided by University of Helsinki. Original written by Aino Pekkarinen. Note: Content may be edited for style and length.



Why there is no speed limit in the superfluid universe

Physicists from Lancaster University, building on earlier Lancaster research, have established why objects moving through superfluid helium-3 lack a speed limit.

Helium-3 is a rare isotope of helium with one neutron fewer than common helium-4. It becomes superfluid at extremely low temperatures, enabling unusual properties such as a lack of friction for moving objects.

It was thought that the speed of objects moving through superfluid helium-3 was fundamentally limited to the critical Landau velocity, and that exceeding this limit would destroy the superfluid. Prior experiments at Lancaster found that this is not a strict rule: objects can move at much greater speeds without destroying the fragile superfluid state.

Now scientists from Lancaster University have found the reason for the absence of the speed limit: exotic particles that stick to all surfaces in the superfluid.

The discovery may guide applications in quantum technology, even quantum computing, where multiple research groups already aim to make use of these unusual particles.

To shake the bound particles into sight, the researchers cooled superfluid helium-3 to within one ten-thousandth of a degree of absolute zero (0.0001 K, or about -273.15°C). They then moved a wire through the superfluid at high speed and measured how much force was needed to move it. Apart from an extremely small force related to moving the bound particles around when the wire starts to move, the measured force was zero.

Lead author Dr Samuli Autti said: “Superfluid helium-3 feels like vacuum to a rod moving through it, although it is a relatively dense liquid. There is no resistance, none at all. I find this very intriguing.”

PhD student Ash Jennings added: “By making the rod change its direction of motion we were able to conclude that the rod will be hidden from the superfluid by the bound particles covering it, even when its speed is very high.”

“The bound particles initially need to move around to achieve this, and that exerts a tiny force on the rod, but once this is done, the force just completely disappears,” said Dr Dmitry Zmeev, who supervised the project.

The Lancaster researchers included Samuli Autti, Sean Ahlstrom, Richard Haley, Ash Jennings, George Pickett, Malcolm Poole, Roch Schanen, Viktor Tsepelin, Jakub Vonka, Tom Wilcox, Andrew Woods and Dmitry Zmeev. The results are published in Nature Communications.

Story Source:

Materials provided by Lancaster University. Note: Content may be edited for style and length.



Biologists create new genetic systems to neutralize gene drives

In the past decade, researchers have engineered an array of new tools that control the balance of genetic inheritance. Based on CRISPR technology, such gene drives are poised to move from the laboratory into the wild where they are being engineered to suppress devastating diseases such as mosquito-borne malaria, dengue, Zika, chikungunya, yellow fever and West Nile. Gene drives carry the power to immunize mosquitoes against malarial parasites, or act as genetic insecticides that reduce mosquito populations.

Although the newest gene drives have been proven to spread efficiently as designed in laboratory settings, concerns have been raised regarding the safety of releasing such systems into wild populations. Questions have emerged about the predictability and controllability of gene drives and whether, once let loose, they can be recalled in the field if they spread beyond their intended application region.

Now, scientists at the University of California San Diego and their colleagues have developed two new active genetic systems that address such risks by halting or eliminating gene drives in the wild. On Sept. 18, 2020, in the journal Molecular Cell, research led by Xiang-Ru Xu, Emily Bulger and Valentino Gantz in the Division of Biological Sciences offers two new solutions based on elements developed in the common fruit fly.

“One way to mitigate the perceived risks of gene drives is to develop approaches to halt their spread or to delete them if necessary,” said Distinguished Professor Ethan Bier, the paper’s senior author and science director for the Tata Institute for Genetics and Society. “There’s been a lot of concern that there are so many unknowns associated with gene drives. Now we have saturated the possibilities, both at the genetic and molecular levels, and developed mitigating elements.”

The first neutralizing system, called e-CHACR (erasing Constructs Hitchhiking on the Autocatalytic Chain Reaction), is designed to halt the spread of a gene drive by “shooting it with its own gun.” e-CHACRs use the CRISPR enzyme Cas9 carried on a gene drive to copy themselves while simultaneously mutating and inactivating the Cas9 gene. Xu says an e-CHACR can be placed anywhere in the genome.

“Without a source of Cas9, it is inherited like any other normal gene,” said Xu. “However, once an e-CHACR confronts a gene drive, it inactivates the gene drive in its tracks and continues to spread across several generations ‘chasing down’ the drive element until its function is lost from the population.”

The second neutralizing system, called ERACR (Element Reversing the Autocatalytic Chain Reaction), is designed to eliminate the gene drive altogether. ERACRs are inserted at the site of the gene drive, where they use the gene drive’s own Cas9 to cut on either side of the drive, excising it. Once the gene drive is deleted, the ERACR copies itself and replaces the gene drive.

“If the ERACR is also given an edge by carrying a functional copy of a gene that is disrupted by the gene drive, then it races across the finish line, completely eliminating the gene drive with unflinching resolve,” said Bier.

The researchers rigorously tested and analyzed e-CHACRs and ERACRs, as well as the resulting DNA sequences, in meticulous detail at the molecular level. Bier estimates that the research team, which includes mathematical modelers from UC Berkeley, spent a combined 15 years of effort to comprehensively develop and analyze the new systems. Still, he cautions that unforeseen scenarios could emerge, and the neutralizing systems should not lend a false sense of security to field-implemented gene drives.

“Such braking elements should just be developed and kept in reserve in case they are needed since it is not known whether some of the rare exceptional interactions between these elements and the gene drives they are designed to corral might have unintended activities,” he said.

According to Bulger, gene drives have enormous potential to alleviate suffering, but responsibly deploying them depends on having control mechanisms in place should unforeseen consequences arise. ERACRs and e-CHACRs offer ways to stop a gene drive from spreading and, in the case of the ERACR, can potentially revert an engineered DNA sequence to a state much closer to the naturally occurring sequence.

“Because ERACRs and e-CHACRs do not possess their own source of Cas9, they will only spread as far as the gene drive itself and will not edit the wild type population,” said Bulger. “These technologies are not perfect, but we now have a much more comprehensive understanding of why and how unintended outcomes influence their function and we believe they have the potential to be powerful gene drive control mechanisms should the need arise.”
