Unraveling a spiral stream of dusty embers from a massive binary stellar forge

With almost two decades of mid-infrared (IR) imaging from the largest observatories around the world, including the Subaru Telescope, a team of astronomers was able to capture the spiral motion of newly formed dust streaming from the massive, evolved binary star system Wolf-Rayet (WR) 112. Massive binary star systems, along with supernova explosions, are regarded as sources of dust in the Universe from its early history, but the process of dust production and the amount of dust ejected are still open questions. WR 112 is a binary system composed of a massive star in the very late stages of stellar evolution, losing a large amount of mass, and another massive star on the main sequence. Dust is expected to form in the region where the stellar winds from the two stars collide. The study reveals the motion of the dusty outflow from the system and identifies WR 112 as a highly efficient dust factory that produces an entire Earth mass of dust every year.

Dust formation, which is typically seen in the gentle outflows from cool stars with a Sun-like mass, is somewhat unusual in the extreme environment around massive stars and their violent winds. However, interesting things happen when the fast winds of two massive stars in a binary interact.

“When the two winds collide, all Hell breaks loose, including the release of copious shocked-gas X-rays, but also the (at first blush surprising) creation of copious amounts of carbon-based aerosol dust particles in those binaries in which one of the stars has evolved to He-burning, which produces 40% C in their winds,” says co-author Anthony Moffat (University of Montreal). This dust formation process is exactly what is occurring in WR 112.

This binary dust formation phenomenon has been revealed in other systems such as WR 104 by co-author Peter Tuthill (University of Sydney). WR 104, in particular, reveals an elegant trail of dust resembling a ‘pinwheel’ that traces the orbital motion of the central binary star system.

However, the dusty nebula around WR 112 is far more complex than a simple pinwheel pattern. Decades of multi-wavelength observations produced conflicting interpretations of the dusty outflow and orbital motion of WR 112. After almost 20 years of uncertainty about WR 112, images from the COMICS instrument on the Subaru Telescope taken in October 2019 provided the final — and unexpected — piece of the puzzle.

“We published a study in 2017 on WR 112 that suggested the dusty nebula was not moving at all, so I thought our COMICS observation would confirm this,” explained lead author Ryan Lau (ISAS/JAXA). “To my surprise, the COMICS image revealed that the dusty shell had definitely moved since the last image we took with the VLT in 2016. It confused me so much that I couldn’t sleep after the observing run — I kept flipping through the images until it finally registered in my head that the spiral looked like it was tumbling towards us.”

Lau collaborated with researchers at the University of Sydney including Prof. Peter Tuthill and undergraduate Yinuo Han, who are experts at modeling and interpreting the motion of the dusty spirals from binary systems like WR 112. “I shared the images of WR 112 with Peter and Yinuo, and they were able to produce an amazing preliminary model that confirmed that the dusty spiral stream is revolving in our direction along our line of sight,” said Lau.

The animation above shows a comparison between the models of WR 112 created by the research team and the actual mid-IR observations. The model images show remarkable agreement with the real images of WR 112. The models and the series of imaging observations revealed that the rotation period of this dusty “edge-on” spiral (and the orbital period of the central binary system) is 20 years.

With the revised picture of WR 112, the research team was able to deduce how much dust this binary system is forming. “Spirals are repetitive patterns, so since we understand how much time it takes to form one full dusty spiral turn (~20 years), we can actually trace the age of dust produced by the binary stars at the center of the spiral,” says Lau. He points out that “there is freshly formed dust at the very central core of the spiral, while the dust we see that’s 4 spiral turns away is about 80 years old. Therefore, we can essentially trace out an entire human lifetime along the dusty spiral stream revealed in our observations. So I could actually pinpoint on the images the dust that was formed when I was born (right now, it’s somewhere in between the first and second spiral turns).”

To their surprise, the team found WR 112 is a highly efficient dust factory that outputs dust at a rate of 3×10⁻⁶ solar masses per year, which is equivalent to producing an entire Earth mass of dust every year. This was unusual given WR 112’s 20-yr orbital period: the most efficient dust producers in this type of WR binary star system tend to have orbital periods of less than a year, like WR 104 with its 220-day period. WR 112 therefore demonstrates the diversity of WR binary systems that are capable of efficiently forming dust, and highlights their potential role as significant sources of dust not only in our Galaxy but in galaxies beyond our own.
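The quoted rate can be sanity-checked with a quick unit conversion (my own back-of-envelope arithmetic, not code from the study):

```python
# Convert the quoted dust-production rate of 3e-6 solar masses per year
# into Earth masses per year.
M_SUN = 1.989e30    # kg, solar mass
M_EARTH = 5.972e24  # kg, Earth mass

rate_solar = 3e-6                 # solar masses of dust per year
rate_kg = rate_solar * M_SUN      # kg of dust per year
rate_earth = rate_kg / M_EARTH    # Earth masses per year

print(f"{rate_earth:.2f} Earth masses of dust per year")
```

The conversion comes out almost exactly to one Earth mass per year, matching the article's headline figure.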

Lastly, these results demonstrate the discovery potential of multi-epoch mid-IR imaging with the MIMIZUKU instrument on the upcoming Tokyo Atacama Observatory (TAO). The mid-IR results from this study notably utilize the largest observatories in the world and set the stage for the next decade of astronomical discoveries with 30-m class telescopes and the upcoming James Webb Space Telescope.

Go to Source


Modern theory from ancient impacts

Around 4 billion years ago, the solar system was far less hospitable than we find it now. Many of the large bodies we know and love were present, but probably looked considerably different, especially the Earth. We know from a range of sources, including ancient meteorites and planetary geology, that around this time there were vastly more collisions between, and impacts from, asteroids originating in the Mars-Jupiter asteroid belt.

Knowledge of these events is especially important to us, as the time period in question is not only when the surface of our planet was taking on a more recognizable form, but was also when life was just getting started. With more accurate details of Earth’s rocky history, it could help researchers answer some long-standing questions concerning the mechanisms responsible for life, as well as provide information for other areas of life science.

“Meteorites provide us with the earliest history of ourselves,” said Professor Yuji Sano from the Atmosphere and Ocean Research Institute at the University of Tokyo. “This is what fascinated me about them. By studying properties, such as radioactive decay products, of meteorites that fell to Earth, we can deduce when they came and where they came from. For this study we examined meteorites that came from Vesta, the second-largest asteroid after the dwarf planet Ceres.”
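As a rough illustration of the dating technique Sano describes, the standard radiometric age equation can be sketched as follows; the isotope system and half-life below (potassium-40's roughly 1.25-billion-year half-life) are chosen for illustration and are not values from the study:

```python
import math

def radiometric_age(daughter_parent_ratio, half_life_yr):
    """Age implied by the accumulated daughter/parent isotope ratio:
    t = ln(1 + D/P) / lambda, with lambda = ln(2) / half-life."""
    decay_const = math.log(2) / half_life_yr
    return math.log(1.0 + daughter_parent_ratio) / decay_const

# A daughter/parent ratio of 1.0 means exactly one half-life has elapsed.
age = radiometric_age(1.0, 1.25e9)
print(f"age ~ {age:.3e} years")
```

Measuring the isotope ratios in a meteorite and plugging them into this kind of relation is how an impact or crystallization age of ~4 billion years is inferred.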

Sano and his team found evidence that Vesta was hit by multiple impacting bodies around 4.4 billion to 4.15 billion years ago. This is earlier than 3.9 billion years ago, which is when the late heavy bombardment (LHB) is thought to have occurred. Current evidence for the LHB comes from lunar rocks collected during the Apollo moon missions of the 1970s, as well as other sources. But these new studies are improving upon previous models and will pave the way for an up-to-date database of early solar impact records.

“That Vesta-origin meteorites clearly show us impacts earlier than the LHB raises the question, ‘Did the late heavy bombardment truly occur?'” said Sano. “It seems to us that early solar system impacts peaked sooner than the LHB and reduced smoothly with time. It may not have been the cataclysmic period of chaos that current models describe.”

Story Source:

Materials provided by University of Tokyo. Note: Content may be edited for style and length.

Go to Source


Using laser to cool polyatomic molecule

After firing the lasers and bombarding the molecules with light, the scientists gathered around the camera to check the results. By seeing how far the cloud of cold molecules had expanded, they would know almost instantly whether they were on track to chart new paths in quantum science by being the first to cool (that is, slow down) a particularly complex, six-atom molecule using nothing but light.

“When we started out on the project we were optimistic but were not sure that we would see something that would show a very dramatic effect,” said Debayan Mitra, a postdoctoral researcher in Harvard’s Doyle Research Group. “We thought that we would need more evidence to prove that we were actually cooling the molecule, but then when we saw the signal, it was like, ‘Yeah, nobody will doubt that.’ It was big and it was right there.”

The study led by Mitra and graduate student Nathaniel B. Vilas is the focus of a new paper published in Science. In it, the group describes using a novel method combining cryogenic technology and direct laser light to cool the nonlinear polyatomic molecule calcium monomethoxide (CaOCH3) to just above absolute zero.

The scientists believe their experiment marks the first time such a large complex molecule has been cooled using laser light and say it unlocks new avenues of study in quantum simulation and computation, particle physics, and quantum chemistry.

“These kinds of molecules have structure that is ubiquitous in chemical and biological systems,” said John M. Doyle, the Henry B. Silsbee Professor of Physics and senior author on the paper. “Controlling perfectly their quantum states is basic research that could shed light on fundamental quantum processes in these building blocks of nature.”

Lasers have been used to control atoms and molecules — the eventual building blocks of a quantum computer — since the 1960s, and their use has since revolutionized atomic, molecular, and optical physics.

The technique essentially works by firing a laser at them, causing the atoms and molecules to absorb the photons from the light and recoil in the opposite direction. This eventually slows them down and even stops them in their tracks. When this happens quantum mechanics becomes the dominant way to describe and study their motion.
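Rough numbers make the photon-recoil picture concrete. This is my own illustration, not the paper's calculation, and the laser wavelength is an assumed visible-range value rather than the actual CaOCH3 transition:

```python
# Each absorbed photon changes a molecule's velocity by h / (lambda * m).
H = 6.626e-34        # Planck constant, J s
AMU = 1.6605e-27     # atomic mass unit, kg

m = 71 * AMU         # CaOCH3: Ca(40) + O(16) + C(12) + 3*H(1) ~ 71 amu
wavelength = 600e-9  # assumed visible-laser wavelength, m

recoil_v = H / (wavelength * m)  # velocity change per photon, m/s
print(f"~{recoil_v * 1000:.0f} mm/s of velocity change per photon")
```

At roughly 10 mm/s per photon, slowing a molecule moving at metres per second takes on the order of hundreds of absorption-emission cycles, which is why the molecule must keep returning to a state that still interacts with the light.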

“The idea is that on one end of the spectrum there are atoms that have very few quantum states,” Doyle said. Because of this, these atoms are easy to control with light since they often remain in the same quantum state after absorbing and emitting light, he said. “With molecules they have motion that does not occur in atoms — vibrations and rotations. When the molecule absorbs and emits light this process can sometimes make the molecule spin around or vibrate internally. When this happens, it is now in a different quantum state and absorbing and emitting light no longer works [to cool it]. We have to ‘calm the molecule down,’ get rid of its extra vibration before it can interact with the light the way we want.”

Scientists — including those from the Doyle Group which is part of the Harvard Department of Physics and a member of the Harvard-MIT Center for Ultracold Atoms — have been able to cool a number of molecules using light, such as diatomic and triatomic molecules which each have two or three atoms.

Polyatomic molecules, on the other hand, are much more complex and have proven much more difficult to manipulate because of all the vibrations and rotations.

To get around this, the group used a method they pioneered to cool diatomic and triatomic molecules. Researchers set up a sealed cryogenic chamber where they cooled helium to below four kelvin (that’s close to 450 degrees below zero in Fahrenheit). This chamber essentially acts as a fridge, and it is inside this fridge that the scientists created the molecule CaOCH3. Right off the bat, it was already moving at a much slower velocity than it would normally, making it ideal for further cooling.

Next came the lasers. They turned on two beams of light coming at the molecule from opposing directions. These counterpropagating lasers prompted a reaction known as Sisyphus cooling. Mitra says the name is fitting since in Greek mythology Sisyphus is punished by having to roll a giant boulder up a hill for eternity, only for it to roll back down when he nears the top.

The same principle happens here with molecules, Mitra said. When two identical laser beams are firing in opposite directions, they form a standing wave of light. There are places where the light is less intense and there are places where it is stronger. This wave is what forms a metaphorical hill for the molecules.

The “molecule starts at the bottom of a hill formed by the counter-propagating laser beams and it starts climbing that hill just because it has some kinetic energy in it and as it climbs that hill, slowly, the kinetic energy that was its velocity gets converted into potential energy and it slows down and slows down and slows down until it gets to the top of the hill where it’s the slowest,” he said.

At that point, the molecule moves closer to a region where the light intensity is high, where it will more likely absorb a photon and rolls back down to the opposite side. “All they can do is keep doing this again and again and again,” Mitra said.

By looking at images from cameras placed outside the sealed chamber, the scientists then inspect how much a cloud of these molecules expands as it travels through the system. The smaller the width of the cloud, the less kinetic energy it has — therefore the colder it is.

Analyzing the data further, they saw just how cold: they took the molecules from 22 millikelvin to about 1 millikelvin. In other words, just a few thousandths of a degree above absolute zero.
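The link between cloud width and temperature follows from kinetic theory: the one-dimensional velocity spread of a thermal cloud is sqrt(kB·T/m), so a colder cloud expands less in a given flight time. A minimal sketch with illustrative numbers (not the paper's analysis):

```python
import math

KB = 1.381e-23       # Boltzmann constant, J/K
AMU = 1.6605e-27     # atomic mass unit, kg
m = 71 * AMU         # approximate CaOCH3 mass

def velocity_spread(T):
    """1-D thermal velocity spread sqrt(kB * T / m), in m/s."""
    return math.sqrt(KB * T / m)

v_hot = velocity_spread(22e-3)   # before Sisyphus cooling, 22 mK
v_cold = velocity_spread(1e-3)   # after cooling, ~1 mK
print(f"{v_hot:.2f} m/s -> {v_cold:.2f} m/s")
```

Dropping the temperature by a factor of ~22 shrinks the velocity spread (and hence the imaged cloud width after a fixed expansion time) by sqrt(22), roughly a factor of 4.7.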

In the paper, the scientists lay out ways to get the molecule even colder and describe some of the doors the work opens across a range of modern physical and chemical research frontiers. The scientists explain that the study is a proof of concept that their method could be used to cool other carefully chosen complex molecules to help advance quantum science.

“What we did here is sort of extending the state of the art,” Mitra said. “It’s always been debated whether we would ever have technology that will be good enough to control complex molecules at the quantum level. This particular experiment is just a stepping stone.”

Go to Source


12 Popular Home Automation APIs

The Internet of Things (IoT) is here and is changing the way we interact with the world around us, starting at home. Home Automation applications enable us to control nearly everything in our homes, including lighting, refrigerators, microwave ovens, sprinkler systems, smells, TV, music, thermostats, automobiles, security systems, and even the kitchen sink.

Developers wishing to get in on this rapidly growing market would need to find suitable APIs for creating applications.

What is a Home Automation API?

A Home Automation API is an Application Programming Interface that developers can utilize to programmatically access smart home software and devices. A great place to discover these APIs is in the Home Automation category on ProgrammableWeb.

Major players in the Home Automation arena include Amazon, Google, and Apple, but there are plenty of other choices. Below are the twelve most popular Home Automation APIs based on web traffic on ProgrammableWeb.

1. Google Chromecast API

Google Chromecast is a Google device that displays mobile entertainment on a TV screen. The API offers ways to interact with the sender (the media content) and with the receiver (the display screen). The Sender API reference covers parameters for configuration, requests, and everything related to media (volume, display, TV show). The Receiver API consists of parameters for the cast receiver, media playback, and possible properties (connected, inactivity, standby). Additional information about how to use the Cast API supported by Google includes sample apps and developer resources.

2. Nest API

Nest provides a variety of smart devices for home automation. The Nest API allows users to model a physical home or building as a structure with Nest Learning Thermostats, Nest Protects, and Nest Cams as devices within that structure. Developers can use this API to check the status of devices and to control them programmatically. Nest is now part of the Google Store.

3. Apple HomeKit API

The Apple HomeKit APITrack this API allows developers to coordinate and control home automation accessories from multiple vendors using a single interface. This API can be used to discover compatible devices, add compatible devices to the home configuration database, manage data for devices in the database, and communicate with configured accessories to make them perform actions. This API is accessible using SDKs for iOS 8.0+, Mac Catalyst 14.0+ (Beta), tvOS 10.0+, and watchOS 2.0+.

4. FarmBot API

FarmBot is an open source robotics system for tending backyard farms. The FarmBot API does not control FarmBot itself, but instead handles image uploads, email delivery, and data storage, validation, and security. Data is stored in a centralized database to prevent data loss between reflashes. This also allows users to edit information when the device is offline.

5. Samsung SmartThings API

SmartThings is Samsung’s platform for turning a regular home into a smart home. The Samsung SmartThings API allows developers to manage and integrate with IoT devices using the Samsung SmartThings platform. Connected devices can be organized into locations and rooms.

6. Smappee API

The Smappee API allows developers to monitor Internet of Things device data in order to manage energy usage in commercial, industrial, and residential settings. This API can monitor the use of electricity, water, and gas as well as retrieve appliance events and a breakdown of energy usage per appliance.

7. Netatmo Security API

Netatmo designs, produces, and distributes smart devices for homes. The Netatmo Connect suite includes the Netatmo Security API. This API allows developers to get notifications and to retrieve events and timelines of events for Netatmo’s smart indoor cameras, smart outdoor cameras, smart smoke alarms, and smart door and window sensors. The Netatmo Security Streaming API is also available for near-instant notifications about security events.

8. Wink API

Wink is an app that syncs with home automation devices to provide a single point of control for lights, power, security, etc. It can work with multiple brands of devices at once, including Nest, GE, Philips, Honeywell, and more. The Wink API allows developers to connect devices registered with Wink to users, apps, each other, and the web in general.

9. Domoticz API

The Domoticz API helps create a central sensor-controlled portal for synchronizing home utility devices ranging from electrical devices and electronic gadgets to water, gas, and weather monitoring instruments. It is a RESTful API that generates JSON responses to HTTP requests. The Domoticz API is a free resource that comes with optional sign-up access.

10. Sinric Pro API

Sinric Pro enables developers to integrate IoT development boards (such as the Raspberry Pi) with third-party applications or with Amazon Alexa and Google Home. The Sinric Pro API can be used to retrieve device logs, find devices, update devices, and get account details.

11. Google Smart Home API

The Google Smart Home API allows users to control connected devices via the Google Home app and Google Assistant. It can manage devices such as smart speakers, phones, cars, TVs, headphones, watches, and more.

12. Home Connect API

Home Connect offers a RESTful API to control and monitor enabled home appliances. The Home Connect API requires an appliance that supports the open Home Connect interface, which lets users connect an application to their home appliances. The API provides a way to configure, monitor, and control programs.
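As a sketch of what calling a RESTful smart-home API of this kind typically looks like: the host, resource path, and token below are hypothetical placeholders, not Home Connect's actual endpoints.

```python
import urllib.request

# Build (but do not send) a typical authenticated GET request:
# an HTTPS resource path plus an OAuth-style bearer token.
req = urllib.request.Request(
    url="https://api.example-home.com/api/homeappliances",  # hypothetical
    headers={
        "Authorization": "Bearer <access-token>",  # placeholder token
        "Accept": "application/json",
    },
)

# urllib.request.urlopen(req) would send it; here we just inspect it.
print(req.get_method(), req.full_url)
```

The response from such an endpoint is normally a JSON document listing the user's registered appliances, which the application then parses to drive further calls.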

Check out the Home Automation category for listings of more than 75 APIs, 55 SDKs, and 40 Source Code Samples.

Go to Source
Author: joyc


Linking sight and movement

To get a better look at the world around them, animals are constantly in motion. Primates and people use complex eye movements to focus their vision (as humans do when reading, for instance); birds, insects, and rodents do the same by moving their heads, and can even estimate distances that way. Yet how these movements play out in the elaborate circuitry of neurons that the brain uses to “see” is largely unknown. And it could be a potential problem area as scientists create artificial neural networks that mimic how vision works in self-driving cars.

To better understand the relationship between movement and vision, a team of Harvard researchers looked at what happens in one of the brain’s primary regions for analyzing imagery when animals are free to roam naturally. The results of the study, published Tuesday in the journal Neuron, suggest that image-processing circuits in the primary visual cortex not only are more active when animals move, but that they receive signals from a movement-controlling region of the brain that is independent from the region that processes what the animal is looking at. In fact, the researchers describe two sets of movement-related patterns in the visual cortex that are based on head motion and whether an animal is in the light or the dark.

The movement-related findings were unexpected, since vision tends to be thought of as a feed-forward computation system in which visual information enters through the retina and travels on neural circuits that operate on a one-way path, processing the information piece by piece. What the researchers saw here is more evidence that the visual system has many more feedback components where information can travel in opposite directions than had been thought.

These results offer a nuanced glimpse into how neural activity works in a sensory region of the brain, and add to a growing body of research that is rewriting the textbook model of vision in the brain.

“It was really surprising to see this type of [movement-related] information in the visual cortex because traditionally people have thought of the visual cortex as something that only processes images,” said Grigori Guitchounts, a postdoctoral researcher in the Neurobiology Department at Harvard Medical School and the study’s lead author. “It was mysterious, at first, why this sensory region would have this representation of the specific types of movements the animal was making.”

While the scientists weren’t able to definitively say why this happens, they believe it has to do with how the brain perceives what’s around it.

“The model explanation for this is that the brain somehow needs to coordinate perception and action,” Guitchounts said. “You need to know when a sensory input is caused by your own action as opposed to when it’s caused by something out there in the world.”

For the study, Guitchounts teamed up with former Department of Molecular and Cellular Biology Professor David Cox, alumnus Javier Masis, M.A. ’15, Ph.D. ’18, and postdoctoral researcher Steffen B.E. Wolff. The work started in 2017 and wrapped up in 2019 while Guitchounts was a graduate researcher in Cox’s lab. A preprint version of the paper published in January.

The typical setup of past experiments on vision worked like this: Animals, like mice or monkeys, were sedated, restrained so their heads were in fixed positions, and then given visual stimuli, like photographs, so researchers could see which neurons in the brain reacted. The approach was pioneered by Harvard scientists David H. Hubel and Torsten N. Wiesel in the 1960s, and in 1981 they won a Nobel Prize in medicine for their efforts. Many experiments since then have followed their model, but it did not illuminate how movement affects the neurons that analyze visual input.

Researchers in this latest experiment wanted to explore that, so they watched 10 rats going about their days and nights. The scientists placed each rat in an enclosure, which doubled as its home, and continuously recorded their head movements. Using implanted electrodes, they measured the brain activity in the primary visual cortex as the rats moved.

Half of the recordings were taken with the lights on. The other half were recorded in total darkness. The researchers wanted to compare what the visual cortex was doing when there was visual input versus when there wasn’t. To be sure the room was pitch black, they taped shut any crevice that could let in light, since rats have notoriously good vision at night.

The data showed that on average, neurons in the rats’ visual cortices were more active when the animals moved than when they rested, even in the dark. That caught the researchers off guard: In a pitch-black room, there is no visual data to process. This meant that the activity was coming from the motor cortex, not an external image.

The team also noticed that the neural patterns in the visual cortex that were firing during movement differed in the dark and light, meaning they weren’t directly connected. Some neurons that were ready to activate in the dark were in a kind of sleep mode in the light.

Using a machine-learning algorithm, the researchers encoded both patterns. That let them not only tell which way a rat was moving its head by just looking at the neural activity in its visual cortex, but also predict the movement several hundred milliseconds before the rat made it.
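A toy version of that decoding idea can be sketched with a nearest-centroid classifier. The neuron counts, firing rates, and movement labels below are invented for illustration; the authors' actual algorithm is not described in this article.

```python
import random

random.seed(0)

def make_trial(direction):
    """Hypothetical 5-neuron firing rates; each direction biases one neuron."""
    base = {"left": [5, 1, 1, 1, 1], "right": [1, 5, 1, 1, 1]}[direction]
    return [r + random.gauss(0, 0.5) for r in base]

# "Record" labeled training trials of visual-cortex activity.
train = [(make_trial(d), d) for d in ["left", "right"] for _ in range(50)]

# Learn the average activity pattern (centroid) for each head movement.
centroids = {}
for d in ["left", "right"]:
    trials = [x for x, label in train if label == d]
    centroids[d] = [sum(col) / len(trials) for col in zip(*trials)]

def decode(activity):
    """Classify new activity by the nearest average pattern."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(activity, c))
    return min(centroids, key=lambda d: dist(centroids[d]))

print(decode(make_trial("left")))
```

With well-separated patterns like these, the decoder recovers the movement direction from "neural" activity alone, which is the essence of what the published analysis demonstrates on real recordings.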

The researchers confirmed that the movement signals came from the motor area of the brain by focusing on the secondary motor cortex. They surgically destroyed it in several rats, then ran the experiments again. The rats in which this area of the brain was lesioned no longer gave off signals in the visual cortex. However, the researchers were not able to determine if the signal originates in the secondary motor cortex. It could be only where it passes through, they said.

Furthermore, the scientists pointed out some limitations in their findings. For instance, they only measured the movement of the head, and did not measure eye movement. The study is also based on rodents, which are nocturnal. Their visual systems share similarities with humans and primates, but differ in complexity. Still, the paper adds to new lines of research and the findings could potentially be applied to neural networks that control machine vision, like those in autonomous vehicles.

“It’s all to better understand how vision actually works,” Guitchounts said. “Neuroscience is entering into a new era where we understand that perception and action are intertwined loops. … There’s no action without perception and no perception without action. We have the technology now to measure this.”

This work was supported by the Harvard Center for Nanoscale Systems and the National Science Foundation Graduate Research Fellowship.

Go to Source


Spider silk inspires new class of functional synthetic polymers

Synthetic polymers have changed the world around us, and it would be hard to imagine a world without them. However, they do have their problems. For instance, it is hard, from a synthetic point of view, to precisely control their molecular structure. This makes it harder to finely tune some of their properties, such as the ability to transport ions. To overcome this problem, University of Groningen assistant professor Giuseppe Portale decided to take inspiration from nature. The result was published in Science Advances on July 17: a new class of polymers based on protein-like materials that work as proton conductors and might be useful in future bio-electronic devices.

‘I have been working on proton conducting materials on and off since my PhD’, says Portale. ‘I find it fascinating to know what makes a material transport a proton so I worked a lot on optimizing structures at the nanoscale level to get greater conductivity.’ But it was only a few years ago that he considered the possibility of making them from biological, protein-like structures. He came to this idea together with professor Andreas Hermann, a former colleague at the University of Groningen, now working at the DWI — Leibniz Institute for Interactive Materials in Germany. ‘We could immediately see that proton-conducting bio-polymers could be very useful for applications like bio-electronics or sensors’, Portale says.

More active groups, more conductivity

But first, they had to see if the idea would work. Portale: ‘Our first goal was to prove that we could precisely tune the proton conductivity of the protein-based polymers by tuning the number of ionisable groups per polymer chain’. To do this, the researchers prepared a number of unstructured biopolymers that had different numbers of ionisable groups, in this case, carboxylic acid groups. Their proton conductivity scaled linearly with the number of charged carboxylic acid groups per chain. ‘It was not groundbreaking, everybody knows this concept. But we were thrilled that we were able to make something that worked as expected’, Portale says.
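The linear scaling the team reports can be illustrated with a toy least-squares fit; the data points below are invented, not measurements from the paper:

```python
# Fit conductivity vs. number of carboxylic acid groups per chain
# and extract the slope of the linear trend.
groups = [10, 20, 30, 40]            # ionisable groups per chain (invented)
conductivity = [1.1, 2.0, 3.1, 3.9]  # arbitrary units (invented)

n = len(groups)
mean_x = sum(groups) / n
mean_y = sum(conductivity) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(groups, conductivity))
         / sum((x - mean_x) ** 2 for x in groups))
print(f"slope = {slope:.3f} conductivity units per added group")
```

A positive, roughly constant slope across the series is what "conductivity scales linearly with the number of charged groups" means in practice.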

For the next step, Portale relied on his expertise in the field of synthetic polymers: ‘I have learned over the years that the nanostructure of a polymer can greatly influence the conductivity. If you have the right nanostructure, it allows the charges to bundle together and increase the local concentration of these ionic groups, which dramatically boosts proton conductivity.’ Since the first batch of biopolymers was completely amorphous, the researchers had to switch to a different material. They decided to use a known protein that had the shape of a barrel. ‘We engineered this barrel-like protein and added strands containing carboxylic acid to its surface’, Portale explains. ‘This increased conductivity greatly.’

Novel Spider silk polymer

Unfortunately, the barrel-polymer was not very practical. It had no mechanical strength and it was difficult to process, so Portale and his colleagues had to look for an alternative. They landed on a well-known natural polymer: spider silk. ‘This is one of the most fascinating materials in nature, because it is very strong but can also be used in many different ways’, says Portale. ‘I knew spider silk has a fascinating nanostructure, so we engineered a protein-like polymer that has the main structure of spider silk but was modified to host strands of carboxylic acid.’

The novel material worked like a charm. ‘We found that it self-assembles at the nanoscale similarly to spider silk while creating dense clusters of charged groups, which are very beneficial for the proton conductivity’, Portale explains. ‘And we were able to turn it into a robust centimetre-sized membrane.’ The measured proton conductivity was higher than that of any previously known biomaterial, but they are not there yet, according to Portale: ‘This was mainly fundamental work. In order to apply this material, we really have to improve it and make it processable.’


But even though the work is not yet done, Portale and his co-workers can already dream about applying their polymer: ‘We think this material could be useful as a membrane in fuel cells. Maybe not for the large scale fuel cells that you see in cars and factories, but more on a small scale. There is a growing field of implantable bio-electronic devices, for instance, glucose-powered pacemakers. In the coming years, we hope to find out if our polymer can make a difference there, since it is already bio-compatible.’

For the short term, Portale mainly thinks about sensors. ‘The conductivity we measure in our material is influenced by factors in the environment, like humidity or temperature. So if you want to store something at a certain humidity you can place this polymer between two electrodes and just measure if anything changes.’ However, before all these dreams come true, there are a lot of questions to be answered. ‘I am very proud that we were able to control these new materials on a molecular scale and build them from scratch. But we still have to learn a lot about their capabilities and see if we can improve them even further.’

Story Source:

Materials provided by University of Groningen. Note: Content may be edited for style and length.

Go to Source


Study identifies top reasons for sewer line failure

Concrete sewer pipes around the world are most likely to fail either because their concrete is not strong enough or because they can’t handle the weight of trucks that drive over them, a new study indicates.

The study used a statistical analysis to show that those two factors were the most likely to trigger a problem among the 16 common causes of sewer pipe failure the researchers examined.

The study was published online earlier this year in the journal Structure and Infrastructure Engineering.

“There is a vast array of pipes underground that is working every day and if there is disruption — leakage or collapsing in a pipe, for example — not only will there be discomfort for the residents, but also economic, health and environmental consequences,” said Abdollah Shafieezadeh, senior author on the study and an associate professor of civil, environmental and geodetic engineering at The Ohio State University. “The losses can be significant.”

And so can the cost of repairs and maintenance: In 2017, the American Society of Civil Engineers estimated the cost to fix and maintain the U.S. sewer system at $150 billion.

For this study, researchers gathered data and analyzed buried sewer pipes, which, in the United States, are commonly made of concrete. Then, they identified the most likely causes of sewer pipe failure. Those causes included things like the density of soil surrounding sewer pipes, the elasticity and strength of the concrete materials and the weight of trucks that regularly drive over them.

They then built a statistical model that could evaluate the stress caused to the pipes by each variable individually and in combinations, and conducted several statistical analyses using the Ohio Supercomputer Center.
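
The idea of evaluating each variable's effect on pipe stress individually can be sketched with a simple one-at-a-time sensitivity analysis. Everything below — the toy stress formula, the factor names, and their ranges — is invented for illustration and is not the study's actual model:

```python
def pipe_stress(concrete_strength_mpa, truck_load_kn, soil_density_kgm3):
    """Toy stress metric: heavier trucks and denser soil raise stress,
    stronger concrete lowers it. Coefficients are illustrative only."""
    return truck_load_kn * 2.0 + soil_density_kgm3 * 0.01 - concrete_strength_mpa * 1.5

# Nominal values and plausible ranges for each factor (hypothetical).
NOMINAL = {"concrete_strength_mpa": 40.0, "truck_load_kn": 80.0, "soil_density_kgm3": 1800.0}
RANGES = {
    "concrete_strength_mpa": (20.0, 60.0),
    "truck_load_kn": (40.0, 160.0),
    "soil_density_kgm3": (1500.0, 2100.0),
}

def rank_factors():
    """Vary one factor at a time across its range, holding the others at
    nominal values, and rank factors by the stress swing each causes."""
    swings = {}
    for name, (lo, hi) in RANGES.items():
        stresses = []
        for value in (lo, hi):
            args = dict(NOMINAL)
            args[name] = value
            stresses.append(pipe_stress(**args))
        swings[name] = abs(stresses[1] - stresses[0])
    return sorted(swings, key=swings.get, reverse=True)

ranking = rank_factors()
```

With these toy numbers, truck load produces the largest stress swing, followed by concrete strength — echoing the study's finding that those two factors dominate.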

The analyses showed that, statistically, cracks that will eventually influence the structural integrity of sewer pipes are most likely to form when the concrete is made from weak components and not maintained properly, or when heavy trucks regularly drive on roads above the pipes. Cracks that influence structural integrity, the researchers say, are the first signs that a sewer pipe is on its way to collapse.

“It’s a worldwide problem, and part of the issue is that, for many cities around the world, these sewer systems have been installed long ago, and the challenge now is to maintain these old systems,” Shafieezadeh said.

Systems can be complex and expensive to maintain. Cities often have tens of thousands of miles of sewer pipes running beneath them. And, because they are underground, spotting issues is not as simple as finding issues with above-ground infrastructure like roads or power lines, said Soroush Zamanian, a graduate research associate in civil, environmental and geodetic engineering at Ohio State and lead author of the paper.

“In 2017, the American Society of Civil Engineers gave the United States’ overall sewer system a D+ rating,” Zamanian said. “And that’s part of why we are looking at this question and seeing if we could help predict where lines might fail.”

The researchers said future studies should examine the way aging and corrosion of pipes affects the way in which system operators can repair them. And, they said, future studies could analyze other pipe configurations, or analyze more details about the types of soil that surround those pipes.

“The big picture is if we want to design sewer pipes for the future, or if we want to assess the current condition and predict future conditions of these pipes, one key element is to know the important factors contributing to their failures — and how do those factors play out in the real world,” Zamanian said.

Story Source:

Materials provided by Ohio State University. Original written by Laura Arenschield. Note: Content may be edited for style and length.



For next-generation semiconductors, 2D tops 3D

Netflix, which provides an online streaming service around the world, offers 42 million videos and has about 160 million subscribers in total. It takes just a few seconds to download a 30-minute video clip, and a show can be watched within 15 minutes of airing. As the distribution and transmission of high-quality content grow rapidly, it is critical to develop reliable and stable semiconductor memories.

To this end, a POSTECH research team has developed a memory device using a two-dimensional layered-structure material, unlocking the possibility of commercializing a next-generation memory device that can be operated stably at low power.

The POSTECH research team, consisting of Professor Jang-Sik Lee of the Department of Materials Science and Engineering, Professor Donghwa Lee of the Division of Advanced Materials Science, and PhD candidates Youngjun Park and Seong Hun Kim, succeeded in designing an optimal halide perovskite material (CsPb2Br5) that can be applied to ReRAM (resistive random-access memory) devices by applying first-principles calculations based on quantum mechanics. The findings were published in Advanced Science.

The ideal next-generation memory device should process information at high speeds, store large amounts of information with non-volatile characteristics where the information does not disappear when power is off, and operate at low power for mobile devices.

The recent discovery of the resistive switching property in halide perovskite materials has led to worldwide active research to apply them to ReRAM devices. However, the poor stability of halide perovskite materials when they are exposed to the atmosphere has been raised as an issue.

The research team compared the relative stability and properties of halide perovskites with various structures using first-principles calculations. Density functional theory (DFT) calculations predicted that CsPb2Br5, a two-dimensional layered structure of the form AB2X5, may have better stability than the three-dimensional ABX3 structure or other structures (A3B2X7, A2BX4), and that this structure could show improved performance in memory devices.
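
As a rough illustration of what such a stability comparison involves, the energy of the layered phase can be weighed against competing phases with the same overall composition (CsPb2Br5 decomposes formally into CsPbBr3 + PbBr2). The energy values below are hypothetical placeholders, not the paper's DFT results:

```python
def decomposition_energy(e_cspb2br5, e_cspbbr3, e_pbbr2):
    """Energy of CsPb2Br5 relative to the competing phase mixture
    CsPbBr3 + PbBr2; a negative value means the layered phase is
    energetically preferred over decomposing."""
    return e_cspb2br5 - (e_cspbbr3 + e_pbbr2)

# Hypothetical total energies in eV per formula unit (placeholders):
dE = decomposition_energy(-25.40, -16.10, -9.10)
layered_phase_preferred = dE < 0
```

In a real screening, each candidate structure's DFT total energy would be compared against all competing phase combinations in this way, and only candidates that sit low on that energy landscape would move on to synthesis.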

To verify this result, CsPb2Br5, an inorganic perovskite material with a two-dimensional layered structure, was synthesized and applied to memory devices for the first time. The memory devices with the three-dimensional structure of CsPbBr3 lost their memory characteristics at temperatures higher than 100 °C. However, the memory devices using the two-dimensional layered structure of CsPb2Br5 maintained their memory characteristics above 140 °C and could be operated at voltages lower than 1 V.

Professor Jang-Sik Lee who led the research commented, “Using this materials-designing technique based on the first-principles screening and experimental verification, the development of memory devices can be accelerated by reducing the time spent on searching for new materials. By designing an optimal new material for memory devices through computer calculations and applying it to actually producing them, the material can be applied to memory devices of various electronic devices such as mobile devices that require low power consumption or servers that require reliable operation. This is expected to accelerate the commercialization of next-generation data storage devices.”

Story Source:

Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.



Learning more about particle collisions with machine learning

The Large Hadron Collider (LHC) near Geneva, Switzerland became famous around the world in 2012 with the detection of the Higgs boson. The observation marked a crucial confirmation of the Standard Model of particle physics, which organizes the subatomic particles into groups, much as the periodic table organizes the chemical elements.

The U.S. Department of Energy’s (DOE) Argonne National Laboratory has made many pivotal contributions to the construction and operation of the ATLAS experimental detector at the LHC and to the analysis of signals recorded by the detector that uncover the underlying physics of particle collisions. Argonne is now playing a lead role in the high-luminosity upgrade of the ATLAS detector for operations that are planned to begin in 2027. To that end, a team of Argonne physicists and computational scientists has devised a machine learning-based algorithm that approximates how the present detector would respond to the greatly increased data expected with the upgrade.

As the largest physics machine ever built, the LHC shoots two beams of protons in opposite directions around a 17-mile ring until they approach the speed of light, smashes them together and analyzes the collision products with gigantic detectors such as ATLAS. The ATLAS instrument is about the height of a six-story building and weighs approximately 7,000 tons. Today, the LHC continues to study the Higgs boson, as well as address fundamental questions on how and why matter in the universe is the way it is.

“Most of the research questions at ATLAS involve finding a needle in a giant haystack, where scientists are only interested in finding one event occurring among a billion others,” said Walter Hopkins, assistant physicist in Argonne’s High Energy Physics (HEP) division.

As part of the LHC upgrade, efforts are now progressing to boost the LHC’s luminosity — the number of proton-to-proton interactions per collision of the two proton beams — by a factor of five. This will produce about 10 times more data per year than what is presently acquired by the LHC experiments. How well the detectors respond to this increased event rate still needs to be understood. This requires running high-performance computer simulations of the detectors to accurately assess known processes resulting from LHC collisions. These large-scale simulations are costly and demand large chunks of computing time on the world’s best and most powerful supercomputers.

The Argonne team has created a machine learning algorithm that will be run as a preliminary simulation before any full-scale simulations. This algorithm approximates, in very fast and less costly ways, how the present detector would respond to the greatly increased data expected with the upgrade. It involves simulation of detector responses to a particle-collision experiment and the reconstruction of objects from the physical processes. These reconstructed objects include jets or sprays of particles, as well as individual particles like electrons and muons.

“The discovery of new physics at the LHC and elsewhere demands ever more complex methods for big data analyses,” said Doug Benjamin, a computational scientist in HEP. “These days that usually means use of machine learning and other artificial intelligence techniques.”

The analysis methods previously used for initial simulations did not employ machine learning algorithms and are time-consuming because they involve manually updating experimental parameters when conditions at the LHC change. Some may also miss important data correlations for a given set of input variables to an experiment. The Argonne-developed algorithm learns, in real time while a training procedure is applied, the various features that need to be introduced through detailed full simulations, thereby avoiding the need to handcraft experimental parameters. The method can also capture complex interdependencies of variables that have not been possible before.
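
The surrogate idea — fit a cheap model to a limited number of expensive full-simulation samples, then query the cheap model instead — can be sketched as follows. This is a toy stand-in, not the Argonne algorithm; the detector response function and the quadratic fit below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def full_simulation(true_energy):
    """Stand-in for a costly detector simulation: a slightly nonlinear
    response plus measurement noise."""
    noise = rng.normal(0.0, 0.5, np.shape(true_energy))
    return 0.9 * true_energy + 0.002 * true_energy**2 + noise

# Expensive step: run the "full simulation" at a modest number of energies.
train_energy = np.linspace(10.0, 100.0, 50)
train_response = full_simulation(train_energy)

# Cheap surrogate: a quadratic least-squares fit to those samples.
surrogate = np.poly1d(np.polyfit(train_energy, train_response, deg=2))

# New queries now cost almost nothing and skip the full simulation entirely.
predicted = surrogate(55.0)
```

The real algorithm replaces the polynomial fit with a machine learning model that also learns correlations between many input variables, but the division of labor is the same: a few costly full simulations train a fast approximation that handles the bulk of the queries.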

“With our stripped-down simulation, you can learn the basics at comparatively little computational cost and time, then you can much more efficiently proceed with full simulations at a later date,” said Hopkins. “Our machine learning algorithm also provides users with better discriminating power on where to look for new or rare events in an experiment,” he added.

The team’s algorithm could prove invaluable not only for ATLAS, but for the multiple experimental detectors at the LHC, as well as other particle physics experiments now being conducted around the world.

Story Source:

Materials provided by DOE/Argonne National Laboratory. Original written by Joseph E. Harmon. Note: Content may be edited for style and length.



Black hole collision may have exploded with light

When two black holes spiral around each other and ultimately collide, they send out ripples in space and time called gravitational waves. Because black holes do not give off light, these events are not expected to shine with any light waves, or electromagnetic radiation. Graduate Center, CUNY astrophysicists K. E. Saavik Ford and Barry McKernan have posited ways in which a black hole merger might explode with light. Now, for the first time, astronomers have seen evidence of one of these light-producing scenarios. Their findings are available in the current issue of Physical Review Letters.

A team consisting of scientists from The Graduate Center, CUNY; Caltech’s Zwicky Transient Facility (ZTF); Borough of Manhattan Community College (BMCC); and The American Museum of Natural History (AMNH) spotted what appears to be a flare of light from a pair of coalescing black holes. The event (called S190521g) was first identified by the National Science Foundation’s (NSF) Laser Interferometer Gravitational-wave Observatory (LIGO) and the European Virgo detector on May 21, 2019. As the black holes merged, jiggling space and time, they sent out gravitational waves. Shortly thereafter, scientists at ZTF — which is located at the Palomar Observatory near San Diego — reviewed their recordings of the same event and spotted what may be a flare of light coming from the coalescing black holes.

“At the center of most galaxies lurks a supermassive black hole. It’s surrounded by a swarm of stars and dead stars, including black holes,” said study coauthor Ford, a professor with the Graduate Center, BMCC and AMNH. “These objects swarm like angry bees around the monstrous queen bee at the center. They can briefly find gravitational partners and pair up but usually lose their partners quickly to the mad dance. But in a supermassive black hole’s disk, the flowing gas converts the mosh pit of the swarm to a classical minuet, organizing the black holes so they can pair up,” she says.

Once the black holes merge, the new, now-larger black hole experiences a kick that sends it off in a random direction, and it plows through the gas in the disk. “It is the reaction of the gas to this speeding bullet that creates a bright flare, visible with telescopes,” said co-author McKernan, an astrophysics professor with The Graduate Center, BMCC and AMNH.

“This supermassive black hole was burbling along for years before this more abrupt flare,” said the study’s lead author Matthew Graham, a research professor of astronomy at Caltech and the project scientist for ZTF. “The flare occurred on the right timescale, and in the right location, to be coincident with the gravitational-wave event. In our study, we conclude that the flare is likely the result of a black hole merger, but we cannot completely rule out other possibilities.”

“ZTF was specifically designed to identify new, rare, and variable types of astronomical activity like this,” said NSF Division of Astronomical Science Director Ralph Gaume. “NSF support of new technology continues to expand how we can track such events.”

Such a flare is predicted to begin days to weeks after the initial splash of gravitational waves produced during the merger. In this case, ZTF did not catch the event right away, but when the scientists went back and looked through archival ZTF images months later, they found a signal that started days after the May 2019 gravitational-wave event. ZTF observed the flare slowly fade over the period of a month.

The scientists attempted to get a more detailed look at the light of the supermassive black hole, called a spectrum, but by the time they looked, the flare had already faded. A spectrum would have offered more support for the idea that the flare came from merging black holes within the disk of the supermassive black hole. However, the researchers say they were able to largely rule out other possible causes for the observed flare, including a supernova or a tidal disruption event, which occurs when a black hole essentially eats a star.

What is more, the team says it is not likely that the flare came from the usual rumblings of the supermassive black hole, which regularly feeds off its surrounding disk. Using the Catalina Real-Time Transient Survey, led by Caltech, they were able to assess the behavior of the black hole over the past 15 years, and found that its activity was relatively normal until May of 2019, when it suddenly intensified.

“Supermassive black holes like this one have flares all the time. They are not quiet objects, but the timing, size, and location of this flare was spectacular,” said co-author Mansi Kasliwal (MS ’07, PhD ’11), an assistant professor of astronomy at Caltech. “The reason looking for flares like this is so important is that it helps enormously with astrophysics and cosmology questions. If we can do this again and detect light from the mergers of other black holes, then we can nail down the homes of these black holes and learn more about their origins.”

The newly formed black hole should cause another flare in the next few years. The process of merging gave the object a kick that should cause it to enter the supermassive black hole’s disk again, producing another flash of light that ZTF should be able to see.
