Enormous planet quickly orbiting a tiny, dying star

Thanks to a bevy of telescopes in space and on Earth — and even a pair of amateur astronomers in Arizona — a University of Wisconsin-Madison astronomer and his colleagues have discovered a Jupiter-sized planet orbiting at breakneck speed around a distant white dwarf star. The system, about 80 light years away, violates all common conventions about stars and planets. The white dwarf is the remnant of a sun-like star, greatly shrunken down to roughly the size of Earth, yet it retains half the sun’s mass. The massive planet looms over its tiny star, which it circles every 34 hours thanks to an incredibly close orbit. In contrast, Mercury takes a comparatively lethargic 88 days to orbit the sun. While there have been hints of large planets orbiting close to white dwarfs in the past, the new findings are the clearest evidence yet that these bizarre pairings exist. That confirmation highlights the diverse ways stellar systems can evolve and may give a glimpse of our own solar system’s fate. Such a white dwarf system could even provide a rare habitable arrangement for life to arise in the light of a dying star.
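The quoted numbers already pin down how tight the orbit is. A back-of-the-envelope check with Kepler's third law, using the article's 34-hour period and half a solar mass (the constants are standard; this is a rough sketch, not a figure from the study):

```python
import math

# Kepler's third law: a^3 = G * M * P^2 / (4 * pi^2)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 0.5 * 1.989e30   # half a solar mass, kg (from the article)
P = 34 * 3600        # 34-hour orbital period, s

a = (G * M * P**2 / (4 * math.pi**2)) ** (1 / 3)
print(a / 1.496e11)  # semi-major axis in AU: roughly 0.02, about
                     # twenty times closer than Mercury's 0.39 AU
```

The planet sits only a few stellar diameters from what was once the star's core.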

“We’ve never seen evidence before of a planet coming in so close to a white dwarf and surviving. It’s a pleasant surprise,” says lead researcher Andrew Vanderburg, who recently joined the UW-Madison astronomy department as an assistant professor. Vanderburg completed the work while an independent NASA Sagan Fellow at the University of Texas at Austin.

The researchers published their findings Sept. 16 in the journal Nature. Vanderburg led a large, international collaboration of astronomers who analyzed the data. The contributing telescopes included NASA’s exoplanet-hunting telescope TESS and two large ground-based telescopes in the Canary Islands.

Vanderburg was originally drawn to studying white dwarfs — the remains of sun-sized stars after they exhaust their nuclear fuel — and their planets by accident. While in graduate school, he was reviewing data from TESS’s predecessor, the Kepler space telescope, and noticed a white dwarf with a cloud of debris around it.

“What we ended up finding was that this was a minor planet or asteroid that was being ripped apart as we watched, which was really cool,” says Vanderburg. The planet had been destroyed by the star’s gravity after its transition to a white dwarf caused the planet’s orbit to fall in toward the star.

Ever since, Vanderburg has wondered if planets, especially large ones, could survive the journey in toward an aging star.

By scanning data for thousands of white dwarf systems collected by TESS, the researchers spotted a star whose brightness dimmed by half about every one-and-a-half days, a sign that something big was passing in front of the star on a tight, lightning-fast orbit. But the data were hard to interpret because glare from a nearby star was interfering with TESS’s measurements. To overcome this obstacle, the astronomers supplemented the TESS data with observations from higher-resolution ground-based telescopes, including three run by amateur astronomers.

“Once the glare was under control, in one night, they got much nicer and much cleaner data than we got with a month of observations from space,” says Vanderburg. Because white dwarfs are so much smaller than normal stars, large planets passing in front of them block a lot of the star’s light, making detection by ground-based telescopes much simpler.
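The geometry behind that last point is simple: the fraction of starlight blocked during a transit scales as the square of the planet-to-star radius ratio. A quick comparison using representative sizes (Jupiter's radius against the Sun and against an Earth-sized white dwarf; illustrative values, not the system's measured radii):

```python
R_jupiter = 7.15e7   # m, Jupiter's mean radius
R_sun = 6.96e8       # m
R_wd = 6.37e6        # m, an Earth-sized white dwarf (illustrative)

# Transit depth scales as (R_planet / R_star)^2
depth_sun = (R_jupiter / R_sun) ** 2
ratio_wd = R_jupiter / R_wd
print(round(depth_sun, 3))  # ~0.01: a Jupiter dims a sun-like star by ~1%
print(round(ratio_wd, 1))   # ~11: the planet is wider than the white dwarf
```

A Jupiter crossing a sun-like star produces only a percent-level dip; crossing an Earth-sized star it can blot out most of the light, which is why 50% dips were within easy reach of ground-based telescopes.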

The data revealed that a planet roughly the size of Jupiter, perhaps a little larger, was orbiting very close to its star. Vanderburg’s team believes the gas giant started off much farther from the star and moved into its current orbit after the star evolved into a white dwarf.

The question became: how did this planet avoid being torn apart during the upheaval? Previous models of white dwarf-planet interactions didn’t seem to line up for this particular star system.

The researchers ran new simulations that provided a potential answer to the mystery. When the star ran out of fuel, it expanded into a red giant, engulfing any nearby planets and destabilizing the Jupiter-sized planet that orbited farther away. That caused the planet to take on an exaggerated, oval orbit that passed very close to the now-shrunken white dwarf but also flung the planet very far away at the orbit’s apex.

Over eons, the gravitational interaction between the white dwarf and its planet slowly dissipated energy, ultimately guiding the planet into a tight, circular orbit that takes just one-and-a-half days to complete. That process takes time — billions of years. This particular white dwarf is one of the oldest observed by the TESS telescope at almost 6 billion years old, plenty of time to draw in its massive planetary partner.

While white dwarfs no longer conduct nuclear fusion, they still release light and heat as they cool down. It’s possible that a planet close enough to such a dying star would find itself in the habitable zone, the region near a star where liquid water can exist, presumed to be required for life to arise and survive.

Now that research has confirmed these systems exist, they offer a tantalizing opportunity for searching for other forms of life. The unique structure of white dwarf-planet systems provides an ideal opportunity to study the chemical signatures of orbiting planets’ atmospheres, a potential way to search for signs of life from afar.

“I think the most exciting part of this work is what it means for both habitability in general — can there be hospitable regions in these dead solar systems — and also our ability to find evidence of that habitability,” says Vanderburg.



Intelligent software tackles plant cell jigsaw puzzle

Imagine working on a jigsaw puzzle with so many pieces that even the edges seem indistinguishable from others at the puzzle’s centre. The solution seems nearly impossible. And, to make matters worse, this puzzle is in a futuristic setting where the pieces are not only numerous, but ever-changing. In fact, you not only must solve the puzzle, but “un-solve” it to parse out how each piece brings the picture wholly into focus.

That’s the challenge molecular and cellular biologists face in sorting through cells to study an organism’s structural origin and the way it develops, known as morphogenesis. If only there were a tool that could help. An eLife paper out this week shows there now is.

An EMBL research group led by Anna Kreshuk, a computer scientist and expert in machine learning, joined the DFG-funded FOR2581 consortium of plant biologists and computer scientists to develop a tool that could solve this cellular jigsaw puzzle. Starting with computer code and moving on to a more user-friendly graphical interface called PlantSeg, the team built a simple open-access method to provide the most accurate and versatile analysis of plant tissue development to date. The group included expertise from EMBL, Heidelberg University, the Technical University of Munich, and the Max Planck Institute for Plant Breeding Research in Cologne.

“Building something like PlantSeg that can take a 3D perspective of cells and actually separate them all is surprisingly hard to do, considering how easy it is for humans,” Kreshuk says. “Computers aren’t as good as humans when it comes to most vision-related tasks, as a rule. With all the recent development in deep learning and artificial intelligence at large, we are closer to solving this now, but it’s still not solved — not for all conditions. This paper is the presentation of our current approach, which took some years to build.”

If researchers want to look at morphogenesis of tissues at the cellular level, they need to image individual cells. Lots of cells means they also have to separate or “segment” them to see each cell individually and analyse the changes over time.

“In plants, you have cells that look extremely regular, which in cross-section look like rectangles or cylinders,” Kreshuk says. “But you also have cells with so-called ‘high lobeness’ that have protrusions, making them look more like puzzle pieces. These are more difficult to segment because of their irregularity.”

Kreshuk’s team trained PlantSeg on 3D microscope images of reproductive organs and developing lateral roots of a common plant model, Arabidopsis thaliana, also known as thale cress. The algorithm needed to factor in the inconsistencies in cell size and shape. Sometimes cells were more regular, sometimes less. As Kreshuk points out, this is the nature of tissue.

A beautiful side of this research came from the microscopy images supplied to the algorithm. The results manifested as colourful renderings that delineated the cellular structures, making it easier to truly “see” the segmentation.

“We have giant puzzle boards with thousands of cells and then we’re essentially colouring each one of these puzzle pieces with a different colour,” Kreshuk says.
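That colouring step can be illustrated with connected-component labelling. PlantSeg itself couples a trained neural boundary predictor with 3D graph partitioning; the toy below hand-crafts a 2D boundary map instead, so it is a stand-in for the idea rather than the real pipeline:

```python
import numpy as np
from scipy import ndimage

# Toy "tissue": a 9x9 image of four cells separated by bright walls.
# In PlantSeg the boundary map is predicted by a trained network;
# here it is hand-made purely for illustration.
boundary = np.zeros((9, 9))
boundary[4, :] = 1.0  # horizontal cell wall
boundary[:, 4] = 1.0  # vertical cell wall

# Pixels below the wall threshold are cell interiors; labelling the
# connected components "colours" each cell with its own integer id.
interiors = boundary < 0.5
labels, n_cells = ndimage.label(interiors)
print(n_cells)  # 4
```

Each distinct label plays the role of one colour in the renderings Kreshuk describes.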

Plant biologists have long needed this kind of tool, as morphogenesis is at the crux of many developmental biology questions. This kind of algorithm allows for all kinds of shape-related analysis, for example, analysis of shape changes through development or under a change in environmental conditions, or between species. The paper gives some examples, such as characterising developmental changes in ovules, studying the first asymmetric cell division which initiates the formation of the lateral root, and comparing and contrasting the shape of leaf cells between two different plant species.

While this tool currently targets plants specifically, Kreshuk points out that it could be tweaked to be used for other living organisms as well.

Machine learning-based algorithms, like the ones used at the core of PlantSeg, are trained from correct segmentation examples. The group has trained PlantSeg on many plant tissue volumes, so that now it generalises quite well to unseen plant data. The underlying method is, however, applicable to any tissue with cell boundary staining and one could easily retrain it for animal tissue.

“If you have tissue where you have a boundary staining, like cell walls in plants or cell membranes in animals, this tool can be used,” Kreshuk says. “With this staining and at high enough resolution, plant cells look very similar to our cells, but they are not quite the same. The tool right now is really optimised for plants. For animals, we would probably have to retrain parts of it, but it would work.”

Currently, PlantSeg is an independent tool but one that Kreshuk’s team will eventually merge into another tool her lab is working on, ilastik Multicut workflow.



New model predicts the peaks of the COVID-19 pandemic

As of late May, COVID-19 has killed more than 325,000 people around the world. Even though the worst seems to be over for countries like China and South Korea, public health experts warn that cases and fatalities will continue to surge in many parts of the world. Understanding how the disease evolves can help these countries prepare for an expected uptick in cases.

This week in the journal Frontiers in Physics, researchers describe a single function that accurately describes all existing available data on active cases and deaths — and predicts forthcoming peaks. The tool uses q-statistics, a set of functions and probability distributions developed by Constantino Tsallis, a physicist and member of the Santa Fe Institute’s external faculty. Tsallis worked on the new model together with Ugur Tirnakli, a physicist at Ege University, in Turkey.

“The formula works in all the countries in which we have tested,” says Tsallis.

Neither physicist ever set out to model a global pandemic. But Tsallis says that when he saw the shape of published graphs representing China’s daily active cases, he recognized shapes he’d seen before — namely, in graphs he’d helped produce almost two decades ago to describe the behavior of the stock market.

“The shape was exactly the same,” he says. For the financial data, the function described probabilities for the stock exchanges; for COVID-19, it described the daily number of active cases — and fatalities — as a function of time.

Modeling financial data and tracking a global pandemic may seem unrelated, but Tsallis says they have one important thing in common. “They’re both complex systems,” he says, “and in complex systems, this happens all the time.” Disparate systems from a variety of fields — biology, network theory, computer science, mathematics — often reveal patterns that follow the same basic shapes and evolution.

The financial graph appeared in a 2004 volume co-edited by Tsallis and the late Nobelist Murray Gell-Mann. Tsallis developed q-statistics, also known as “Tsallis statistics,” in the late 1980s as a generalization of Boltzmann-Gibbs statistics to complex systems.
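The workhorse of q-statistics is the q-exponential, which generalises the ordinary exponential and recovers it as q approaches 1. A minimal implementation of the standard definition (the paper's full fitting function, which combines this with a power-law rise, is not reproduced here):

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential e_q(x) = [1 + (1 - q) x]^(1 / (1 - q))."""
    x = np.asarray(x, dtype=float)
    if q == 1.0:
        return np.exp(x)  # ordinary exponential in the q -> 1 limit
    base = 1.0 + (1.0 - q) * x
    # Defined piecewise: zero wherever the bracket would go negative.
    safe = np.where(base > 0.0, base, 1.0)
    return np.where(base > 0.0, safe ** (1.0 / (1.0 - q)), 0.0)

# Sanity check: near q = 1 it approaches exp(x).
print(float(q_exponential(-1.0, 1.001)))  # ~0.368, close to exp(-1)
```

Values of q different from 1 fatten or thin the tails, which is what lets one function family track both stock-market data and epidemic curves.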

In the new paper, Tsallis and Tirnakli used data from China, where the active case rate is thought to have peaked, to set the main parameters for the formula. Then, they applied it to other countries including France, Brazil, and the United Kingdom, and found that it matched the evolution of the active cases and fatality rates over time.

The model, says Tsallis, could be used to create useful tools like an app that updates in real-time with new available data, and can adjust its predictions accordingly. In addition, he thinks that it could be fine-tuned to fit future outbreaks as well.

“The functional form seems to be universal,” he says. “Not just for this virus, but for the next one that might appear as well.”

Story Source:

Materials provided by Santa Fe Institute. Note: Content may be edited for style and length.



How cosmic rays may have shaped life

Before there were animals, bacteria or even DNA on Earth, self-replicating molecules were slowly evolving their way from simple matter to life beneath a constant shower of energetic particles from space.

In a new paper, a Stanford professor and a former post-doctoral scholar speculate that this interaction between ancient proto-organisms and cosmic rays may be responsible for a crucial structural preference, called chirality, in biological molecules. If their idea is correct, it suggests that all life throughout the universe could share the same chiral preference.

Chirality, also known as handedness, is the existence of mirror-image versions of molecules. Like the left and right hand, two chiral forms of a single molecule reflect each other in shape but don’t line up if stacked. In every major biomolecule — amino acids, DNA, RNA — life only uses one form of molecular handedness. If the mirror version of a molecule is substituted for the regular version within a biological system, the system will often malfunction or stop functioning entirely. In the case of DNA, a single wrong-handed sugar would disrupt the stable helical structure of the molecule.

Louis Pasteur first discovered this biological homochirality in 1848. Since then, scientists have debated whether the handedness of life was driven by random chance or some unknown deterministic influence. Pasteur hypothesized that, if life is asymmetric, then it may be due to an asymmetry in the fundamental interactions of physics that exist throughout the cosmos.

“We propose that the biological handedness we witness now on Earth is due to evolution amidst magnetically polarized radiation, where a tiny difference in the mutation rate may have promoted the evolution of DNA-based life, rather than its mirror image,” said Noémie Globus, lead author of the paper and a former Koret Fellow at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC).

In their paper, published on May 20 in Astrophysical Journal Letters, the researchers detail their argument in favor of cosmic rays as the origin of homochirality. They also discuss potential experiments to test their hypothesis.

Magnetic polarization from space

Cosmic rays are an abundant form of high-energy radiation that originates from various sources throughout the universe, including stars and distant galaxies. After hitting the Earth’s atmosphere, cosmic rays eventually degrade into fundamental particles. By the time they reach ground level, most of the surviving particles are muons.

Muons are unstable particles, existing for a mere 2 millionths of a second, but because they travel near the speed of light, they have been detected more than 700 meters below Earth’s surface. They are also magnetically polarized, meaning, on average, muons all share the same magnetic orientation. When muons finally decay, they produce electrons with the same magnetic polarization. The researchers believe that the muon’s penetrative ability allows it and its daughter electrons to potentially affect chiral molecules on Earth and everywhere else in the universe.

“We are irradiated all the time by cosmic rays,” explained Globus, who is currently a post-doctoral researcher at New York University and the Simons Foundation’s Flatiron Institute. “Their effects are small but constant in every place on the planet where life could evolve, and the magnetic polarization of the muons and electrons is always the same. And even on other planets, cosmic rays would have the same effects.”

The researchers’ hypothesis is that, at the beginning of life on Earth, this constant and consistent radiation affected the evolution of the two mirror life-forms in different ways, helping one ultimately prevail over the other. These tiny differences in mutation rate would have been most significant when life was beginning and the molecules involved were very simple and more fragile. Under these circumstances, the small but persistent chiral influence from cosmic rays could have, over billions of generations of evolution, produced the single biological handedness we see today.

“This is a little bit like a roulette wheel in Vegas, where you might engineer a slight preference for the red pockets, rather than the black pockets,” said Roger Blandford, the Luke Blossom Professor in the School of Humanities and Sciences at Stanford and an author on the paper. “Play a few games, you would never notice. But if you play with this roulette wheel for many years, those who bet habitually on red will make money and those who bet on black will lose and go away.”
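Blandford's roulette analogy is easy to simulate: give one of two competing mirror populations a tiny, persistent replication edge and iterate. The 0.1% per-generation edge below is an arbitrary illustrative number, not a rate from the paper:

```python
# Two mirror-image molecular populations start perfectly balanced.
left, right = 1.0, 1.0
edge = 1.001  # hypothetical 0.1% per-generation advantage for one hand

for _ in range(10_000):
    left *= edge
    total = left + right
    left, right = left / total, right / total  # keep totals bounded

print(left / (left + right))  # ~0.9999: the slight bias has all but fixed
```

Any per-generation bias compounds geometrically, so even an imperceptibly small edge dominates over evolutionary timescales.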

Ready to be surprised

Globus and Blandford suggest experiments that could help prove or disprove their cosmic ray hypothesis. For example, they would like to test how bacteria respond to radiation with different magnetic polarization.

“Experiments like this have never been performed and I am excited to see what they teach us. Surprises inevitably come from further work on interdisciplinary topics,” said Globus.

The researchers also look forward to organic samples from comets, asteroids or Mars to see if they too exhibit a chiral bias.

“This idea connects fundamental physics and the origin of life,” said Blandford, who is also a professor of physics at Stanford, a professor of particle physics at SLAC, and a former director of KIPAC. “Regardless of whether or not it’s correct, bridging these very different fields is exciting and a successful experiment should be interesting.”

This research was funded by the Koret Foundation, New York University and the Simons Foundation.



Scientists create new recipe for single-atom transistors

Once unimaginable, transistors consisting only of several-atom clusters or even single atoms promise to become the building blocks of a new generation of computers with unparalleled memory and processing power. But to realize the full potential of these tiny transistors — miniature electrical on-off switches — researchers must find a way to make many copies of these notoriously difficult-to-fabricate components.

Now, researchers at the National Institute of Standards and Technology (NIST) and their colleagues at the University of Maryland have developed a step-by-step recipe to produce the atomic-scale devices. Using these instructions, the NIST-led team has become only the second in the world to construct a single-atom transistor and the first to fabricate a series of single-electron transistors with atom-scale control over the devices’ geometry.

The scientists demonstrated that they could precisely adjust the rate at which individual electrons flow through a physical gap or electrical barrier in their transistor — even though classical physics would forbid the electrons from doing so because they lack enough energy. That strictly quantum phenomenon, known as quantum tunneling, only becomes important when gaps are extremely tiny, such as in the miniature transistors. Precise control over quantum tunneling is key because it enables the transistors to become “entangled” or interlinked in a way only possible through quantum mechanics and opens new possibilities for creating quantum bits (qubits) that could be used in quantum computing.

To fabricate single-atom and few-atom transistors, the team relied on a known technique in which a silicon chip is covered with a layer of hydrogen atoms, which readily bind to silicon. The fine tip of a scanning tunneling microscope then removed hydrogen atoms at selected sites. The remaining hydrogen acted as a barrier so that when the team directed phosphine gas (PH3) at the silicon surface, individual PH3 molecules attached only to the locations where the hydrogen had been removed. The researchers then heated the silicon surface. The heat ejected hydrogen atoms from the PH3 and caused the phosphorus atom that was left behind to embed itself in the surface. With additional processing, bound phosphorus atoms created the foundation of a series of highly stable single- or few-atom devices that have the potential to serve as qubits.

Two of the steps in the method devised by the NIST teams — sealing the phosphorus atoms with protective layers of silicon and then making electrical contact with the embedded atoms — appear to have been essential to reliably fabricate many copies of atomically precise devices, NIST researcher Richard Silver said.

In the past, researchers have typically applied heat as all the silicon layers are grown, in order to remove defects and ensure that the silicon has the pure crystalline structure required to integrate the single-atom devices with conventional silicon-chip electrical components. But the NIST scientists found that such heating could dislodge the bound phosphorus atoms and potentially disrupt the structure of the atomic-scale devices. Instead, the team deposited the first several silicon layers at room temperature, allowing the phosphorus atoms to stay put. Only when subsequent layers were deposited did the team apply heat.

“We believe our method of applying the layers provides more stable and precise atomic-scale devices,” said Silver. Having even a single atom out of place can alter the conductivity and other properties of electrical components that feature single or small clusters of atoms.

The team also developed a novel technique for the crucial step of making electrical contact with the buried atoms so that they can operate as part of a circuit. The NIST scientists gently heated a layer of palladium metal applied to specific regions on the silicon surface that resided directly above selected components of the silicon-embedded device. The heated palladium reacted with the silicon to form an electrically conducting alloy called palladium silicide, which naturally penetrated through the silicon and made contact with the phosphorus atoms.

In a recent edition of Advanced Functional Materials, Silver and his colleagues, who include Xiqiao Wang, Jonathan Wyrick, Michael Stewart Jr. and Curt Richter, emphasized that their contact method has a nearly 100% success rate. That’s a key achievement, noted Wyrick. “You can have the best single-atom-transistor device in the world, but if you can’t make contact with it, it’s useless,” he said.

Fabricating single-atom transistors “is a difficult and complicated process that maybe everyone has to cut their teeth on, but we’ve laid out the steps so that other teams don’t have to proceed by trial and error,” said Richter.

In related work published today in Communications Physics, Silver and his colleagues demonstrated that they could precisely control the rate at which individual electrons tunnel through atomically precise tunnel barriers in single-electron transistors. The NIST researchers and their colleagues fabricated a series of single-electron transistors identical in every way except for differences in the size of the tunneling gap. Measurements of current flow indicated that by increasing or decreasing the gap between transistor components by less than a nanometer (billionth of a meter), the team could precisely control the flow of a single electron through the transistor in a predictable manner.
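That exponential sensitivity is visible in the textbook WKB estimate for tunneling through a rectangular barrier, T ≈ exp(-2κd) with κ = sqrt(2mΦ)/ħ. The 1 eV barrier height below is a hypothetical illustrative value, not a parameter of the NIST devices:

```python
import math

hbar = 1.0545718e-34   # reduced Planck constant, J s
m_e = 9.10938e-31      # electron mass, kg
phi = 1.0 * 1.602e-19  # hypothetical 1 eV barrier height, J

kappa = math.sqrt(2 * m_e * phi) / hbar  # decay constant, 1/m

def transmission(d_nm):
    """WKB transmission through a rectangular barrier d_nm nanometres wide."""
    return math.exp(-2 * kappa * d_nm * 1e-9)

# Widening the gap by half a nanometre cuts tunneling by roughly two
# orders of magnitude.
print(transmission(1.0) / transmission(1.5))
```

Because the current depends exponentially on the gap, sub-nanometre geometric control translates directly into precise control over electron flow.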

“Because quantum tunneling is so fundamental to any quantum device, including the construction of qubits, the ability to control the flow of one electron at a time is a significant achievement,” Wyrick said. In addition, as engineers pack more and more circuitry on a tiny computer chip and the gap between components continues to shrink, understanding and controlling the effects of quantum tunneling will become even more critical, Richter said.



10 Popular APIs for Credit Cards

Nothing is safe from digital transformation, not even the trusty credit card. Gone are the days when consumers needed to carry a plastic card around to pay via a credit card transaction, and new technologies, including mobile applications, can take credit for that! Developers looking to create applications for using, managing, reporting, and issuing credit cards need the proper Application Programming Interfaces, or APIs, to get the job done.

The best place to find these APIs is in the Credit Cards category on ProgrammableWeb. Developers can discover APIs to implement cardless credit card payments, implement payment installments, manage fee recovery services, get alerts about unusual purchases, check funds balances, dispute claims, automate ecommerce workflows, share data with partners, monitor digital footprints, test applications with fake card numbers, and so much more.

In this article we detail the ten most popular Credit Cards APIs based on website traffic on ProgrammableWeb.

1. Fake Credit Card Generator API

The Fake Credit Card Generator API allows users to create fake credit card numbers that come with expiration dates and CVV2 numbers. The service is designed to allow people to retain their anonymity online, as well as to use the fake numbers in programming test environments without risking charges to an actual card number. Developers must request access to this API.
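Numbers that look realistic generally have to pass the Luhn checksum that real card numbers satisfy (whether this particular service enforces it is an assumption). A minimal validator, checked against a widely published Visa test number:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(c) for c in number]
    # Double every second digit from the right; subtract 9 if it exceeds 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

print(luhn_valid("4111111111111111"))  # True: a well-known test number
print(luhn_valid("4111111111111112"))  # False: checksum broken
```

Running generated numbers through a check like this is a quick way to confirm a test fixture will not be rejected by payment-form validation.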

2. Chase Paymentech APIs

Chase Paymentech is an online payment system that provides credit card processing and online merchant services. The Chase Paymentech API enables Card-Not-Present (CNP) payment services for businesses of all sizes. For companies that sell abroad, customized international processing solutions and guidance on accepting credit card payments overseas are offered. Payment processing and expansion help is offered for small or growing merchants. The Chase Paymentech payment system is accessible to registered users of the Chase / J.P. Morgan developer center.

3. Mastercard Payment Gateway Services API

Mastercard Payment Gateway Services enables customers to accept and process transactions across ecommerce, mobile commerce, and cardholder present channels such as mail order, call center, or SMS, on any device. The API features JSON responses for managing browser payments, merchants, orders including currency and refunds, and transactions.

4. CreditAPI

The CreditAPI API allows users to access consumer credit data from Experian, Equifax, and TransUnion. The API merges credit data obtained from the three credit repositories to produce a unified view of a borrower’s credit record. Users can also process their credit data, and the API will generate specific actions the borrower can take to improve their credit score. The API uses HTTP calls, and responses are formatted in XML, TEXT, PDF, HTML, and more.

5. iZettle API

iZettle is a mobile, tablet, and website credit card payment application available internationally. The iZettle API allows developers to access and integrate the functionality of iZettle with other applications and to create new applications. Some example API methods include managing account information, processing payments, and retrieving payment information. iZettle is under the supervision of the Swedish Financial Supervisory Authority and was acquired by PayPal in 2018.

6. Square API

Square helps consumers to pay with credit cards in a mobile environment. The Square Connect API enables point-of-sale transactions for users with iPad, iPhone, and Android devices. The Square Connect API provides HTTP endpoints to retrieve reports for users’ payments, refunds, and settlements. It also allows developers to manage a merchant’s items.

Square API enables developers to create a custom online payment workflow. Image: Square

7. Intuit QuickBooks Payments API

The QuickBooks Payments API lets developers use ecommerce websites to process and manage credit card payments. It can also be used with QuickBooks Online to record transactions. Intuit provides developer-centric resources including free SDKs, a developer sandbox, and on-demand support.

8. Capital One Customer Transactions API

The Capital One Customer Transactions API provides tools for developers to give customers access to view their Capital One credit card and bank account transaction data within an application. The API features protected login, customer controls for accessible accounts, and on-demand data, and can be used in mobile, desktop, and web applications.

9. Credit Reporting Services API

Credit Reporting Services (CRS) is a credit reporting agency certified as a re-seller for Experian, TransUnion, and Equifax. The company offers services for industries requiring credit data. The company’s XML Credit Report API is available for providing credit report data.

10. SumUp API

SumUp is a service that allows users to accept and process credit card payments via mobile devices with the SumUp application. The SumUp API allows developers to access and integrate the functionality of SumUp with other applications. Example API methods include managing account information and accepting and processing payments.

Check out more than 275 APIs, 280 SDKs, and 380 Source Code Samples in the Credit Cards category.



Ship noise leaves crabs too stressed to hide from danger

The ocean is getting too loud even for crabs. Normally, shore crabs (Carcinus maenas) can slowly change their shell color to blend in with the rocky shores in which they live, but recent findings show that prolonged exposure to the sounds of ships weakens their camouflaging powers and leaves them more open to attack. The work, appearing March 9 in the journal Current Biology, illustrates how human-made undersea noise can turn common shore crabs into sitting ducks for potential predators.

“Prior work had shown that ship noise can be stressful for shore crabs, so in this study, we wanted to address how that stress might affect behaviors they rely on for survival,” says first author Emily Carter, a graduate of the University of Exeter. Unlike frogs or bats, which use sound to communicate or hunt, crabs don’t primarily use sound to interact with each other. However, this study demonstrates that noise pollution can still affect important shore crab survival behaviors, like the ability to camouflage and quickly respond to danger.

Carter placed dark-shelled juvenile shore crabs into white tanks. Within the tanks, crabs were exposed to the underwater sounds of a cruise ship, container ship, and oil tanker. As a control, other crabs listened to natural water sounds — played either quietly, or loudly at an amplitude similar to the ship noise. Over 8 weeks, the crabs exposed to ship noise lightened their color to match their tanks only half as much as those that heard ambient water alone (both quiet and loud). Carter believes this reduced color change demonstrates the distinct effect of ship noise pollution on crab camouflage.

“Color change in shore crabs is a slow, energetically costly process that’s controlled by hormones that activate specialized pigment cells across their shell,” says Carter. “Stress consumes energy and disrupts hormone balance, so we believe that the stress caused by ship noise either drains the crabs of the energy required to change color properly or disrupts the balance of hormones necessary to make that change.” The crabs also grew and molted much more slowly, showing that ship noise impacts multiple aspects of shore crab physiology.

What’s more, when crabs were subjected to a simulated shore bird attack, those that heard ship noise didn’t run and hide as they normally would. “About half of the crabs exposed to ship noise did not respond to the attack at all, and the ones that did were slow to hide themselves,” says Carter. “Similar to how people have trouble concentrating when stressed, the nature of their response indicates that they couldn’t process what was happening, as if that awareness and decision-making ability just wasn’t there.”

In noise pollution research, the sounds of ships and other forms of human-made noise are typically studied for their effects on animals who directly use sound. Here, Carter, co-author Tom Tregenza, professor of evolutionary ecology at the University of Exeter, and senior author Martin Stevens, professor of sensory and evolutionary ecology at the University of Exeter, show that the noise pollution field should also consider behaviors based on their importance to survival rather than whether they have a direct link with noise.

“This work shows how processes like color change, which are not directly linked to acoustics, can still be affected by noise and how even animals like crabs are impacted by noise pollution — not just species that actively use sound, such as many fish or mammals,” says Stevens.

To expand upon this research, Stevens’ lab is investigating how multiple stressors, including that of noise pollution and warming oceans, could work synergistically to disrupt the coloration and behavior of marine organisms.

Story Source:

Materials provided by Cell Press. Note: Content may be edited for style and length.

Go to Source


Showing robots how to do your chores

Training interactive robots may one day be an easy job for everyone, even those without programming expertise. Roboticists are developing automated robots that can learn new tasks solely by observing humans. At home, you might someday show a domestic robot how to do routine chores. In the workplace, you could train robots like new employees, showing them how to perform many duties.

Making progress on that vision, MIT researchers have designed a system that lets these types of robots learn complicated tasks that would otherwise stymie them with too many confusing rules. One such task is setting a dinner table under certain conditions.

At its core, the researchers’ “Planning with Uncertain Specifications” (PUnS) system gives robots the humanlike planning ability to simultaneously weigh many ambiguous — and potentially contradictory — requirements to reach an end goal. In doing so, the system always chooses the most likely action to take, based on a “belief” about some probable specifications for the task it is supposed to perform.

In their work, the researchers compiled a dataset with information about how eight objects — a mug, glass, spoon, fork, knife, dinner plate, small plate, and bowl — could be placed on a table in various configurations. A robotic arm first observed randomly selected human demonstrations of setting the table with the objects. Then, the researchers tasked the arm with automatically setting a table in a specific configuration, in real-world experiments and in simulation, based on what it had seen.

To succeed, the robot had to weigh many possible placement orderings, even when items were purposely removed, stacked, or hidden. Normally, all that would confuse robots too much. But the researchers’ robot made no mistakes over several real-world experiments, and only a handful of mistakes over tens of thousands of simulated test runs.

“The vision is to put programming in the hands of domain experts, who can program robots through intuitive ways, rather than describing orders to an engineer to add to their code,” says first author Ankit Shah, a graduate student in the Department of Aeronautics and Astronautics (AeroAstro) and the Interactive Robotics Group, who emphasizes that their work is just one step in fulfilling that vision. “That way, robots won’t have to perform preprogrammed tasks anymore. Factory workers can teach a robot to do multiple complex assembly tasks. Domestic robots can learn how to stack cabinets, load the dishwasher, or set the table from people at home.”

Joining Shah on the paper are AeroAstro and Interactive Robotics Group graduate student Shen Li and Interactive Robotics Group leader Julie Shah, an associate professor in AeroAstro and the Computer Science and Artificial Intelligence Laboratory.

Bots hedging bets

Robots are fine planners in tasks with clear “specifications,” which describe what the robot needs to accomplish given its actions, environment, and end goal. Learning to set a table by observing demonstrations, however, is full of uncertain specifications. Items must be placed in certain spots, depending on the menu and where guests are seated, and in certain orders, depending on an item’s immediate availability or social conventions. Present approaches to planning are not capable of dealing with such uncertain specifications.

A popular approach to planning is “reinforcement learning,” a trial-and-error machine-learning technique that rewards and penalizes a robot for its actions as it works to complete a task. But for tasks with uncertain specifications, it’s difficult to define clear rewards and penalties. In short, robots never fully learn right from wrong.

The researchers’ PUnS system instead enables a robot to hold a “belief” over a range of possible specifications. The belief itself can then be used to dish out rewards and penalties. “The robot is essentially hedging its bets in terms of what’s intended in a task, and takes actions that satisfy its belief, instead of us giving it a clear specification,” Ankit Shah says.

The system is built on “linear temporal logic” (LTL), an expressive language that enables robotic reasoning about current and future outcomes. The researchers defined templates in LTL that model various time-based conditions, such as what must happen now, must eventually happen, and must happen until something else occurs. The robot’s observations of 30 human demonstrations for setting the table yielded a probability distribution over 25 different LTL formulas. Each formula encoded a slightly different preference — or specification — for setting the table. That probability distribution becomes its belief.
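A minimal sketch of that idea, with the LTL formulas stood in for by plain Python predicates and made-up probabilities (none of these rules or numbers come from the paper):

```python
# Sketch of a PUnS-style "belief": a probability distribution over candidate
# task specifications. Each formula here is a plain Python predicate over the
# list of items placed so far; the real system uses linear temporal logic.
candidate_specs = [
    # (probability, predicate: is this table state acceptable?)
    (0.5, lambda placed: "fork" in placed and "knife" in placed),
    (0.3, lambda placed: placed[:1] == ["dinner plate"]),  # plate goes first
    (0.2, lambda placed: "mug" in placed),
]

def belief_reward(placed):
    """Expected reward of a table state: every satisfied specification
    contributes its probability mass, so the robot 'hedges its bets'
    instead of committing to a single interpretation of the task."""
    return sum(p for p, spec in candidate_specs if spec(placed))

# A state satisfying the two most likely specs outscores one that
# satisfies only the least likely spec.
```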

“Each formula encodes something different, but when the robot considers various combinations of all the templates, and tries to satisfy everything together, it ends up doing the right thing eventually,” Ankit Shah says.

Following criteria

The researchers also developed several criteria that guide the robot toward satisfying the entire belief over those candidate formulas. One, for instance, satisfies the most likely formula, discarding everything apart from the template with the highest probability. Others satisfy the largest number of unique formulas without considering their overall probability, or satisfy the set of formulas representing the highest total probability. A fourth simply minimizes error, so the system ignores formulas with a high probability of failure.
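The four criteria can be sketched over a toy belief as follows; the probabilities, formula names, and thresholds are invented for illustration and are not the paper's definitions:

```python
# Toy belief: (probability, formula_name) pairs standing in for LTL formulas.
belief = [(0.5, "phi_A"), (0.3, "phi_B"), (0.15, "phi_C"), (0.05, "phi_D")]

def most_likely(belief):
    """Keep only the single highest-probability formula."""
    return [max(belief)[1]]

def all_unique(belief):
    """Try to satisfy every candidate formula, ignoring probabilities."""
    return [name for _, name in belief]

def top_mass(belief, mass=0.9):
    """Keep the fewest formulas whose probabilities sum past a threshold."""
    chosen, total = [], 0.0
    for p, name in sorted(belief, reverse=True):
        chosen.append(name)
        total += p
        if total >= mass:
            break
    return chosen

def min_risk(belief, max_fail=0.1):
    """Drop formulas whose probability is too low to be worth pursuing."""
    return [name for p, name in belief if p >= max_fail]
```

Each function trades flexibility against risk differently, which mirrors the designer's choice described below.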

Designers can choose any one of the four criteria to preset before training and testing. Each has its own tradeoff between flexibility and risk aversion. The choice of criteria depends entirely on the task. In safety-critical situations, for instance, a designer may choose to limit the possibility of failure. But where the consequences of failure are not as severe, designers can give robots greater flexibility to try different approaches.

With the criteria in place, the researchers developed an algorithm to convert the robot’s belief — the probability distribution pointing to the desired formula — into an equivalent reinforcement learning problem. This model will ping the robot with a reward or penalty for an action it takes, based on the specification it’s decided to follow.

In simulations asking the robot to set the table in different configurations, it made only six mistakes out of 20,000 tries. In real-world demonstrations, it showed behavior similar to how a human would perform the task. If a fork, for instance, wasn’t initially visible, the robot would finish setting the rest of the table without it. Then, when the fork was revealed, it would set the fork in the proper place. “That’s where flexibility is very important,” Shah says. “Otherwise it would get stuck when it expects to place a fork and not finish the rest of table setup.”

Next, the researchers hope to modify the system to help robots change their behavior based on verbal instructions, corrections, or a user’s assessment of the robot’s performance. “Say a person demonstrates to a robot how to set a table at only one spot. The person may say, ‘do the same thing for all other spots,’ or, ‘place the knife before the fork here instead,'” Shah says. “We want to develop methods for the system to naturally adapt to handle those verbal commands, without needing additional demonstrations.”

Go to Source


Metal-organic frameworks can separate gases despite the presence of water

Metal-organic frameworks (MOFs) are promising materials for inexpensive and less energy-intensive gas separation even in the presence of impurities such as water.

Experimental analyses of the performance of MOFs for the separation of propane and propene under real-world conditions revealed that the theory most commonly used to predict selectivity does not yield accurate estimates, and that water as an impurity does not have a detrimental effect on the material’s performance.

Short-chain hydrocarbons are produced in mixtures after the treatment of crude oil in refineries and need to be separated in order to be industrially useful. For example, propane is used as a fuel and propene as a raw material for chemical syntheses such as the production of polymers. However, the separation process usually requires high temperatures and pressures, and removing other impurities such as water makes the process additionally costly and energy-consuming.

The structure of the studied MOF offers a long-lived, adaptable, and, most importantly, efficient separation alternative at ambient conditions. It builds on the fact that unsaturated molecules such as propene can be complexed with the material’s exposed metal atoms, while saturated ones such as propane fail to do so. While research has focused on developing different metal-organic frameworks for different separation processes, the feasibility of using these materials in industrial-scale applications is commonly gauged only by relying on a theory that makes many idealizing assumptions about both the material and the purity of the gases. Thus, it has not been clear whether these predictions hold under more complicated but also more realistic conditions.
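The figure of merit at stake, adsorption selectivity, can be sketched as a ratio of ratios. The uptake values below are made up for illustration; real numbers come from multi-component adsorption measurements:

```python
# Illustrative adsorption-selectivity calculation for a binary mixture.
def selectivity(q_propene, q_propane, y_propene, y_propane):
    """S = (q1/q2) / (y1/y2): the adsorbed-phase ratio divided by the
    gas-phase ratio. S > 1 means the MOF preferentially takes up propene."""
    return (q_propene / q_propane) / (y_propene / y_propane)

# Equimolar feed (y1 = y2 = 0.5); uptakes in mmol/g (made-up numbers):
S = selectivity(q_propene=2.4, q_propane=0.3, y_propene=0.5, y_propane=0.5)
# With an equimolar feed, S reduces to the uptake ratio.
```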

A team of Hokkaido University researchers led by Professor Shin-ichiro Noro, in collaboration with Professor Roland A. Fischer’s group at the Technical University of Munich, conducted a series of measurements on the performance of a prototypical MOF to ascertain the material’s real-world selectivity, for both completely dry frameworks and ones pre-exposed to water.

Their results, recently published in ACS Applied Materials & Interfaces, show that the predicted selectivities of the material are too high compared to the real-world results. The measurements also demonstrated that water does not drastically decrease the selectivity, although it does reduce the material’s capacity to adsorb gas. The team then performed quantum-chemical computations to understand why, and realized that the water molecules themselves offer new binding sites to unsaturated hydrocarbons such as propene (but not propane), thus preserving the material’s functionality.

The researchers state: “We showed the power of multi-component adsorption experiments to analyze the feasibility of using an MOF system.” They thus want to raise awareness of the shortcomings of commonly used theories and motivate other groups to also use a combination of different real-world measurements.

Story Source:

Materials provided by Hokkaido University. Note: Content may be edited for style and length.

Go to Source


Going super small to get super strong metals

You can’t see them, but most of the metals around you — coins, silverware, even the steel beams holding up buildings and overpasses — are made up of tiny metal grains. Under a powerful enough microscope, you can see interlocking crystals that look like a granite countertop.

It’s long been known by materials scientists that metals get stronger as the grains making up the metal get smaller — up to a point. If the grains are smaller than 10 nanometers in diameter, the materials are weaker because, it was thought, the grains slide past each other like sand sliding down a dune. The strength of metals had a limit.

But experiments led by former University of Utah postdoctoral scholar Xiaoling Zhou, now at Princeton University, associate professor of geology Lowell Miyagi, and Bin Chen at the Center for High Pressure Science and Technology Advanced Research in Shanghai, China, show that that’s not always the case — in samples of nickel with grain diameters as small as 3 nanometers, and under high pressures, the strength of the samples continued to increase with smaller grain sizes.

The result, Zhou and Miyagi say, is a new understanding of how individual atoms of metal grains interact with each other, as well as a way to use those physics to achieve super-strong metals. Their study, carried out with colleagues at the University of California, Berkeley and at universities in China, is published in Nature.

“Our results suggest a possible strategy for making ultrastrong metals,” Zhou says. “In the past, researchers believed the strongest grain size was around 10-15 nanometers. But now we found that we could make stronger metals at below 10 nanometers.”

Pushing past Hall-Petch

For most metallic objects, Miyagi says, the sizes of the metal grains are on the order of a few to a few hundred micrometers — about the diameter of a human hair. “High end cutlery often will have a finer, and more homogeneous, grain structure which can allow you to get a better edge,” he says.

The previously understood relationship between metal strength and grain size is called the Hall-Petch relationship: metal strength increased as grain size decreased, down to a limit of 10-15 nanometers. That’s a diameter of only about four to six strands of DNA. Grain sizes below that limit just weren’t as strong. So to maximize strength, metallurgists would aim for the smallest effective grain sizes.
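The Hall-Petch relationship is commonly written as sigma = sigma_0 + k / sqrt(d): yield strength rises as grain diameter d shrinks. A quick sketch with placeholder constants (these are not measured values for nickel):

```python
import math

# Hall-Petch relationship: strength grows as grain diameter shrinks.
# sigma0 (friction stress) and k (strengthening coefficient) below are
# illustrative placeholders, not experimental values.
def hall_petch(d_nm, sigma0=0.2, k=2.0):
    """Yield strength (arbitrary GPa-scale units) at grain diameter d_nm."""
    return sigma0 + k / math.sqrt(d_nm)

# Strength keeps climbing as grains shrink from 100 nm toward 10 nm;
# classically the trend was thought to reverse below ~10-15 nm, which is
# the expectation the experiments described here overturn.
strengths = [hall_petch(d) for d in (100, 50, 20, 10)]
```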

“Grain size refinement is a good approach to improve strength,” Zhou says. “So it was quite frustrating, in the past, to find this grain size refinement approach no longer works below a critical grain size.”

The explanation for the weakening below 10 nanometers had to do with the way grain surfaces interacted. The surfaces of grains have a different atomic structure than the interiors, Miyagi says. As long as the grains are held together by friction, the metal retains its strength. But at small grain sizes, it was thought, the grains would simply slide past each other under strain, leading to a weak metal.

Technical limitations previously prevented direct experiments on nanograins, though, limiting understanding of how nanoscale grains behaved and whether there may yet be untapped strength below the Hall-Petch limit. “So we designed our study to measure the strength of nanometals,” Zhou says.

Under pressure

The researchers tested samples of nickel, a material that’s available in a wide range of nanograin sizes, down to three nanometers. Their experiments involved placing samples of various grain sizes under intense pressures in a diamond anvil cell and using x-ray diffraction to watch what was happening at the nanoscale in each sample.

“If you’ve ever played around with a spring, you’ve probably pulled on it hard enough to ruin it so that it doesn’t do what it’s supposed to do,” Miyagi says. “That’s basically what we’re measuring here; how hard we can push on this nickel until we would deform it past the point of it being able to recover.”

Strength continued to increase all the way down to the smallest grain size available. The 3 nm sample withstood a pressure of 4.2 gigapascals (roughly the pressure of ten 10,000-pound elephants balanced on a single high heel) before deforming irreversibly. That’s ten times stronger than nickel with a commercial-grade grain size.
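The elephant comparison holds up to a rough back-of-the-envelope check, assuming a heel contact area of about one square centimeter:

```python
# Sanity check: ten 10,000-pound elephants on a ~1 cm^2 heel tip.
# The heel area is an assumption; everything else is unit conversion.
LB_TO_KG = 0.4536
g = 9.81                                  # gravitational acceleration, m/s^2
force_n = 10 * 10_000 * LB_TO_KG * g      # total weight in newtons
heel_area_m2 = 1e-4                       # 1 cm^2 expressed in m^2
pressure_gpa = force_n / heel_area_m2 / 1e9
# This lands near 4.5 GPa, the same order as the 4.2 GPa in the experiment.
```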

It’s not that the Hall-Petch relationship broke down, Miyagi says, but that the way the grains interacted was different under the experimental conditions. The high pressure likely overcame the grain sliding effects.

“If you push two grains together really hard,” he says, “it’s hard for them to slide past each other because the friction between grains becomes large, and you can suppress these grain boundary sliding mechanisms that turn out to be responsible for this weakening.”

When grain boundary sliding was suppressed at grain sizes below 20 nm, the researchers observed a new atomic-scale deformation mechanism that resulted in extreme strengthening in the finest-grained samples.

Ultrastrong possibilities

Zhou says that one of the advances of this study is in their method to measure the strength of materials at the nanoscale in a way that hasn’t been done before.

Miyagi says another advance is a new way to think about strengthening metals — by engineering their grain surfaces to suppress grain sliding.

“We don’t have many applications, industrially, of things where the pressures are as high as in these experiments, but by showing pressure is one way of suppressing grain boundary deformation we can think about other strategies to suppress it, maybe using complicated microstructures where you have grain shapes that inhibit sliding of grains past each other.”

Go to Source