Zooming in on dark matter

Cosmologists have zoomed in on the smallest clumps of dark matter in a virtual universe — which could help us to find the real thing in space.

An international team of researchers, including Durham University, UK, used supercomputers in Europe and China to focus on a typical region of a computer-generated universe.

The zoom they were able to achieve is the equivalent of being able to see a flea on the surface of the Moon.

This allowed them to make detailed pictures and analyses of hundreds of virtual dark matter clumps (or haloes) from the very largest to the tiniest.

Dark matter particles can collide with dark matter anti-particles near the centre of haloes where, according to some theories, they are converted into a burst of energetic gamma-ray radiation.

Their findings, published in the journal Nature, could mean that these very small haloes could be identified in future observations by the radiation they are thought to give out.

Co-author Professor Carlos Frenk, Ogden Professor of Fundamental Physics at the Institute for Computational Cosmology, at Durham University, UK, said: “By zooming in on these relatively tiny dark matter haloes we can calculate the amount of radiation expected to come from different sized haloes.

“Most of this radiation would be emitted by dark matter haloes too small to contain stars and future gamma-ray observatories might be able to detect these emissions, making these small objects individually or collectively ‘visible’.

“This would confirm the hypothesised nature of the dark matter, which may not be entirely dark after all.”

Most of the matter in the universe is dark (apart from the gamma radiation it may emit in exceptional circumstances) and completely different in nature from the matter that makes up stars, planets and people.

The universe is made of approximately 27 per cent dark matter with the rest largely consisting of the equally mysterious dark energy. Normal matter, such as planets and stars, makes up a relatively small five per cent of the universe.

Galaxies formed and grew when gas cooled and condensed at the centre of enormous clumps of this dark matter — so-called dark matter haloes.

Astronomers can infer the structure of large dark matter haloes from the properties of the galaxies and gas within them.

The biggest haloes contain huge collections of hundreds of bright galaxies, called galaxy clusters, weighing 1,000 trillion times more than our Sun.

However, scientists have no direct information about smaller dark matter haloes that are too tiny to contain a galaxy. These can only be studied by simulating the evolution of the Universe in a large supercomputer.

The smallest are thought to have the same mass as the Earth, according to the current popular scientific theories about dark matter that underlie the new research.

The simulations were carried out using the Cosmology Machine supercomputer, part of the DiRAC High-Performance Computing facility in Durham, funded by the Science and Technology Facilities Council (STFC), and computers at the Chinese Academy of Sciences.

By zooming in on the virtual universe in such microscopic detail, the researchers were able to study the structure of dark matter haloes ranging in mass from that of the Earth to that of a big galaxy cluster.

Surprisingly, they found that haloes of all sizes have a very similar internal structure and are extremely dense at the centre, becoming increasingly spread out, with smaller clumps orbiting in their outer regions.

The researchers said that without a scale for reference it was almost impossible to tell an image of a dark matter halo of a massive galaxy from one of a halo with a mass a fraction of the Sun’s.

Co-author Professor Simon White, of the Max Planck Institute of Astrophysics, Germany, said: “We expect that small dark matter haloes would be extremely numerous, containing a substantial fraction of all the dark matter in the universe, but they would remain mostly dark throughout cosmic history because stars and galaxies grow only in haloes more than a million times as massive as the Sun.

“Our research sheds light on these small haloes as we seek to learn more about what dark matter is and the role it plays in the evolution of the universe.”

The research team, led by the National Astronomical Observatories of the Chinese Academy of Sciences, and including Durham University, UK, the Max Planck Institute for Astrophysics, Germany, and the Center for Astrophysics at Harvard, USA, took five years to develop, test and carry out their cosmic zoom.

The research was funded by the STFC, the European Research Council, the Chinese Academy of Sciences, the Max Planck Society and Harvard University.

A ‘bang’ in LIGO and Virgo detectors signals most massive gravitational-wave source yet

For all its vast emptiness, the universe is humming with activity in the form of gravitational waves. Produced by extreme astrophysical phenomena, these reverberations ripple forth and shake the fabric of space-time, like the clang of a cosmic bell.

Now researchers have detected a signal from what may be the most massive black hole merger yet observed in gravitational waves. The product of the merger is the first clear detection of an “intermediate-mass” black hole, with a mass between 100 and 1,000 times that of the sun.

They detected the signal, which they have labeled GW190521, on May 21, 2019, with the National Science Foundation’s Laser Interferometer Gravitational-wave Observatory (LIGO), a pair of identical, 4-kilometer-long interferometers in the United States; and Virgo, a 3-kilometer-long detector in Italy.

The signal, resembling about four short wiggles, is extremely brief in duration, lasting less than one-tenth of a second. From what the researchers can tell, GW190521 was generated by a source that is roughly 5 gigaparsecs away, when the universe was about half its age, making it one of the most distant gravitational-wave sources detected so far.

As for what produced this signal, based on a powerful suite of state-of-the-art computational and modeling tools, scientists think that GW190521 was most likely generated by a binary black hole merger with unusual properties.

Almost every confirmed gravitational-wave signal to date has been from a binary merger, either between two black holes or two neutron stars. This newest merger appears to be the most massive yet, involving two inspiraling black holes with masses about 85 and 66 times the mass of the sun.

The LIGO-Virgo team has also measured each black hole’s spin and discovered that as the black holes were circling ever closer together, they could have been spinning about their own axes, at angles that were out of alignment with the axis of their orbit. The black holes’ misaligned spins likely caused their orbits to wobble, or “precess,” as the two Goliaths spiraled toward each other.

The new signal likely represents the instant that the two black holes merged. The merger created an even more massive black hole, of about 142 solar masses, and released an enormous amount of energy, equivalent to around 8 solar masses, spread across the universe in the form of gravitational waves.
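
As a quick back-of-the-envelope check on those numbers, the energy carried away follows from E = Δm c². The minimal Python sketch below uses only the rounded masses quoted above; it is not the LIGO-Virgo analysis itself, and the published masses carry uncertainties.

```python
# Back-of-the-envelope estimate of the energy GW190521 radiated as
# gravitational waves, using the rounded masses quoted in the article.
# Illustrative only: the simple difference of rounded values (~9 solar
# masses) is close to the ~8 solar masses quoted, with the gap reflecting
# rounding and measurement uncertainties.

M_SUN = 1.989e30      # solar mass in kg
C = 2.998e8           # speed of light in m/s

m1, m2 = 85.0, 66.0   # progenitor black hole masses (solar masses)
m_final = 142.0       # remnant black hole mass (solar masses)

radiated_solar_masses = (m1 + m2) - m_final
energy_joules = radiated_solar_masses * M_SUN * C**2

print(f"Mass radiated as gravitational waves: ~{radiated_solar_masses:.0f} solar masses")
print(f"Equivalent energy: ~{energy_joules:.1e} J")
```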

“This doesn’t look much like a chirp, which is what we typically detect,” says Virgo member Nelson Christensen, a researcher at the French National Centre for Scientific Research (CNRS), comparing the signal to LIGO’s first detection of gravitational waves in 2015. “This is more like something that goes ‘bang,’ and it’s the most massive signal LIGO and Virgo have seen.”

The international team of scientists, who make up the LIGO Scientific Collaboration (LSC) and the Virgo Collaboration, have reported their findings in two papers published today. One, appearing in Physical Review Letters, details the discovery, and the other, in The Astrophysical Journal Letters, discusses the signal’s physical properties and astrophysical implications.

“LIGO once again surprises us not just with the detection of black holes in sizes that are difficult to explain, but doing it using techniques that were not designed specifically for stellar mergers,” says Pedro Marronetti, program director for gravitational physics at the National Science Foundation. “This is of tremendous importance since it showcases the instrument’s ability to detect signals from completely unforeseen astrophysical events. LIGO shows that it can also observe the unexpected.”

In the mass gap

The uniquely large masses of the two inspiraling black holes, as well as the final black hole, raise a slew of questions regarding their formation.

All of the black holes observed to date fit within either of two categories: stellar-mass black holes, which measure from a few solar masses up to tens of solar masses and are thought to form when massive stars die; or supermassive black holes, such as the one at the center of the Milky Way galaxy, that are hundreds of thousands to billions of times more massive than our sun.

However, the final 142-solar-mass black hole produced by the GW190521 merger lies within an intermediate mass range between stellar-mass and supermassive black holes — the first of its kind ever detected.

The two progenitor black holes that produced the final black hole also seem to be unique in their size. They’re so massive that scientists suspect one or both of them may not have formed from a collapsing star, as most stellar-mass black holes do.

According to the physics of stellar evolution, outward pressure from the photons and gas in a star’s core supports it against the force of gravity pushing inward, so that the star is stable, like the sun. After the core of a massive star fuses nuclei as heavy as iron, it can no longer produce enough pressure to support the outer layers. When this outward pressure is less than gravity, the star collapses under its own weight in an explosion called a core-collapse supernova, which can leave behind a black hole.

This process can explain how stars as massive as 130 solar masses can produce black holes that are up to 65 solar masses. But for heavier stars, a phenomenon known as “pair instability” is thought to kick in. When the core’s photons become extremely energetic, they can morph into an electron and antielectron pair. These pairs generate less pressure than photons, causing the star to become unstable against gravitational collapse, and the resulting explosion is strong enough to leave nothing behind. Even more massive stars, above 200 solar masses, would eventually collapse directly into a black hole of at least 120 solar masses. A collapsing star, then, should not be able to produce a black hole between approximately 65 and 120 solar masses — a range that is known as the “pair instability mass gap.”

But now, the heavier of the two black holes that produced the GW190521 signal, at 85 solar masses, is the first so far detected within the pair instability mass gap.

“The fact that we’re seeing a black hole in this mass gap will make a lot of astrophysicists scratch their heads and try to figure out how these black holes were made,” says Christensen, who is the director of the Artemis Laboratory at the Nice Observatory in France.

One possibility, which the researchers consider in their second paper, is of a hierarchical merger, in which the two progenitor black holes themselves may have formed from the merging of two smaller black holes, before migrating together and eventually merging.

“This event opens more questions than it provides answers,” says LIGO member Alan Weinstein, professor of physics at Caltech. “From the perspective of discovery and physics, it’s a very exciting thing.”

“Something unexpected”

There are many remaining questions regarding GW190521.

As LIGO and Virgo detectors listen for gravitational waves passing through Earth, automated searches comb through the incoming data for interesting signals. These searches can use two different methods: algorithms that pick out specific wave patterns in the data that may have been produced by compact binary systems; and more general “burst” searches, which essentially look for anything out of the ordinary.

LIGO member Salvatore Vitale, assistant professor of physics at MIT, likens compact binary searches to “passing a comb through data, that will catch things in a certain spacing,” in contrast to burst searches that are more of a “catch-all” approach.

In the case of GW190521, it was a burst search that picked up the signal slightly more clearly, opening the very small chance that the gravitational waves arose from something other than a binary merger.

“The bar for asserting we’ve discovered something new is very high,” Weinstein says. “So we typically apply Occam’s razor: The simpler solution is the better one, which in this case is a binary black hole.”

But what if something entirely new produced these gravitational waves? It’s a tantalizing prospect, and in their paper the scientists briefly consider other sources in the universe that might have produced the signal they detected. For instance, perhaps the gravitational waves were emitted by a collapsing star in our galaxy. The signal could also be from a cosmic string produced just after the universe inflated in its earliest moments — although neither of these exotic possibilities matches the data as well as a binary merger.

“Since we first turned on LIGO, everything we’ve observed with confidence has been a collision of black holes or neutron stars,” Weinstein says. “This is the one event where our analysis allows the possibility that this event is not such a collision. Although this event is consistent with being from an exceptionally massive binary black hole merger, and alternative explanations are disfavored, it is pushing the boundaries of our confidence. And that potentially makes it extremely exciting. Because we have all been hoping for something new, something unexpected, that could challenge what we’ve learned already. This event has the potential for doing that.”

This research was funded by the U.S. National Science Foundation.

Video: https://www.youtube.com/watch?time_continue=6&v=zRmwtL6lvIM&feature=emb_logo

‘Black dwarf supernova’: Physicist calculates when the last supernova ever will happen

The end of the universe as we know it will not come with a bang. Most stars will very, very slowly fizzle as their temperatures fade to zero.

“It will be a bit of a sad, lonely, cold place,” said theoretical physicist Matt Caplan, who added that no one will be around to witness this long farewell happening in the far, far future. Most believe all will be dark as the universe comes to an end. “It’s known as ‘heat death,’ where the universe will be mostly black holes and burned-out stars,” said Caplan, who imagined a slightly different picture when he calculated how some of these dead stars might change over the eons.

Punctuating the darkness could be silent fireworks — explosions of the remnants of stars that were never supposed to explode. New theoretical work by Caplan, an assistant professor of physics at Illinois State University, finds that many white dwarfs may explode as supernovae in the distant far future, long after everything else in the universe has died and gone quiet.

In the universe now, the dramatic death of massive stars in supernova explosions comes when internal nuclear reactions produce iron in the core. Iron cannot be burnt by stars — it accumulates like a poison, triggering the star’s collapse and creating a supernova. But smaller stars tend to die with a bit more dignity, shrinking and becoming white dwarfs at the end of their lives.

“Stars less than about 10 times the mass of the sun do not have the gravity or density to produce iron in their cores the way massive stars do, so they can’t explode in a supernova right now,” said Caplan. “As white dwarfs cool down over the next few trillion years, they’ll grow dimmer, eventually freeze solid, and become ‘black dwarf’ stars that no longer shine.” Like white dwarfs today, they’ll be made mostly of light elements like carbon and oxygen and will be the size of the Earth but contain about as much mass as the sun, their insides squeezed to densities millions of times greater than anything on Earth.

But just because they’re cold doesn’t mean nuclear reactions stop. “Stars shine because of thermonuclear fusion — they’re hot enough to smash small nuclei together to make larger nuclei, which releases energy. White dwarfs are ash, they’re burnt out, but fusion reactions can still happen because of quantum tunneling, only much slower,” Caplan said. “Fusion happens, even at zero temperature, it just takes a really long time.” He noted this is the key for turning black dwarfs into iron and triggering a supernova.

Caplan’s new work, accepted for publication by Monthly Notices of the Royal Astronomical Society, calculates how long these nuclear reactions take to produce iron, and how much iron black dwarfs of different sizes need to explode. He calls his theoretical explosions “black dwarf supernova” and calculates that the first one will occur in about 10 to the 1100th years. “In years, it’s like saying the word ‘trillion’ almost a hundred times. If you wrote it out, it would take up most of a page. It’s mindbogglingly far in the future.”
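
That “almost a hundred times” description can be checked directly: a trillion is 10 to the 12th power, so the only question is how many factors of a trillion fit into the exponent quoted above. A quick sketch of the arithmetic:

```python
# Sanity check on describing 10**1100 years as saying "trillion" almost
# a hundred times. A trillion is 10**12.
exponent = 1100
trillion_exponent = 12

factors_of_a_trillion = exponent / trillion_exponent
print(f"10**{exponent} = (10**{trillion_exponent})**{factors_of_a_trillion:.1f}")
# -> about 91.7, i.e. roughly 92 factors of a trillion multiplied together,
#    which is indeed "almost a hundred".
```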

Of course, not all black dwarfs will explode. “Only the most massive black dwarfs, about 1.2 to 1.4 times the mass of the sun, will blow.” Still, that means as many as 1 percent of all stars that exist today, about a billion trillion stars, can expect to die this way. As for the rest, they’ll remain black dwarfs. “Even with very slow nuclear reactions, our sun still doesn’t have enough mass to ever explode in a supernova, even in the far far future. You could turn the whole sun to iron and it still wouldn’t pop.”

Caplan calculates that the most massive black dwarfs will explode first, followed by progressively less massive stars, until there are no more left to go off after about 10 to the 32,000th years. At that point, the universe may truly be dead and silent. “It’s hard to imagine anything coming after that, black dwarf supernova might be the last interesting thing to happen in the universe. They may be the last supernova ever.” By the time the first black dwarfs explode, the universe will already be unrecognizable. “Galaxies will have dispersed, black holes will have evaporated, and the expansion of the universe will have pulled all remaining objects so far apart that none will ever see any of the others explode. It won’t even be physically possible for light to travel that far.”

Even though he’ll never see one, Caplan remains unbothered. “I became a physicist for one reason. I wanted to think about the big questions: why is the universe here, and how will it end?” When asked what big question comes next, Caplan says, “Maybe we’ll try simulating some black dwarf supernova. If we can’t see them in the sky then at least we can see them on a computer.”

Plato was right. Earth is made, on average, of cubes

Plato, the Greek philosopher who lived in the 5th century B.C.E., believed that the universe was made of five types of matter: earth, air, fire, water, and cosmos. Each was described with a particular geometry, a platonic shape. For earth, that shape was the cube.

Science has steadily moved beyond Plato’s conjectures, looking instead to the atom as the building block of the universe. Yet Plato seems to have been onto something, researchers have found.

In a new paper in the Proceedings of the National Academy of Sciences, a team from the University of Pennsylvania, Budapest University of Technology and Economics, and University of Debrecen uses math, geology, and physics to demonstrate that the average shape of rocks on Earth is a cube.

“Plato is widely recognized as the first person to develop the concept of an atom, the idea that matter is composed of some indivisible component at the smallest scale,” says Douglas Jerolmack, a geophysicist in Penn’s School of Arts & Sciences’ Department of Earth and Environmental Science and the School of Engineering and Applied Science’s Department of Mechanical Engineering and Applied Mechanics. “But that understanding was only conceptual; nothing about our modern understanding of atoms derives from what Plato told us.

“The interesting thing here is that what we find with rock, or earth, is that there is more than a conceptual lineage back to Plato. It turns out that Plato’s conception about the element earth being made up of cubes is, literally, the statistical average model for real earth. And that is just mind-blowing.”

The group’s finding began with geometric models developed by mathematician Gábor Domokos of the Budapest University of Technology and Economics, whose work predicted that natural rocks would fragment into cubic shapes.

“This paper is the result of three years of serious thinking and work, but it comes back to one core idea,” says Domokos. “If you take a three-dimensional polyhedral shape, slice it randomly into two fragments and then slice these fragments again and again, you get a vast number of different polyhedral shapes. But in an average sense, the resulting shape of the fragments is a cube.”

Domokos pulled two Hungarian theoretical physicists into the loop: Ferenc Kun, an expert on fragmentation, and János Török, an expert on statistical and computational models. After discussing the potential of the discovery, Jerolmack says, the Hungarian researchers took their finding to Jerolmack to work together on the geophysical questions; in other words, “How does nature let this happen?”

“When we took this to Doug, he said, ‘This is either a mistake, or this is big,'” Domokos recalls. “We worked backward to understand the physics that results in these shapes.”

Fundamentally, the question they answered is what shapes are created when rocks break into pieces. Remarkably, they found that the core mathematical conjecture unites geological processes not only on Earth but around the solar system as well.

“Fragmentation is this ubiquitous process that is grinding down planetary materials,” Jerolmack says. “The solar system is littered with ice and rocks that are ceaselessly smashing apart. This work gives us a signature of that process that we’ve never seen before.”

Part of this understanding is that the components that break out of a formerly solid object must fit together without any gaps, like a dropped dish on the verge of breaking. As it turns out, the only one of the so-called platonic forms — polyhedra with sides of equal length — that fits together without gaps is the cube.

“One thing we’ve speculated in our group is that, quite possibly Plato looked at a rock outcrop and after processing or analyzing the image subconsciously in his mind, he conjectured that the average shape is something like a cube,” Jerolmack says.

“Plato was very sensitive to geometry,” Domokos adds. According to lore, the phrase “Let no one ignorant of geometry enter” was engraved at the door to Plato’s Academy. “His intuitions, backed by his broad thinking about science, may have led him to this idea about cubes,” says Domokos.

To test whether their mathematical models held true in nature, the team measured a wide variety of rocks, hundreds that they collected and thousands more from previously collected datasets. No matter whether the rocks had naturally weathered from a large outcropping or been dynamited out by humans, the team found a good fit to the cubic average.

However, special rock formations exist that appear to break the cubic “rule.” The Giant’s Causeway in Northern Ireland, with its soaring vertical columns, is one example, formed by the unusual process of cooling basalt. These formations, though rare, are still encompassed by the team’s mathematical conception of fragmentation; they are just explained by out-of-the-ordinary processes at work.

“The world is a messy place,” says Jerolmack. “Nine times out of 10, if a rock gets pulled apart or squeezed or sheared — and usually these forces are happening together — you end up with fragments which are, on average, cubic shapes. It’s only if you have a very special stress condition that you get something else. The earth just doesn’t do this often.”

The researchers also explored fragmentation in two dimensions, or on thin surfaces that function as two-dimensional shapes, with a depth that is significantly smaller than the width and length. There, the fracture patterns are different, though the central concept of splitting polygons and arriving at predictable average shapes still holds.

“It turns out in two dimensions you’re about equally likely to get either a rectangle or a hexagon in nature,” Jerolmack says. “They’re not true hexagons, but they’re the statistical equivalent in a geometric sense. You can think of it like paint cracking; a force is acting to pull the paint apart equally from different sides, creating a hexagonal shape when it cracks.”

In nature, examples of these two-dimensional fracture patterns can be found in ice sheets, drying mud, or even the earth’s crust, the depth of which is far outstripped by its lateral extent, allowing it to function as a de facto two-dimensional material. It was previously known that the earth’s crust fractured in this way, but the group’s observations support the idea that the fragmentation pattern results from plate tectonics.

Identifying these patterns in rock may help in predicting phenomena such as rock fall hazards or the likelihood and location of fluid flows, such as oil or water, in rocks.

For the researchers, finding what appears to be a fundamental rule of nature emerging from millennia-old insights has been an intense but satisfying experience.

“There are a lot of sand grains, pebbles, and asteroids out there, and all of them evolve by chipping in a universal manner,” says Domokos, who is also co-inventor of the Gömböc, the first known convex shape with the minimal number — just two — of static balance points. Chipping by collisions gradually eliminates balance points, but shapes stop short of becoming a Gömböc; the latter appears as an unattainable end point of this natural process.

The current result shows that the starting point may be a similarly iconic geometric shape: the cube with its 26 balance points. “The fact that pure geometry provides these brackets for a ubiquitous natural process gives me happiness,” he says.

“When you pick up a rock in nature, it’s not a perfect cube, but each one is a kind of statistical shadow of a cube,” adds Jerolmack. “It calls to mind Plato’s allegory of the cave. He posited an idealized form that was essential for understanding the universe, but all we see are distorted shadows of that perfect form.”

New research of oldest light confirms age of the universe

Just how old is the universe? Astrophysicists have been debating this question for decades. In recent years, new scientific measurements have suggested the universe may be hundreds of millions of years younger than its previously estimated age of approximately 13.8 billion years.

Now new research published in a series of papers by an international team of astrophysicists, including Neelima Sehgal, PhD, from Stony Brook University, suggests the universe is about 13.8 billion years old. Using observations from the Atacama Cosmology Telescope (ACT) in Chile, the team’s findings match the Planck satellite’s measurements of the same ancient light.

The ACT research team is an international collaboration of scientists from 41 institutions in seven countries. The Stony Brook team from the Department of Physics and Astronomy in the College of Arts and Sciences, led by Professor Sehgal, plays an essential role in analyzing the cosmic microwave background (CMB) — the afterglow light from the Big Bang.

“In Stony Brook-led work we are restoring the ‘baby photo’ of the universe to its original condition, eliminating the wear and tear of time and space that distorted the image,” explains Professor Sehgal, a co-author on the papers. “Only by seeing this sharper baby photo or image of the universe, can we more fully understand how our universe was born.”

Obtaining the best image of the infant universe, explains Professor Sehgal, helps scientists better understand the origins of the universe, how we got to where we are on Earth, the galaxies, where we are going, how the universe may end, and when that ending may occur.

The ACT team estimates the age of the universe by measuring its oldest light. Other scientific groups take measurements of galaxies to make universe age estimates.

The new ACT estimate on the age of the universe matches the one provided by the standard model of the universe and measurements of the same light made by the Planck satellite. This adds a fresh twist to an ongoing debate in the astrophysics community, says Simone Aiola, first author of one of the new papers on the findings posted to arXiv.org.

“Now we’ve come up with an answer where Planck and ACT agree,” says Aiola, a researcher at the Flatiron Institute’s Center for Computational Astrophysics in New York City. “It speaks to the fact that these difficult measurements are reliable.”

In 2019, a research team measuring the movements of galaxies calculated that the universe is hundreds of millions of years younger than the Planck team predicted. That discrepancy suggested that a new model for the universe might be needed and sparked concerns that one of the sets of measurements might be incorrect.

The age of the universe also reveals how fast the cosmos is expanding, a number quantified by the Hubble constant. The ACT measurements suggest a Hubble constant of 67.6 kilometers per second per megaparsec. That means an object 1 megaparsec (around 3.26 million light-years) from Earth is moving away from us at 67.6 kilometers per second due to the expansion of the universe. This result agrees almost exactly with the previous estimate of 67.4 kilometers per second per megaparsec by the Planck satellite team, but it’s slower than the 74 kilometers per second per megaparsec inferred from the measurements of galaxies.
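
As a rough illustration of what that number means, the sketch below applies only the simple linear Hubble law (recession velocity proportional to distance) and the order-of-magnitude timescale 1/H0; the 13.8-billion-year age quoted above comes from fitting the full cosmological model, not from 1/H0 alone.

```python
# Illustration of a Hubble constant of 67.6 km/s/Mpc using v = H0 * d.
# The crude timescale 1/H0 is only an order-of-magnitude check on the
# age of the universe, not the value reported by the ACT or Planck teams.

H0 = 67.6                      # km/s per megaparsec
KM_PER_MPC = 3.086e19          # kilometers in one megaparsec
SECONDS_PER_GYR = 3.156e16     # seconds in a billion years

def recession_velocity_km_s(distance_mpc):
    """Recession velocity (km/s) at a given distance in megaparsecs."""
    return H0 * distance_mpc

print(recession_velocity_km_s(1))     # 67.6 km/s at 1 Mpc, as stated above
print(recession_velocity_km_s(100))   # 6,760 km/s at 100 Mpc

hubble_time_gyr = (KM_PER_MPC / H0) / SECONDS_PER_GYR
print(f"1/H0 ≈ {hubble_time_gyr:.1f} billion years")   # ~14.5 Gyr
```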

“I didn’t have a particular preference for any specific value — it was going to be interesting one way or another,” says Steve Choi of Cornell University, first author of another paper posted to arXiv.org. “We find an expansion rate that is right on the estimate by the Planck satellite team. This gives us more confidence in measurements of the universe’s oldest light.”

As ACT continues making observations, astronomers will have an even clearer picture of the CMB and a more exact idea of how long ago the cosmos began. The ACT team will also scour those observations for signs of physics that doesn’t fit the standard cosmological model. Such strange physics could resolve the disagreement between the predictions of the age and expansion rate of the universe arising from the measurements of the CMB and the motions of galaxies.

The ACT research is funded by the National Science Foundation (NSF), and the NSF also funds the work of Professor Sehgal and colleagues at Stony Brook.

Editor’s Note: The papers from the Atacama Cosmology Telescope researchers are available online at: https://act.princeton.edu/publications

How colliding neutron stars could shed light on universal mysteries

An important breakthrough in how we can understand dead star collisions and the expansion of the Universe has been made by an international team, led by the University of East Anglia.

They have discovered an unusual pulsar — one of deep space’s magnetized spinning neutron-star ‘lighthouses’ that emits highly focused radio waves from its magnetic poles.

The newly discovered pulsar (known as PSR J1913+1102) is part of a binary system — which means that it is locked in a fiercely tight orbit with another neutron star.

Neutron stars are the dead stellar remnants of a supernova. They are made up of the most dense matter known — packing hundreds of thousands of times the Earth’s mass into a sphere the size of a city.

In around half a billion years the two neutron stars will collide, releasing astonishing amounts of energy in the form of gravitational waves and light.

But the newly discovered pulsar is unusual because the masses of its two neutron stars are quite different — with one far larger than the other.

This asymmetric system gives scientists confidence that double neutron star mergers will provide vital clues about unsolved mysteries in astrophysics — including a more accurate determination of the expansion rate of the Universe, known as the Hubble constant.

The discovery, published today in the journal Nature, was made using the Arecibo radio telescope in Puerto Rico.

Lead researcher Dr Robert Ferdman, from UEA’s School of Physics, said: “Back in 2017, scientists at the Laser Interferometer Gravitational-Wave Observatory (LIGO) first detected the merger of two neutron stars.

“The event caused gravitational-wave ripples through the fabric of space time, as predicted by Albert Einstein over a century ago.”

Known as GW170817, this spectacular event was also seen with traditional telescopes at observatories around the world, which identified its location in a distant galaxy, 130 million light years from our own Milky Way.

Dr Ferdman said: “It confirmed that the phenomenon of short gamma-ray bursts was due to the merger of two neutron stars. And these are now thought to be the factories that produce most of the heaviest elements in the Universe, such as gold.”

The power released during the fraction of a second when two neutron stars merge is enormous — estimated to be tens of times larger than all stars in the Universe combined.

So the GW170817 event was not surprising. But the enormous amount of matter ejected from the merger, and its brightness, were an unexpected mystery.

Dr Ferdman said: “Most theories about this event assumed that neutron stars locked in binary systems are very similar in mass.

“Our new discovery changes these assumptions. We have uncovered a binary system containing two neutron stars with very different masses.

“These stars will collide and merge in around 470 million years, which seems like a long time, but it is only a small fraction of the age of the Universe.

“Because one neutron star is significantly larger, its gravitational influence will distort the shape of its companion star — stripping away large amounts of matter just before they actually merge, and potentially disrupting it altogether.

“This ‘tidal disruption’ ejects a larger amount of hot material than expected for equal-mass binary systems, resulting in a more powerful emission.

“Although GW170817 can be explained by other theories, we can confirm that a parent system of neutron stars with significantly different masses, similar to the PSR J1913+1102 system, is a very plausible explanation.

“Perhaps more importantly, the discovery highlights that there are many more of these systems out there — making up more than one in 10 merging double neutron star binaries.”

Co-author Dr Paulo Freire from the Max Planck Institute for Radio Astronomy in Bonn, Germany, said: “Such a disruption would allow astrophysicists to gain important new clues about the exotic matter that makes up the interiors of these extreme, dense objects.

“This matter is still a major mystery — it’s so dense that scientists still don’t know what it is actually made of. These densities are far beyond what we can reproduce in Earth-based laboratories.”

The disruption of the lighter neutron star would also enhance the brightness of the material ejected by the merger. This means that along with gravitational-wave detectors such as the US-based LIGO and the Europe-based Virgo detector, scientists will also be able to observe them with conventional telescopes.

Dr Ferdman said: “Excitingly, this may also allow for a completely independent measurement of the Hubble constant — the rate at which the Universe is expanding. The two main methods for doing this are currently at odds with each other, so this is a crucial way to break the deadlock and understand in more detail how the Universe evolved.”

To find giant black holes, start with Jupiter

The revolution in our understanding of the night sky and our place in the universe began when we transitioned from using the naked eye to a telescope in 1609. Four centuries later, scientists are experiencing a similar transition in their knowledge of black holes by searching for gravitational waves.

In the search for previously undetected black holes that are billions of times more massive than the sun, Stephen Taylor, assistant professor of physics and astronomy and former astronomer at NASA’s Jet Propulsion Laboratory (JPL), together with the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) collaboration, has moved the field of research forward by finding the precise location — the center of gravity of our solar system — with which to measure the gravitational waves that signal the existence of these black holes.

The paper describing this advancement, co-authored by Taylor, was published in The Astrophysical Journal in April 2020.

Black holes are regions of pure gravity formed from extremely warped spacetime. Finding the most titanic black holes in the Universe that lurk at the heart of galaxies will help us understand how such galaxies (including our own) have grown and evolved over the billions of years since their formation. These black holes are also unrivaled laboratories for testing fundamental assumptions about physics.

Gravitational waves are ripples in spacetime predicted by Einstein’s general theory of relativity. When black holes orbit each other in pairs, they radiate gravitational waves that deform spacetime, stretching and squeezing space. Gravitational waves were first detected by the Laser Interferometer Gravitational-Wave Observatory (LIGO) in 2015, opening new vistas on the most extreme objects in the universe. Whereas LIGO observes relatively short gravitational waves by looking for changes in the shape of a 4-km long detector, NANOGrav, a National Science Foundation (NSF) Physics Frontiers Center, looks for changes in the shape of our entire galaxy.

Taylor and his team are searching for changes to the arrival rate of regular flashes of radio waves from pulsars. These pulsars are rapidly spinning neutron stars, some going as fast as a kitchen blender. They also send out beams of radio waves, appearing like interstellar lighthouses when these beams sweep over Earth. Over 15 years of data have shown that these pulsars are extremely reliable in their pulse arrival rates, acting as outstanding galactic clocks. Any timing deviations that are correlated across lots of these pulsars could signal the influence of gravitational waves warping our galaxy.

“Using the pulsars we observe across the Milky Way galaxy, we are trying to be like a spider sitting in stillness in the middle of her web,” explains Taylor. “How well we understand the solar system barycenter is critical as we attempt to sense even the smallest tingle to the web.” The solar system barycenter, its center of gravity, is the location where the masses of all planets, moons, and asteroids balance out.

Where is the center of our web, the location of absolute stillness in our solar system? Not in the center of the sun as many might assume; rather, it is closer to the surface of the star. This is due to Jupiter’s mass and our imperfect knowledge of its orbit. It takes 12 years for Jupiter to orbit the sun, just shy of the 15 years that NANOGrav has been collecting data. JPL’s Galileo probe (named for the famed scientist who used a telescope to observe the moons of Jupiter) studied Jupiter between 1995 and 2003, but experienced technical maladies that impacted the quality of the measurements taken during the mission.
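
To see roughly why Jupiter pushes the balance point out near the sun’s surface, here is a minimal two-body sketch that considers only the sun and Jupiter on a circular orbit; the real barycenter also responds to Saturn and the other planets, which is part of why pinning it down precisely is so difficult.

```python
# Rough two-body estimate of the Sun-Jupiter barycenter, ignoring Saturn
# and the other planets and assuming a circular orbit. Illustrative only.

M_SUN = 1.989e30        # kg
M_JUPITER = 1.898e27    # kg
A_JUPITER = 7.785e8     # Sun-Jupiter distance in km (about 5.2 AU)
R_SUN = 6.96e5          # solar radius in km

# Distance of the barycenter from the Sun's center
offset_km = A_JUPITER * M_JUPITER / (M_SUN + M_JUPITER)

print(f"Barycenter offset:     ~{offset_km:,.0f} km")
print(f"Solar radius:          ~{R_SUN:,.0f} km")
print(f"Offset / solar radius: {offset_km / R_SUN:.2f}")
# -> about 740,000 km, roughly 1.07 solar radii: just outside the Sun's
#    surface, consistent with "closer to the surface of the star".
```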

The center of the solar system’s gravity has long been calculated with data from Doppler tracking, which gives an estimate of the location and trajectories of bodies orbiting the sun. “The catch is that errors in the masses and orbits will translate to pulsar-timing artifacts that may well look like gravitational waves,” explains JPL astronomer and co-author Joe Simon.

Taylor and his collaborators were finding that working with existing solar system models to analyze NANOGrav data gave inconsistent results. “We weren’t detecting anything significant in our gravitational wave searches between solar system models, but we were getting large systematic differences in our calculations,” notes JPL astronomer and the paper’s lead author Michele Vallisneri. “Typically, more data delivers a more precise result, but there was always an offset in our calculations.”

The group decided to search for the center of gravity of the solar system at the same time as sleuthing for gravitational waves. The researchers got more robust answers to finding gravitational waves and were able to more accurately localize the center of the solar system’s gravity to within 100 meters. To understand that scale, if the sun were the size of a football field, 100 meters would be the diameter of a strand of hair. “Our precise observation of pulsars scattered across the galaxy has localized ourselves in the cosmos better than we ever could before,” said Taylor. “By finding gravitational waves this way, in addition to other experiments, we gain a more holistic overview of all different kinds of black holes in the Universe.”

As NANOGrav continues to collect ever more abundant and precise pulsar timing data, astronomers are confident that massive black holes will show up soon and unequivocally in the data.

Taylor was partially supported by an appointment to the NASA Postdoctoral Program at JPL. The NANOGrav project receives support from the NSF Physics Frontier Center award #1430284 and this work was supported in part by NSF Grant PHYS-1066293 and by the hospitality of the Aspen Center for Physics. Data for this project were collected using the facilities of the Green Bank Observatory and the Arecibo Observatory.

Story Source:

Materials provided by Vanderbilt University. Original written by Marissa Shapiro. Note: Content may be edited for style and length.

New test of dark energy and expansion from cosmic structures

A new paper has shown how large structures in the distribution of galaxies in the Universe provide the most precise tests of dark energy and cosmic expansion yet.

The study uses a new method based on a combination of cosmic voids — large expanding bubbles of space containing very few galaxies — and the faint imprint of sound waves in the very early Universe, known as baryon acoustic oscillations (BAO), that can be seen in the distribution of galaxies. This provides a precise ruler to measure the direct effects of dark energy driving the accelerated expansion of the Universe.
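
As a minimal illustration of the standard-ruler idea (not the void-plus-BAO method of the paper itself): if the physical size of the BAO feature is known, the angle it appears to span on the sky gives the distance to the galaxies carrying its imprint. The sound-horizon value and example angle below are typical figures assumed purely for illustration.

```python
import math

# Minimal sketch of the BAO "standard ruler" idea. Assumes a comoving
# sound horizon of ~147 Mpc (a typical value from CMB fits), used here
# purely for illustration.

R_DRAG_MPC = 147.0   # comoving size of the BAO ruler, in megaparsecs

def comoving_distance_from_bao(theta_degrees):
    """Comoving distance (Mpc) to a galaxy sample in which the BAO
    feature subtends the given angle on the sky."""
    return R_DRAG_MPC / math.radians(theta_degrees)

# Illustrative figure: the BAO feature spans roughly 4.3 degrees for
# galaxies around redshift 0.5.
print(f"Distance ≈ {comoving_distance_from_bao(4.3):,.0f} Mpc")   # ~2,000 Mpc

# Repeating this at several redshifts maps distance against redshift,
# which is what constrains dark energy and the expansion history.
```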

This new method gives much more precise results than the technique based on the observation of exploding massive stars, or supernovae, which has long been the standard method for measuring the direct effects of dark energy.

The research was led by the University of Portsmouth, and is published in Physical Review Letters.

The study makes use of data from over a million galaxies and quasars gathered over more than a decade of operations by the Sloan Digital Sky Survey.

The results confirm the model of a cosmological constant dark energy and spatially flat Universe to unprecedented accuracy, and strongly disfavour recent suggestions of positive spatial curvature inferred from measurements of the cosmic microwave background (CMB) by the Planck satellite.

Lead author Dr Seshadri Nadathur, research fellow at the University’s Institute of Cosmology and Gravitation (ICG), said: “This result shows the power of galaxy surveys to pin down the amount of dark energy and how it evolved over the last billion years. We’re making really precise measurements now and the data is going to get even better with new surveys coming online very soon.”

Dr Florian Beutler, a senior research fellow at the ICG, who was also involved in the work, said that the study also reported a new precise measurement of the Hubble constant, the value of which has recently been the subject of intense debate among astronomers.

He said: “We see tentative evidence that data from relatively nearby voids and BAO favour the high Hubble rate seen from other low-redshift methods, but including data from more distant quasar absorption lines brings it in better agreement with the value inferred from Planck CMB data.”

Story Source:

Materials provided by University of Portsmouth. Note: Content may be edited for style and length.

ALMA discovers massive rotating disk in early universe

In our 13.8 billion-year-old universe, most galaxies like our Milky Way form gradually, reaching their large mass relatively late. But a new discovery made with the Atacama Large Millimeter/submillimeter Array (ALMA) of a massive rotating disk galaxy, seen when the universe was only ten percent of its current age, challenges the traditional models of galaxy formation. This research appears on 20 May 2020 in the journal Nature.

Galaxy DLA0817g, nicknamed the Wolfe Disk after the late astronomer Arthur M. Wolfe, is the most distant rotating disk galaxy ever observed. The unparalleled power of ALMA made it possible to see this galaxy spinning at 170 miles (272 kilometers) per second, similar to our Milky Way.

“While previous studies hinted at the existence of these early rotating gas-rich disk galaxies, thanks to ALMA we now have unambiguous evidence that they occur as early as 1.5 billion years after the Big Bang,” said lead author Marcel Neeleman of the Max Planck Institute for Astronomy in Heidelberg, Germany.

How did the Wolfe Disk form?

The discovery of the Wolfe Disk provides a challenge for many galaxy formation simulations, which predict that massive galaxies at this point in the evolution of the cosmos grew through many mergers of smaller galaxies and hot clumps of gas.

“Most galaxies that we find early in the universe look like train wrecks because they underwent consistent and often ‘violent’ merging,” explained Neeleman. “These hot mergers make it difficult to form well-ordered, cold rotating disks like we observe in our present universe.”

In most galaxy formation scenarios, galaxies only start to show a well-formed disk around 6 billion years after the Big Bang. The fact that the astronomers found such a disk galaxy when the universe was only ten percent of its current age indicates that other growth processes must have dominated.

“We think the Wolfe Disk has grown primarily through the steady accretion of cold gas,” said J. Xavier Prochaska, of the University of California, Santa Cruz and coauthor of the paper. “Still, one of the questions that remains is how to assemble such a large gas mass while maintaining a relatively stable, rotating disk.”

Star formation

The team also used the National Science Foundation’s Karl G. Jansky Very Large Array (VLA) and the NASA/ESA Hubble Space Telescope to learn more about star formation in the Wolfe Disk. In radio wavelengths, ALMA looked at the galaxy’s movements and mass of atomic gas and dust while the VLA measured the amount of molecular mass — the fuel for star formation. In UV-light, Hubble observed massive stars. “The star formation rate in the Wolfe Disk is at least ten times higher than in our own galaxy,” explained Prochaska. “It must be one of the most productive disk galaxies in the early universe.”

A ‘normal’ galaxy

The Wolfe Disk was first discovered by ALMA in 2017. Neeleman and his team found the galaxy when they examined the light from a more distant quasar. The light from the quasar was absorbed as it passed through a massive reservoir of hydrogen gas surrounding the galaxy — which is how it revealed itself. Rather than looking for direct light from extremely bright, but more rare galaxies, astronomers used this ‘absorption’ method to find fainter, and more ‘normal’ galaxies in the early universe.

“The fact that we found the Wolfe Disk using this method, tells us that it belongs to the normal population of galaxies present at early times,” said Neeleman. “When our newest observations with ALMA surprisingly showed that it is rotating, we realized that early rotating disk galaxies are not as rare as we thought and that there should be a lot more of them out there.”

“This observation epitomizes how our understanding of the universe is enhanced with the advanced sensitivity that ALMA brings to radio astronomy,” said Joe Pesce, astronomy program director at the National Science Foundation, which funds the telescope. “ALMA allows us to make new, unexpected findings with almost every observation.”

Exoplanets: How we’ll search for signs of life

Whether there is life elsewhere in the universe is a question people have pondered for millennia; and within the last few decades, great strides have been made in our search for signs of life outside of our solar system.

NASA missions like the space telescope Kepler have helped us document thousands of exoplanets — planets that orbit around other stars. And current NASA missions like the Transiting Exoplanet Survey Satellite (TESS) are expected to vastly increase the current number of known exoplanets. It is expected that dozens will be Earth-sized rocky planets orbiting in their stars’ habitable zones, at distances where water could exist as a liquid on their surfaces. These are promising places to look for life.

This will be accomplished by missions like the soon-to-be-launched James Webb Space Telescope, which will complement and extend the discoveries of the Hubble Space Telescope by observing at infrared wavelengths. It is expected to launch in 2021, and will allow scientists to determine if rocky exoplanets have oxygen in their atmospheres. Oxygen in Earth’s atmosphere is due to photosynthesis by microbes and plants. To the extent that exoplanets resemble Earth, oxygen in their atmospheres may also be a sign of life.

Not all exoplanets will be Earth-like, though. Some will be, but others will differ from Earth enough that oxygen doesn’t necessarily come from life. So with all of these current and future exoplanets to study, how do scientists narrow down the field to those for which oxygen is most indicative of life?

To answer this question, an interdisciplinary team of researchers, led by Arizona State University (ASU), has provided a framework, called a “detectability index,” which may help prioritize exoplanets that require additional study. The details of this index have recently been published in The Astrophysical Journal, a publication of the American Astronomical Society.

“The goal of the index is to provide scientists with a tool to select the very best targets for observation and to maximize the chances of detecting life,” says lead author Donald Glaser of ASU’s School of Molecular Sciences.

The oxygen detectability index for a planet like Earth is high, meaning that oxygen in Earth’s atmosphere is definitely due to life and nothing else. Seeing oxygen means life. A surprising finding by the team is that the detectability index plummets for exoplanets not-too-different from Earth.

Although Earth’s surface is largely covered in water, Earth’s oceans are only a small percentage (0.025%) of Earth’s mass. By comparison, moons in the outer solar system are typically close to 50% water ice.

“It’s easy to imagine that in another solar system like ours, an Earth-like planet could be just 0.2% water,” says co-author Steven Desch of ASU’s School of Earth and Space Exploration. “And that would be enough to change the detectability index. Oxygen would not be indicative of life on such planets, even if it were observed. That’s because an Earth-like planet that was 0.2% water — about eight times what Earth has — would have no exposed continents or land.”
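
Turning those percentages into absolute numbers makes the comparison concrete. A small sketch, assuming a hypothetical planet with the same total mass as Earth:

```python
# Quick check on the "about eight times what Earth has" comparison,
# using only the percentages quoted above and Earth's mass.

EARTH_MASS_KG = 5.97e24

earth_water_fraction = 0.00025   # 0.025% of Earth's mass (the oceans)
wet_planet_fraction = 0.002      # 0.2% water, the hypothetical case above

earth_water_kg = EARTH_MASS_KG * earth_water_fraction
wet_planet_water_kg = EARTH_MASS_KG * wet_planet_fraction  # same-mass planet

print(f"Earth's oceans:     ~{earth_water_kg:.1e} kg")
print(f"0.2%-water planet:  ~{wet_planet_water_kg:.1e} kg")
print(f"Ratio: {wet_planet_water_kg / earth_water_kg:.0f}x")   # -> 8x
```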

Without land, rain would not weather rock and release important nutrients like phosphorus. Photosynthetic life could not produce oxygen at rates comparable to other non-biological sources.

“The detectability index tells us it’s not enough to observe oxygen in an exoplanet’s atmosphere. We must also observe oceans and land,” says Desch. “That changes how we approach the search for life on exoplanets. It helps us interpret observations we’ve made of exoplanets. It helps us pick the best target exoplanets to look for life on. And it helps us design the next generation of space telescopes so that we get all the information we need to make a positive identification of life.”

Scientists from diverse fields were brought together to create this index. The formation of the team was facilitated by NASA’s Nexus for Exoplanetary System Science (NExSS) program, which funds interdisciplinary research to develop strategies for looking for life on exoplanets. Their disciplines include theoretical and observational astrophysics, geophysics, geochemistry, astrobiology, oceanography, and ecology.

“This kind of research needs diverse teams; we can’t do it as individual scientists,” says co-author Hilairy Hartnett, who holds joint appointments at ASU’s School of Earth and Space Exploration and School of Molecular Sciences.

In addition to lead author Glaser and co-authors Hartnett and Desch, the team includes co-authors Cayman Unterborn, Ariel Anbar, Steffen Buessecker, Theresa Fisher, Steven Glaser, Susanne Neuer, Camerian Millsaps, Joseph O’Rourke, Sara Imari Walker, and Mikhail Zolotov, who collectively represent ASU’s School of Molecular Sciences, School of Earth and Space Exploration, and School of Life Sciences. Additional scientists on the team include researchers from the University of California Riverside, Johns Hopkins University and the University of Porto (Portugal).

It is the hope of this team that this detectability index framework will be employed in the search for life. “The detection of life on a planet outside our solar system would change our entire understanding of our place in the universe,” says Glaser. “NASA is deeply invested in searching for life, and it is our hope that this work will be used to maximize the chance of detecting life when we look for it.”
