New neural network differentiates Middle and Late Stone Age toolkits

Middle Stone Age (MSA) toolkits first appear some 300 thousand years ago, at the same time as the earliest fossils of Homo sapiens, and were still in use as recently as 30 thousand years ago. From around 67 thousand years ago, however, changes in stone tool production indicate a marked shift in behaviour; the new toolkits that emerged are labelled Later Stone Age (LSA) and remained in use into the recent past. A growing body of evidence suggests that the transition from MSA to LSA was not a linear process, but occurred at different times in different places. Understanding this process is important for examining what drives cultural innovation and creativity, and what explains this critical behavioural change. Defining the differences between the MSA and LSA is an important step towards this goal.

“Eastern Africa is a key region to examine this major cultural change, not only because it hosts some of the youngest MSA sites and some of the oldest LSA sites, but also because the large number of well excavated and dated sites make it ideal for research using quantitative methods,” says Dr. Jimbob Blinkhorn, an archaeologist from the Pan African Evolution Research Group, Max Planck Institute for the Science of Human History and the Centre for Quaternary Research, Department of Geography, Royal Holloway. “This enabled us to pull together a substantial database of changing patterns of stone tool production and use, spanning 130 to 12 thousand years ago, to examine the MSA-LSA transition.”

The study examines the presence or absence of 16 alternate tool types across 92 stone tool assemblages, but rather than focusing on them individually, emphasis is placed on the constellations of tool forms that frequently occur together.

“We’ve employed an Artificial Neural Network (ANN) approach to train and test models that differentiate LSA assemblages from MSA assemblages, as well as examining chronological differences between older (130-71 thousand years ago) and younger (71-28 thousand years ago) MSA assemblages with a 94% success rate,” says Dr. Matt Grove, an archaeologist at the University of Liverpool.

Artificial Neural Networks (ANNs) are computer models intended to mimic the salient features of information processing in the brain. Like the brain, their considerable processing power arises not from the complexity of any single unit but from the action of many simple units acting in parallel. Despite the widespread use of ANNs today, applications in archaeological research remain limited.
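
The paper's actual network architecture is not described in this release; purely as an illustration of the general approach, the sketch below trains a minimal feed-forward network (one hidden layer, plain NumPy) to separate two synthetic classes of assemblages encoded as 16 binary presence/absence features. All data, weights and feature assignments here are invented, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 assemblages x 16 binary presence/absence features.
# "LSA-like" rows tend to have features 0-2 present and 3-6 absent;
# "MSA-like" rows show the opposite pattern. Indices are hypothetical.
n = 200
y = rng.integers(0, 2, n)                       # 1 = LSA, 0 = MSA
X = rng.integers(0, 2, (n, 16)).astype(float)
X[y == 1, 0:3] = 1.0; X[y == 1, 3:7] = 0.0
X[y == 0, 0:3] = 0.0; X[y == 0, 3:7] = 1.0
flip = rng.random(X.shape) < 0.05               # sprinkle in 5% noise
X[flip] = 1.0 - X[flip]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units, trained by full-batch gradient descent
# on the cross-entropy loss.
W1 = rng.normal(0, 0.5, (16, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1));  b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    H = sigmoid(X @ W1 + b1)                    # hidden activations
    p = sigmoid(H @ W2 + b2).ravel()            # predicted P(LSA)
    d2 = (p - y)[:, None] / n                   # dLoss/dz at the output
    dH = (d2 @ W2.T) * H * (1 - H)              # backprop to hidden layer
    W2 -= lr * H.T @ d2;  b2 -= lr * d2.sum(0)
    W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(0)

p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

With noisy but largely separable synthetic data, even this tiny network reaches high training accuracy, mirroring how constellations of co-occurring tool types can carry a strong classification signal.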

“ANNs have sometimes been described as a ‘black box’ approach, as even when they are highly successful, it may not always be clear exactly why,” says Grove. “We employed a simulation approach that breaks open this black box to understand which inputs have a significant impact on the results. This enabled us to identify how patterns of stone tool assemblage composition vary between the MSA and LSA, and we hope this demonstrates how such methods can be used more widely in archaeological research in the future.”
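
The release does not spell out the simulation procedure, but a common way to probe a trained classifier is perturbation analysis: flip each binary input in turn and measure how much the output moves. The sketch below applies that idea to a stand-in logistic model whose weights are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a trained classifier: a fixed logistic model over 16
# binary tool-type inputs. The weights are illustrative, not the paper's.
w = np.zeros(16)
w[0:3] = 2.0    # e.g. backed pieces, blades, bipolar -> push toward "LSA"
w[3:7] = -2.0   # e.g. Levallois flakes, points, scrapers -> push toward "MSA"

def model(X):
    return 1.0 / (1.0 + np.exp(-(X @ w)))

# Perturbation analysis: toggle one input at a time across a sample of
# assemblages and record the mean absolute shift in the model's output.
X = rng.integers(0, 2, (500, 16)).astype(float)
base = model(X)
importance = np.empty(16)
for j in range(16):
    Xp = X.copy()
    Xp[:, j] = 1.0 - Xp[:, j]          # flip presence <-> absence
    importance[j] = np.abs(model(Xp) - base).mean()

ranked = np.argsort(importance)[::-1]
print("most influential inputs:", ranked[:7])
```

Inputs with zero weight produce exactly zero shift, so the seven informative features always rank first here; on a real model the same procedure would instead reveal which tool types the network actually relies on.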

“The results of our study show that MSA and LSA assemblages can be differentiated based on the constellation of artefact types found within an assemblage alone,” Blinkhorn adds. “The combined occurrence of backed pieces, blade and bipolar technologies together with the combined absence of core tools, Levallois flake technology, point technology and scrapers robustly identifies LSA assemblages, with the opposite pattern identifying MSA assemblages. Significantly, this provides quantified support for qualitative differences noted by earlier researchers, confirming that key typological changes do occur with this cultural transition.”

The team plans to expand the use of these methods to dig deeper into different regional trajectories of cultural change in the African Stone Age. “The approach we’ve employed offers a powerful toolkit to examine the categories we use to describe the archaeological record and to help us examine and explain cultural change amongst our ancestors,” says Blinkhorn.

Go to Source


Machine learning peeks into nano-aquariums

In the nanoworld, tiny particles such as proteins appear to dance as they transform and assemble to perform various tasks while suspended in a liquid. Recently developed methods have made it possible to watch and record these otherwise-elusive tiny motions, and researchers now take a step forward by developing a machine learning workflow to streamline the process.

The new study, led by Qian Chen, a professor of materials science and engineering at the University of Illinois, Urbana-Champaign, builds upon her past work with liquid-phase electron microscopy and is published in the journal ACS Central Science.

Being able to see — and record — the motions of nanoparticles is essential for understanding a variety of engineering challenges. Liquid-phase electron microscopy, which allows researchers to watch nanoparticles interact inside tiny aquariumlike sample containers, is useful for research in medicine, energy and environmental sustainability and in fabrication of metamaterials, to name a few. However, it is difficult to interpret the dataset, the researchers said. The video files produced are large, filled with temporal and spatial information, and are noisy due to background signals — in other words, they require a lot of tedious image processing and analysis.

“Developing a method even to see these particles was a huge challenge,” Chen said. “Figuring out how to efficiently get the useful data pieces from a sea of outliers and noise has become the new challenge.”

To confront this problem, the team developed a machine learning workflow that is based upon an artificial neural network that mimics, in part, the learning potency of the human brain. The program builds off of an existing neural network, known as U-Net, that does not require handcrafted features or predetermined input and has yielded significant breakthroughs in identifying irregular cellular features using other types of microscopy, the study reports.

“Our new program processed information for three types of nanoscale dynamics including motion, chemical reaction and self-assembly of nanoparticles,” said lead author and graduate student Lehan Yao. “These represent the scenarios and challenges we have encountered in the analysis of liquid-phase electron microscopy videos.”
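
The authors' U-Net workflow cannot be reproduced in a few lines, so the sketch below stands in with a much simpler classical baseline: threshold a synthetic noisy frame, then count bright "particles" by flood-fill connected-component labeling. Everything here (frame size, blob positions, threshold) is invented; it only illustrates the segmentation task the neural network automates at scale.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "microscopy frame": bright particles on a noisy background.
frame = rng.normal(0.0, 0.1, (64, 64))
centers = [(16, 16), (40, 48), (50, 12)]
yy, xx = np.mgrid[0:64, 0:64]
for cy, cx in centers:
    frame += np.exp(-((yy - cy)**2 + (xx - cx)**2) / 8.0)  # Gaussian blob

# Threshold, then label connected components with a simple flood fill.
mask = frame > 0.5
labels = np.zeros(mask.shape, dtype=int)
current = 0
for sy, sx in zip(*np.nonzero(mask)):
    if labels[sy, sx]:
        continue
    current += 1
    stack = [(sy, sx)]
    while stack:
        y, x = stack.pop()
        if not (0 <= y < 64 and 0 <= x < 64) or not mask[y, x] or labels[y, x]:
            continue
        labels[y, x] = current
        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]

print("particles found:", current)
```

On clean synthetic data this baseline suffices; the point of the learned approach is that it keeps working on the large, low-contrast, noisy videos that real liquid-phase electron microscopy produces.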

The researchers collected measurements from approximately 300,000 pairs of interacting nanoparticles, the study reports.

As found in past studies by Chen’s group, contrast continues to be a problem while imaging certain types of nanoparticles. In their experimental work, the team used particles made out of gold, which is easy to see with an electron microscope. However, particles with lower elemental or molecular weights like proteins, plastic polymers and other organic nanoparticles show very low contrast when viewed under an electron beam, Chen said.

“Biological applications, like the search for vaccines and drugs, underscore the urgency in our push to have our technique available for imaging biomolecules,” she said. “There are critical nanoscale interactions between viruses and our immune systems, between the drugs and the immune system, and between the drug and the virus itself that must be understood. The fact that our new processing method allows us to extract information from samples as demonstrated here gets us ready for the next step of application and model systems.”

The team has made the source code for the machine learning program used in this study publicly available through the supplemental information section of the new paper. “We feel that making the code available to other researchers can benefit the whole nanomaterials research community,” Chen said.


Go to Source


The origins of roughness

Most natural and artificial surfaces are rough: metals and even glasses that appear smooth to the naked eye can look like jagged mountain ranges under the microscope. Although this roughness is observed on all scales, from the atomic to the tectonic, there is currently no uniform theory of its origin. Scientists suspect that rough surfaces are formed by the irreversible plastic deformation that occurs in many processes of mechanical machining, such as milling. Prof. Dr. Lars Pastewka of the Simulation group at the Department of Microsystems Engineering at the University of Freiburg and his team have modelled such mechanical loads in computer simulations. The researchers found that surfaces made of different materials, which show distinct mechanisms of plastic deformation, always develop surface roughness with identical statistical properties. They published their results in the open-access journal Science Advances.

Geological surfaces, such as mountain ranges, are created by mechanical deformation, which then leads to processes such as fracture or wear. Synthetic surfaces typically go through many steps of shaping and finishing, such as polishing, lapping, and grinding, explains Pastewka. Most of these surface changes, whether natural or synthetic, lead to plastic deformations on the smallest atomic length scale: “Even at the crack tips of most brittle materials such as glass, there is a finite process zone in which the material is plastically deformed,” says the Freiburg researcher. “Roughness on these smallest scales is important because it controls the area of intimate atomic contact when two surfaces are pressed together and thus adhesion, conductivity and other functional properties of surfaces in contact.”

In collaboration with colleagues from the Karlsruhe Institute of Technology, the École Polytechnique Fédérale de Lausanne in Switzerland and Sandia National Laboratories in the USA, and funded by the European Research Council (ERC), Pastewka and his group simulated the surface topography of three reference material systems on the JUQUEEN and JUWELS supercomputers at the Jülich Supercomputing Centre: monocrystalline gold; a high-entropy alloy of nickel, iron and titanium; and the metallic glass copper-zirconium, in which the atoms form no ordered structure but an irregular pattern. Each of these three materials is known to have different micromechanical or molecular properties. The scientists then investigated the mechanism of deformation and the resulting atomic-scale changes both within the solid and at its surface.

Pastewka, who is also a member of the Cluster of Excellence Living, Adaptive and Energy-autonomous Material Systems (livMatS), and his team found that despite their different structures and material properties, all three systems, when compressed, develop rough surfaces with a so-called self-affine topography. This means the surfaces have statistically identical geometric structure regardless of the scale on which they are observed: surface topography seen in a virtual microscope at the nanometer scale cannot be distinguished from the structure of a mountain landscape at the kilometer scale. “This is one explanation,” says Pastewka, “as to why an almost universal structure of surface roughness is observed in experiments.”
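
The self-affine scaling described above can be demonstrated numerically: generate a random profile whose power spectrum follows a power law, then check that the RMS height difference over a distance d grows as d**H. The sketch below does this in NumPy for an illustrative Hurst exponent H = 0.8 (a typical value for rough surfaces; the specific numbers are not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(3)

# Build a 1D self-affine profile by Fourier filtering: give each mode a
# random phase and an amplitude so the power spectrum scales as
# C(q) ~ q**-(1 + 2H), where H is the Hurst exponent.
N, H = 4096, 0.8
q = np.fft.rfftfreq(N)
amp = np.zeros_like(q)
amp[1:] = q[1:] ** (-(1 + 2 * H) / 2)           # amplitude ~ sqrt(power)
phase = rng.uniform(0, 2 * np.pi, len(q))
h = np.fft.irfft(amp * np.exp(1j * phase), n=N)

# Self-affinity check: RMS height difference over a lag d should scale
# as d**H, i.e. the profile looks statistically the same on every scale
# once heights are rescaled accordingly.
def rms_diff(h, d):
    return np.sqrt(np.mean((h[d:] - h[:-d]) ** 2))

d1, d2 = 8, 128
measured_H = np.log(rms_diff(h, d2) / rms_diff(h, d1)) / np.log(d2 / d1)
print(f"input Hurst exponent: {H}, measured: {measured_H:.2f}")
```

The exponent recovered from a single random realization lands close to the input value; this scale-free scaling is the statistical fingerprint that makes a nanometer-scale scan indistinguishable from a mountain range.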

Story Source:

Materials provided by University of Freiburg. Note: Content may be edited for style and length.

Go to Source

IEEE Spectrum

Large-Area MicroLED Display Startup Makes Gallium-Nitride Transistors Too

MicroLEDs appear to be, forgive the pun, the bright future of displays. Made of micrometer-scale gallium-nitride LEDs, the technology offers an unmatchable ratio of brightness to power consumption.

The problem is that the most easily manufacturable ones are so small they’re suitable only for augmented reality and similar applications. Making bigger ones, like for a watch display or a smartphone screen, requires the near-perfect transfer of tens of thousands of individual microLEDs per second onto a prefabricated silicon backplane. It’s a very difficult proposition, but Apple and some startups are trying to tackle it.

If New Mexico-based startup iBeam is correct, even larger microLED displays could be produced quickly and cheaply on flexible substrates. “iBeam is a new paradigm in manufacturing for microLEDs,” says Julian Osinski, the startup’s vice president of product technology. “We have a way of growing microLEDs directly on a roll of metal foil, and that’s something nobody else can do.”

iBeam’s technology is adapted from the superconductor manufacturing industry, where a similar process has turned out product by the kilometer. Founder and CEO Vladimir Matias, a veteran of superconductor manufacturing, saw its potential for producing gallium-nitride devices.

LEDs and other gallium-nitride devices are usually grown atop a silicon or sapphire wafer by a process called epitaxy. For that to work, you need a single crystal, preferably with a similar crystal structure, for the gallium nitride to grow on. The iBeam process can produce that crystal-like substrate on an otherwise amorphous or polycrystalline surface such as metal or glass.

When you deposit material on an amorphous substrate you normally get a film of randomly oriented grains of crystal, explains Matias. But briefly blasting that film with ions from just the right angle gets all the grains to line up. iBeam chooses the film’s material so that the aligned grains match well with gallium nitride’s crystal structure. From there, they grow layers of gallium nitride using standard techniques and fashion them into microLEDs. Quantum dots will then be added to convert the color of some of the microLEDs from their natural blue to red and green.

Just as is done with superconductors, the procedure could be rapidly done in a roll-to-roll fashion, Osinski says. Today’s industry processes produce gallium nitride for about US $2 to $3 per square centimeter. “We’d like to take it down to 10 cents [per square centimeter] or less so it becomes competitive with OLEDs,” he says.

The company has used its existing process to produce microLEDs, and last month it announced the production of high-electron mobility transistors (HEMTs), as well. If the HEMTs could be constructed along with the microLEDs, they could form the circuitry that controls the microLED pixels.

(Kei May Lau’s team at Hong Kong University of Science and Technology developed a structure that integrates the HEMT and microLED so tightly that they effectively become one device.)

The startup’s near-term goal is to produce a small prototype display, which Osinski thinks may take until the end of next year. They hope to have large-scale manufacturing nailed down by 2022. That’s later than some microLED firms are planning to commercialize their products, but iBeam is counting on being able to produce much larger displays at much lower cost. The company plans to sell its manufacturing process and materials to established display makers rather than become a manufacturer itself.

3D Printing Industry

Albright Silicone introduces 3D printing capabilities; WACKER launches new ACEO silicone 3D printing material

3D printing with silicone is a rather niche area, however, activity does appear to be heating up. The two companies in this news update both offer 3D printing solutions for users who want to access the benefits and material properties of silicones. Albright Silicone, a Massachusetts-based engineering company, has launched a new 3D printing silicone […]

Go to Source
Author: Anas Essop


Giant radio galaxies defy conventional wisdom

Conventional wisdom tells us that large objects appear smaller as they get farther from us, but this familiar rule of everyday experience is reversed when we observe the distant universe.

Astrophysicists at the University of Kent simulated the development of the biggest objects in the universe to help explain how galaxies and other cosmic bodies were formed. By looking at the distant universe, it is possible to observe it in a past state, when it was still at a formative stage. At that time, galaxies were growing and supermassive black holes were violently expelling enormous amounts of gas and energy. This matter accumulated into pairs of reservoirs, which formed the biggest objects in the universe, so-called giant radio galaxies. These giant radio galaxies stretch across a large part of the Universe. Even moving at the speed of light, it would take several million years to cross one.

Professor Michael D. Smith of the Centre for Astrophysics and Planetary Science, and student Justin Donohoe collaborated on the research. They expected to find that as they simulated objects farther into the distant universe, they would appear smaller, but in fact they found the opposite.

Professor Smith said: ‘When we look far into the distant universe, we are observing objects way in the past — when they were young. We expected to find that these distant giants would appear as a comparatively small pair of vague lobes. To our surprise, we found that these giants still appear enormous even though they are so far away.’

Radio galaxies have long been known to be powered by twin jets which inflate their lobes and create giant cavities. The team performed simulations using the Forge supercomputer, generating three-dimensional hydrodynamics that recreated the effects of these jets. They then compared the resulting images to observations of the distant galaxies. Differences were assessed using a new classification index, the Limb Brightening Index (LB Index), which measures changes to the orientation and size of the objects.

Professor Smith said: ‘We already know that once you are far enough away, the Universe acts like a magnifying glass and objects start to increase in size in the sky. Because of the distance, the objects we observed are extremely faint, which means we can only see the brightest parts of them, the hot spots. These hot spots occur at the outer edges of the radio galaxy and so they appear to be larger than ever, confounding our initial expectations.’
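
The "magnifying glass" effect Smith describes is standard cosmology: the angular diameter distance in a flat ΛCDM universe reaches a maximum near redshift z ≈ 1.6, beyond which an object of fixed physical size subtends a growing angle on the sky. The sketch below computes this with illustrative parameters (H0 = 70 km/s/Mpc, Ωm = 0.3), which are not taken from the Kent study.

```python
import numpy as np

# Flat LambdaCDM with illustrative parameters.
c, H0, Om = 299792.458, 70.0, 0.3          # km/s, km/s/Mpc, matter fraction

def E(z):
    # Dimensionless Hubble parameter H(z)/H0 for a flat universe.
    return np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def D_A(z, steps=2000):
    # Comoving distance D_C = (c/H0) * integral of dz'/E(z'), by the
    # trapezoid rule; angular diameter distance D_A = D_C / (1 + z).
    zz = np.linspace(0.0, z, steps)
    f = 1.0 / E(zz)
    D_C = (c / H0) * np.sum((f[1:] + f[:-1]) * np.diff(zz) / 2.0)
    return D_C / (1 + z)

zs = np.linspace(0.1, 6.0, 60)
DAs = np.array([D_A(z) for z in zs])
z_peak = zs[np.argmax(DAs)]
print(f"D_A peaks near z = {z_peak:.1f}; beyond that, a fixed-size object "
      f"subtends a growing angle on the sky")
```

Because the angle subtended is inversely proportional to D_A, the decline of D_A past its peak is exactly why these distant giants can still "appear enormous."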

Story Source:

Materials provided by University of Kent. Original written by Michelle Ulyatt. Note: Content may be edited for style and length.

Go to Source


Plasma flow near sun’s surface explains sunspots, other solar phenomena

For 400 years people have tracked sunspots, the dark patches that appear for weeks at a time on the sun’s surface. Observers have long noted, but been unable to explain, why the number of spots peaks every 11 years.

A University of Washington study published this month in the journal Physics of Plasmas proposes a model of plasma motion that would explain the 11-year sunspot cycle and several other previously mysterious properties of the sun.

“Our model is completely different from a normal picture of the sun,” said first author Thomas Jarboe, a UW professor of aeronautics and astronautics. “I really think we’re the first people that are telling you the nature and source of solar magnetic phenomena — how the sun works.”

The authors created a model based on their previous work with fusion energy research. The model shows that a thin layer beneath the sun’s surface is key to many of the features we see from Earth, like sunspots, magnetic reversals and solar flow, and is backed up by comparisons with observations of the sun.

“The observational data are key to confirming our picture of how the sun functions,” Jarboe said.

In the new model, a thin layer of magnetic flux and plasma, or free-floating electrons, moves at different speeds on different parts of the sun. The difference in speed between the flows creates twists of magnetism, known as magnetic helicity, that are similar to what happens in some fusion reactor concepts.

“Every 11 years, the sun grows this layer until it’s too big to be stable, and then it sloughs off,” Jarboe said. Its departure exposes the lower layer of plasma moving in the opposite direction with a flipped magnetic field.

When the circuits in both hemispheres are moving at the same speed, more sunspots appear. When the circuits are different speeds, there is less sunspot activity. That mismatch, Jarboe says, may have happened during the decades of little sunspot activity known as the “Maunder Minimum.”

“If the two hemispheres rotate at different speeds, then the sunspots near the equator won’t match up, and the whole thing will die,” Jarboe said.

“Scientists had thought that a sunspot was generated down at 30 percent of the depth of the sun, and then came up in a twisted rope of plasma that pops out,” Jarboe said. Instead, his model shows that the sunspots are in the “supergranules” that form within the thin, subsurface layer of plasma that the study calculates to be roughly 100 to 300 miles (150 to 450 kilometers) thick, or a fraction of the sun’s 430,000-mile radius.

“The sunspot is an amazing thing. There’s nothing there, and then all of a sudden, you see it in a flash,” Jarboe said.

The group’s previous research has focused on fusion power reactors, which use very high temperatures similar to those inside the sun to separate hydrogen nuclei from their electrons. In both the sun and in fusion reactors the nuclei of two hydrogen atoms fuse together, releasing huge amounts of energy.

The type of reactor Jarboe has focused on, a spheromak, contains the electron plasma within a sphere that causes it to self-organize into certain patterns. When Jarboe began to consider the sun, he saw similarities, and created a model for what might be happening in the celestial body.

“For 100 years people have been researching this,” Jarboe said. “Many of the features we’re seeing are below the resolution of the models, so we can only find them in calculations.”

Other properties explained by the theory, he said, include flow inside the sun, the twisting action that leads to sunspots and the total magnetic structure of the sun. The paper is likely to provoke intense discussion, Jarboe said.

“My hope is that scientists will look at their data in a new light, and the researchers who worked their whole lives to gather that data will have a new tool to understand what it all means,” he said.

Go to Source


Moon glows brighter than sun in images from NASA’s Fermi

If our eyes could see high-energy radiation called gamma rays, the Moon would appear brighter than the Sun! That’s how NASA’s Fermi Gamma-ray Space Telescope has seen our neighbor in space for the past decade.

Gamma-ray observations are not sensitive enough to clearly see the shape of the Moon’s disk or any surface features. Instead, Fermi’s Large Area Telescope (LAT) detects a prominent glow centered on the Moon’s position in the sky.

Mario Nicola Mazziotta and Francesco Loparco, both at Italy’s National Institute of Nuclear Physics in Bari, have been analyzing the Moon’s gamma-ray glow as a way of better understanding another type of radiation from space: fast-moving particles called cosmic rays.

“Cosmic rays are mostly protons accelerated by some of the most energetic phenomena in the universe, like the blast waves of exploding stars and jets produced when matter falls into black holes,” explained Mazziotta.

Because the particles are electrically charged, they’re strongly affected by magnetic fields, which the Moon lacks. As a result, even low-energy cosmic rays can reach the surface, turning the Moon into a handy space-based particle detector. When cosmic rays strike, they interact with the powdery surface of the Moon, called the regolith, to produce gamma-ray emission. The Moon absorbs most of these gamma rays, but some of them escape.

Mazziotta and Loparco analyzed Fermi LAT lunar observations to show how the view has improved during the mission. They rounded up data for gamma rays with energies above 31 million electron volts — more than 10 million times greater than the energy of visible light — and organized them over time, showing how longer exposures improve the view.
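
The quoted comparison is easy to check: a visible-light photon carries roughly 1.8 to 3.1 eV, so taking ~2.5 eV as a representative value (an assumption for this check, not a figure from the article) gives:

```python
# 31 MeV gamma rays vs. a typical visible photon (~2.5 eV assumed).
gamma_eV = 31e6
visible_eV = 2.5
ratio = gamma_eV / visible_eV
print(f"energy ratio: {ratio:.1e}")  # on the order of 10 million, as quoted
```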

“Seen at these energies, the Moon would never go through its monthly cycle of phases and would always look full,” said Loparco.

As NASA sets its sights on sending humans to the Moon by 2024 through the Artemis program, with the eventual goal of sending astronauts to Mars, understanding various aspects of the lunar environment takes on new importance. These gamma-ray observations are a reminder that astronauts on the Moon will require protection from the same cosmic rays that produce this high-energy gamma radiation.

While the Moon’s gamma-ray glow is surprising and impressive, the Sun does shine brighter in gamma rays with energies higher than 1 billion electron volts. Cosmic rays with lower energies do not reach the Sun because its powerful magnetic field screens them out. But much more energetic cosmic rays can penetrate this magnetic shield and strike the Sun’s denser atmosphere, producing gamma rays that can reach Fermi.

Although the gamma-ray Moon doesn’t show a monthly cycle of phases, its brightness does change over time. Fermi LAT data show that the Moon’s brightness varies by about 20% over the Sun’s 11-year activity cycle. Variations in the intensity of the Sun’s magnetic field during the cycle change the rate of cosmic rays reaching the Moon, altering the production of gamma rays.

Story Source:

Materials provided by NASA/Goddard Space Flight Center. Original written by Francis Reddy. Note: Content may be edited for style and length.

Go to Source