Categories
ProgrammableWeb

Why API Gateways Still Matter in Service Mesh Deployments

Many organizations have grown bold enough to invest heavily in R&D to bring their underlying business infrastructure up to the next generation. Much of that effort focuses on completely revamping the architecture, leaving monolithic designs behind in favor of a microservices model. Even so, organizations find this leap difficult: it demands a specific set of knowledge and technologies, and their architects and IT staff need time to become experts in these new ideas.

Even once you have expertise in microservice architecture (MSA) and have studied the technologies that enable it, actually designing your architecture raises another set of problems. I will discuss a few of them here, starting with the transition from a monolith to microservices.

Monolith

In the most common monolithic architectures, a single application provides the business capability, and any scaling is done on the application as a whole, based on demand. When governance is required and the business functionality can be abstracted behind APIs (as the organization modernizes its legacy IT), API gateways can take over cross-cutting requirements like security, authorization, rate limiting, and transformations from the individual business components.

Transform to Microservices

One of the first steps in the journey towards an MSA is to break the monolithic application down into smaller services, each of which handles a specific piece of business logic. The services are loosely coupled and communicate with each other through typical API calls over the network, based on HTTP, gRPC, etc.

But microservice sprawl leads to another problem: management and governance. The challenges include, but are not limited to:

  1. Monitoring the microservices for their health, metrics and logs
  2. Hot deploying a microservice with a bug fix (so as not to disrupt the other services that depend on it)
  3. Adding new microservices and handling the routing for them
  4. Securing service to service communication and handling security in a standard manner across all services
  5. Managing the traffic with circuit breaking, timeouts and rate limiting
  6. Adding new policies to govern the microservices and manage those policies

One proven way of dealing with these and the other challenges raised by microservice expansion is to build a service mesh by adopting a service mesh solution.

A service mesh is a network of microservices that, taken together, form the basis of a composable application, along with the infrastructure needed to handle the interactions between those microservices. It consists of a data plane, which is the mesh of microservices itself, and a control plane, which governs and manages the data plane. A service mesh injects a proxy as a sidecar alongside each microservice. This sidecar proxy mediates how microservices communicate with each other, and the control plane communicates with the sidecar proxies in order to manage the data plane.
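
To make the sidecar idea concrete, here is a deliberately minimal Python sketch of the forwarding behaviour of a sidecar proxy: it accepts requests on the proxy port and relays them to the application listening locally. The ports are illustrative assumptions; real sidecar proxies (such as Envoy) also handle mTLS, retries, and telemetry.

```python
# Toy sidecar proxy: relays inbound HTTP requests to the local app.
# Ports are illustrative assumptions; real sidecars add mTLS,
# retries, and metrics collection on top of this forwarding step.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request

APP_PORT = 9000       # where the microservice itself listens (assumption)
SIDECAR_PORT = 15001  # where the mesh directs traffic (assumption)

class Sidecar(BaseHTTPRequestHandler):
    def do_GET(self):
        # Mesh policies (authn, rate limits, tracing) would run here.
        upstream = f"http://127.0.0.1:{APP_PORT}{self.path}"
        with request.urlopen(upstream) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", SIDECAR_PORT), Sidecar).serve_forever()
```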

Deploying a service mesh raises a new set of questions at the architectural level. The main one is where my good old API gateway fits in the service mesh, or whether I even need one. Do API gateways and service meshes solve the same set of problems? For example:

  • A service mesh provides secure service-to-service communication. Do I need API gateway security anymore?
  • A service mesh provides request authentication and identifies the client. Do I still need the API gateway for end-user authentication?
  • A service mesh provides circuit breaking, timeouts and pluggable policies. Do I need the same features from the API gateway?
  • A service mesh also handles the traffic routing within the mesh. Why do I need API gateway routing anymore?
  • A service mesh provides metrics and logging for the included microservices. Why do I need an API gateway for monitoring and analytics?

On the surface, it appears as though API gateways and service meshes solve the same problems and are therefore redundant. They do solve the same problems, but in two completely different contexts. An API gateway sits at the edge of the deployment facing external clients and handles north-south traffic, while a service mesh manages the traffic among the different microservices, the east-west traffic.

Let’s break that statement down along a few dimensions.

1. APIs vs Microservices

The basic idea of a service mesh is to provide an ecosystem for managing microservices. Ultimately, however, the end-to-end business functionality (e.g., the workflow to place an order) is a set of interfaces defined as APIs for external parties and developers to consume. In other words, an API is an abstraction over a workflow of multiple microservices that, stitched together, perform a meaningful business operation. The service mesh controls the microservices behind the API, while the API is a digital asset discoverable by external and internal parties at the edge of your deployment. To govern and manage these APIs and to act as a policy enforcement point (PEP) for them, the overall deployment therefore needs an API gateway at the edge, as the sketch below illustrates.
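
As a concrete illustration, here is a minimal Python sketch of a gateway-level "place order" API that stitches two microservices into one business operation. The service URLs and response fields are hypothetical.

```python
# Sketch: one externally facing API composed of two internal
# microservice calls. URLs and response fields are hypothetical.
import json
from urllib import request

INVENTORY_SVC = "http://inventory.internal:8080"  # assumption
PAYMENT_SVC = "http://payment.internal:8080"      # assumption

def get_json(url: str) -> dict:
    with request.urlopen(url) as resp:
        return json.load(resp)

def place_order(user_id: str, item_id: str) -> dict:
    """The /order API: an abstraction over several microservices."""
    stock = get_json(f"{INVENTORY_SVC}/stock/{item_id}")
    if stock["available"] <= 0:
        return {"status": "rejected", "reason": "out of stock"}
    payment = get_json(f"{PAYMENT_SVC}/charge/{user_id}/{item_id}")
    return {"status": "placed", "payment_ref": payment["ref"]}
```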

2. Security

A typical service mesh provides mechanisms to secure service-to-service communication, the east-west traffic of the deployment. The most common approach is mutual TLS (mTLS), which ensures that services communicate only with a set of trusted peers. Because services are updated frequently, one of the main capabilities of a service mesh is managing the certificates required for mutual TLS within the mesh. The control plane handles this by rotating certificates: when a new microservice is deployed or existing certificates expire, it distributes the new public certificates to the other microservices. This makes it possible to hot deploy new services within the mesh while maintaining mutual TLS throughout.
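
For illustration, here is a minimal Python sketch of the client side of mutual TLS: the service presents its own certificate and verifies peers against the mesh's internal certificate authority. The file paths and host name are assumptions; in a real mesh the sidecar proxies do this transparently and the control plane rotates the files.

```python
# Sketch: an mTLS client connection. Certificate paths and the host
# are assumptions; in a mesh the control plane issues and rotates them.
import http.client
import ssl

ctx = ssl.create_default_context(
    ssl.Purpose.SERVER_AUTH,
    cafile="/etc/mesh/ca.pem",  # mesh-internal CA (assumption)
)
# Present this service's own identity so the peer can verify us too.
ctx.load_cert_chain(
    certfile="/etc/mesh/svc-cert.pem",  # assumption
    keyfile="/etc/mesh/svc-key.pem",    # assumption
)

conn = http.client.HTTPSConnection("orders.internal", 8443, context=ctx)
conn.request("GET", "/health")
print(conn.getresponse().status)
```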

But once these microservices are exposed as APIs to external parties, there must be a way to authenticate the parties consuming them. Authenticating north-south traffic is the harder problem because it comes from unknown external parties. API gateways are designed specifically for this purpose: to act at the edge and stop unauthenticated or untrusted traffic from entering the system. API gateways therefore provide stronger end-user security mechanisms such as OAuth2 and OIDC flows, which are designed to identify and authenticate the actual end users of these microservices. Once the API gateway has authenticated the north-south traffic, the service mesh handles service-to-service communication with its own internal security mechanisms like mutual TLS.
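
As a sketch of gateway-side end-user authentication, the following validates an OAuth2/OIDC access token (a JWT) before traffic is admitted to the mesh. It uses the PyJWT library; the key file, audience, and issuer are placeholder assumptions.

```python
# Sketch: validating an OAuth2/OIDC bearer token at the gateway.
# Requires PyJWT (pip install pyjwt). Key, audience, and issuer
# are placeholder assumptions.
import jwt  # PyJWT

PUBLIC_KEY = open("/etc/gateway/idp-public.pem").read()  # assumption

def authenticate(bearer_token: str) -> dict:
    """Return the token's claims; raises jwt.InvalidTokenError if invalid."""
    return jwt.decode(
        bearer_token,
        PUBLIC_KEY,
        algorithms=["RS256"],
        audience="orders-api",             # assumption
        issuer="https://idp.example.com",  # assumption
    )
```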

A service mesh can provide authorization capabilities as well; the ability to define custom authorization policies is another key feature. But these policies are really rules that govern traffic within the mesh, for example, allowing traffic only from a certain IP address or only for certain HTTP paths (e.g., /order/{id}). They do not address end-user privileges such as roles or permissions. API gateways, by contrast, provide richer end-user authorization with role-based and permission-based checks, using fine-grained mechanisms such as OAuth2 scopes, XACML, or integration with external policy agents. A minimal scope check is sketched below.
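
Building on the previous sketch, a minimal example of end-user authorization at the gateway: a check of OAuth2 scopes carried in the validated token's claims. The scope name is an illustrative assumption.

```python
# Sketch: scope-based authorization from a validated token's claims.
# The scope name is an illustrative assumption.
def authorize(claims: dict, required_scope: str) -> bool:
    """OAuth2 'scope' is a space-separated string of granted scopes."""
    granted = claims.get("scope", "").split()
    return required_scope in granted

# Usage at the gateway, after authenticate() from the previous sketch:
# claims = authenticate(token)
# if not authorize(claims, "orders:write"):
#     ...reject the request with HTTP 403...
```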

3. Traffic management

Traffic management is another piece of functionality provided by both API gateways and service meshes. Service mesh implementations provide capabilities like connection timeouts, circuit breaking, and request retries when connecting to microservices, and API gateways provide the same set of capabilities.

Where they differ is in the nature of the configurable traffic policies. In a service mesh, rate limiting policies are applied at the microservice level: we can limit the allowed number of requests, or the request bandwidth (bytes per second), for a particular microservice. An API gateway, however, lets you define more complex traffic policies based on the end user. It can restrict access to APIs for certain sets of users under different rate limit policies, and such user-based limits can be integrated with billing engines to monetize the APIs. An API gateway can also apply traffic policies to meaningful APIs that represent a business capability, rather than to individual microservices, which makes it possible, for example, to block a specific user from using a specific API. These kinds of traffic management requirements would be difficult to achieve with the capabilities a service mesh provides; a sketch of a per-consumer limiter follows.
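
Here is a minimal token-bucket sketch of the per-consumer rate limiting described above. The rate and burst values are illustrative assumptions.

```python
# Sketch: per-consumer token-bucket rate limiting at the gateway.
# RATE and BURST are illustrative assumptions.
import time
from collections import defaultdict

RATE = 5.0    # tokens refilled per second (assumption)
BURST = 10.0  # maximum bucket size (assumption)

_buckets = defaultdict(lambda: {"tokens": BURST, "ts": time.monotonic()})

def allow(consumer_id: str) -> bool:
    """Return True if this consumer may make another request now."""
    bucket = _buckets[consumer_id]
    now = time.monotonic()
    elapsed = now - bucket["ts"]
    bucket["tokens"] = min(BURST, bucket["tokens"] + elapsed * RATE)
    bucket["ts"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    return False
```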

4. Traffic routing

API gateways and service meshes both maintain their own dynamic routing tables for routing requests to the correct endpoint or microservice. A service mesh routes at two levels: first at the ingress level (the ingress gateway), to send traffic to the correct sidecar or microservice, and second within the sidecar proxy, to route service-to-service traffic.

The API gateway engages at the ingress level of the service mesh and can act as the first layer of routing in place of an ingress gateway. An API gateway is therefore well suited to handling ingress traffic, replacing the mesh's ingress gateway and admitting only secure traffic into the mesh. A sketch of such gateway-level routing follows.
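
As an illustration of that first routing layer, a sketch of a longest-prefix routing table mapping external API paths to internal services. The paths and upstream addresses are assumptions.

```python
# Sketch: longest-prefix path routing at the gateway/ingress layer.
# Paths and upstream addresses are illustrative assumptions.
ROUTES = {
    "/orders": "http://orders.internal:8080",
    "/orders/v2": "http://orders-v2.internal:8080",
    "/payments": "http://payments.internal:8080",
}

def route(path: str) -> str | None:
    """Pick the upstream with the longest matching path prefix."""
    matches = [prefix for prefix in ROUTES if path.startswith(prefix)]
    return ROUTES[max(matches, key=len)] if matches else None

assert route("/orders/v2/42") == "http://orders-v2.internal:8080"
assert route("/unknown") is None
```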

5. Monitoring tools

API gateways and service mesh implementations both emphasize monitoring, but for two different purposes. Service meshes provide metrics about the microservices themselves: each sidecar publishes the latency, resource utilization, and throughput of the microservice it is attached to. This data helps DevOps personnel identify and isolate issues within the mesh.

An API gateway, on the other hand, monitors traffic at the ingress level and provides insight into how the APIs are used: which API is hit most frequently, which geographical areas the most frequent users come from, which users visit most often, how many API calls succeeded, and how many failed. This data can feed analytics that drive future improvements to the platform, and it can also indicate how much revenue is generated, and how much is lost to faulty invocations. A sketch of such per-API counters follows.
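
A minimal sketch of those per-API counters, the raw material for the analytics described above (metric names are assumptions):

```python
# Sketch: aggregating per-API usage metrics at the gateway.
# Metric names are illustrative assumptions.
from collections import Counter

hits = Counter()      # total invocations per API
failures = Counter()  # failed invocations per API

def record(api_name: str, status_code: int) -> None:
    hits[api_name] += 1
    if status_code >= 400:
        failures[api_name] += 1

# e.g. the most frequently hit APIs, and a failure rate:
# hits.most_common(5)
# failures["orders-api"] / hits["orders-api"]
```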
[Diagram: an API gateway at the edge of a service mesh deployment]

The diagram above shows how an API gateway fits into a service mesh at the edge of your deployment. It is a mistake to treat the two as competitors just because their feature lists overlap; it is better to view them as complementary in deployments that involve both microservices and APIs.

Go to Source
Author: rajithk90

Categories
ScienceDaily

Computer vision helps scientists study lithium ion batteries

Lithium-ion batteries lose their juice over time, and scientists and engineers are working hard to understand that process in detail. Now, scientists at the Department of Energy’s SLAC National Accelerator Laboratory have combined sophisticated machine learning algorithms with X-ray tomography data to produce a detailed picture of how one battery component, the cathode, degrades with use.

The new study, published May 8 in Nature Communications, focused on how to better visualize what’s going on in cathodes made of nickel-manganese-cobalt, or NMC. In these cathodes, NMC particles are held together by a conductive carbon matrix, and researchers have speculated that one cause of performance decline could be particles breaking away from that matrix. The team’s goal was to combine cutting-edge capabilities at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL) and the European Synchrotron Radiation Facility (ESRF) to develop a comprehensive picture of how NMC particles break apart and break away from the matrix and how that might contribute to performance losses.

Of course, it’s a tall order for humans to figure out what’s going on just by looking at pictures of an NMC cathode, so the team turned to computer vision, a subfield of machine learning originally developed to scan images or videos and identify and track objects like dogs or cars.

Even then, there were challenges. Computer vision algorithms often zero in on boundaries defined by light or dark lines, so they’d have a hard time differentiating between several small NMC particles stuck together and a single large but partially fractured one; to most computer vision systems, those fractures would look like clean breaks.

To address that problem, the team used a type of algorithm set up to deal with hierarchical objects — for example, a jigsaw puzzle, which we would think of as a complete entity even though it’s made up of many individual pieces. With input and judgments from the researchers themselves, they trained this algorithm to distinguish different kinds of particles and thus develop a three-dimensional picture of how NMC particles, whether large or small, fractured or not, break away from the cathode.

They discovered that particles detaching from the carbon matrix really do contribute significantly to a battery’s decline, at least under conditions one would typically see in consumer electronics, such as smart phones.

Second, while large NMC particles are more likely to become damaged and break away, quite a few smaller particles break away, too, and overall, there’s more variation in the way small particles behave, said Yijin Liu, a staff scientist at SLAC and a senior author of the new paper. That’s important because researchers had generally assumed that by making battery particles smaller, they could make longer-lasting batteries — something the new study suggests might not be so straightforward, Liu said.

Story Source:

Materials provided by DOE/SLAC National Accelerator Laboratory. Original written by Nathan Collins. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

Print your own laboratory-grade microscope for US$18

For the first time, labs around the world can 3D print their own precision microscopes to analyse samples and detect diseases, thanks to an open-source design created at the University of Bath.

The OpenFlexure Microscope, described in Biomedical Optics Express, is a fully automated, laboratory-grade instrument with motorised sample positioning and focus control. It is unique among 3D-printed microscopes in its ability to yield high-quality images. It has been designed to be easy to use, with an intuitive software interface and simplified alignment procedures. It is also highly customisable, meaning it can be adapted for laboratory, school and home use.

Best of all, the Bath design is a lot more affordable than a commercial microscope, both in terms of the upfront cost and the maintenance costs of the equipment. A commercial microscope intended for lab use can sell for tens of thousands of pounds. An OpenFlexure microscope can be constructed for as little as £15 or US$18 (this would cover the cost of the printed plastic, a camera and some fastening hardware). A top-end version would cost a couple of hundred pounds to produce, and would include a microscope objective and an embedded Raspberry Pi computer.

Dr Joel Collins, co-creator of the microscope and physics researcher at the University of Bath, said, “We want these microscopes to be used around the world — in schools, in research laboratories, in clinics and in people’s homes if they want a microscope just to play with. You need to be able to pick it up and use it straight away. You also need it to be affordable.”

To date, over 100 OpenFlexure microscopes have been printed in Tanzania and Kenya, demonstrating the viability of a complex piece of hardware being conceptualised in one part of the world and manufactured elsewhere.

Co-creator Dr Richard Bowman said, “Our Tanzanian partners, STICLab, have modified the design to better suit their local market, demonstrating another key strength of open source hardware — the ability to customise, improve, and take ownership of a product.”

COVID-19 AND 3D PRINTED MEDICAL DEVICES

There has been a surge of interest in 3D printers since the onset of the pandemic, with many projects springing up around the world to develop low-cost, open-source 3D ventilators — or ventilator parts — to address the global shortage.

However, a piece of medical hardware requires years of detailed safety checks before it can be trusted for medical or laboratory use — the OpenFlexure Microscope project, for instance, has taken five years to complete. The Bath team believes it is highly unlikely that a new ventilator will be designed and approved during the course of this pandemic. They say it is much more likely that modifications of existing designs will be chosen by health authorities, where this is an option.

Dr Bowman, who has been working on the OpenFlexure project since its inception, first from the University of Cambridge and then from the Department of Physics at Bath, said, “Building a safety-critical medical device like a ventilator takes years for an organisation with hundreds of experienced engineers and an established quality management system. Making a ventilator that works in a few weeks is an impressive achievement, but ensuring it complies with even the relaxed, emergency version of the rules takes a lot longer than creating the initial design. Demonstrating to a regulator that the design and the manufacturing process meet all the requirements will be even harder.”

He added, “The flip side is that the medical device industry is very conservatively regulated, and it would be a good thing if all of this new attention (in 3D printed hardware) means there’s some rethinking done about how we can uphold high safety standards but make it easier to build something if you’re not a mega corporation.”

Story Source:

Materials provided by University of Bath. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

Navigating the clean energy transition during the COVID-19 crisis

The COVID-19 pandemic emerged at a time when climate and energy policies were experiencing greater attention and — in some cases — greater momentum. But the ensuing global health emergency and economic crisis mean that the circumstances under which these climate and energy policies were conceived have drastically changed. In a Commentary published April 29 in the journal Joule, energy and climate policy researchers in Switzerland and Germany provide a framework for responsibly and meaningfully integrating policies supporting the clean energy transition into the COVID-19 response in the weeks, months, and years to come.

“We’re writing this commentary as COVID-19 fundamentally changes the economic environment of the clean energy transition, requiring policy makers to take major decisions within short timeframes,” says senior author Tobias S. Schmidt of ETH Zurich. “While many blogs or comments put forward ‘shopping’ lists of which policies to enact or which technologies to support, much of the advice lacked structure.”

In their Commentary, Schmidt and his colleagues argue against small “green wins” in the short-term that could prevent meaningful change in the long-term. “Bailouts should exclude sectors that are clearly incompatible with the Paris Agreement, such as tar sands development, but at the same time, bailout decisions primarily have to consider the societal value of uninterrupted service and of safeguarding jobs,” Schmidt says. “Instead, policymakers should consider increasing their leverage to shape business activities for Paris Agreement-compatible pathways in the future, for instance, by taking equity stakes or securing a say in the future strategy of bailed-out corporations.”

“The general public should understand that the short-term emissions reductions we are experiencing due to the lockdowns will not have major effects on climate change,” Schmidt says. “To decarbonize our energy systems and industry, we need structural change, meaning more and not less investment.”

Once the immediate crisis has passed, when many countries will have to address a major economic downturn, the authors say that low interest rates and massive public spending could offer important opportunities for the clean energy transition. “It is essential that we not repeat the mistakes of the post-financial crisis bailouts, which often led to massive increases in CO2 emissions,” says Schmidt.

Going forward, he says, “We think the COVID-19 pandemic has reminded us that we require policies that are proof against exogenous shocks, and we hope that future research will support policy makers in developing shock-proof policy designs.”

This work was supported by the Swiss State Secretariat for Education, Research and Innovation (SERI) as part of the European Union’s Horizon 2020 research and innovation program project INNOPATHS.

Story Source:

Materials provided by Cell Press. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ProgrammableWeb

These COVID-19 APIs Have Drawn the Most Developer Interest

Since February, ProgrammableWeb has been following the COVID-19 outbreak. During that time we have tried to keep our readers up to date on new resources being made available to developers who want to join the fight against the pandemic. For example, to support developer efforts, over 60 COVID-19 related APIs have been released along with all kinds of open-source code, data sets, and other tools. You can visit our COVID-19 Developer Resource Center to learn more.

One question that has arisen is which APIs have gained the most traction with developers. Every API in the ProgrammableWeb directory offers a tracking capability that notifies developers of any relevant updates (new versions, news, etc.) through a weekly personalized Watchlist. This functionality extends to all directory content including SDKs, Sample Code, Libraries, and Frameworks. Entire categories can also be tracked. If a reader is interested in the COVID-19 category, for example, they can track it to receive weekly notifications that will tell them about new content that has been added to the category — including directory assets and news stories — or updates to existing content.

On the ProgrammableWeb side of things, this tracking data gives us insight into the APIs our readers are most interested in. With that in mind, we looked at which COVID-19 related APIs have the most followers. There are currently 68 APIs tagged with the COVID-19 category; these are the top five in terms of followers.

NovelCOVID – This RESTful API is an open-source collaboration hosted on GitHub. It is a free API that returns current information about the COVID-19 outbreak. The API supports country-specific responses and allows queries on the following parameters: cases, today’s cases, deaths, today’s deaths, recovered, and critical (a request sketch follows the list). You can read our full coverage of this API.

Bing COVID-19 Data – The Bing COVID-19 Data API provides total confirmed cases, deaths, and recoveries by country. Data is sourced from the US Centers for Disease Control and Prevention (CDC), World Health Organization (WHO), and the European Centre for Disease Prevention and Control (ECDC). The API is used as a source for a live map tracker from Microsoft Bing.

COVID19INDIA – This API is part of a volunteer-driven, crowdsourced database project for COVID-19 stats and patient tracing in India. The API returns daily confirmed cases, daily deceased cases, and daily recovered cases as time-series data. This information is also available cumulatively and per district.

About Corona COVID-19 – The About Corona COVID-19 API provides a RESTful interface that allows users to query data from The World Health Organization Situation Reports, the Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE), The U.S. Department of Health & Human Services, The National Health Commission of the People’s Republic of China, the ECDC, and the Chinese Centre for Disease Control and Prevention (CCDC).

This API retrieves data by country including population, the number of confirmed cases, recovered cases, critical cases, deaths, recovered cases per death ratio, cases per million population, and more. The data is updated multiple times a day.

Nubentos COVID-19 Tracking – This API is from the self-proclaimed API marketplace for health, and it aims to provide valuable resources for tracking the COVID-19 outbreak. It provides developers access to data collected from global health organizations and local administrations including the WHO, the CDC, the CCDC, China’s National Health Commission, and the Chinese Website DXY. We covered this API in March.
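
As a quick illustration of the NovelCOVID API mentioned above, the request below fetches country-level totals. The host shown is the disease.sh deployment the project has used; treat the exact URL as an assumption and check the project's GitHub page for the current endpoint.

```python
# Sketch: querying the NovelCOVID API for one country's totals.
# The host is an assumption; see the project's GitHub page for the
# currently documented endpoint.
import json
from urllib import request

URL = "https://disease.sh/v3/covid-19/countries/USA"  # assumed endpoint

with request.urlopen(URL) as resp:
    data = json.load(resp)

print(data["cases"], data["deaths"], data["recovered"])
```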

Go to Source
Author: wsantos

Categories
ScienceDaily

Catalyst enables reactions with the help of green light

For the first time, chemists at the University of Bonn and Lehigh University in Bethlehem (USA) have developed a titanium catalyst that makes light usable for selective chemical reactions. It provides a cost-effective and non-toxic alternative to the ruthenium and iridium catalysts used so far, which are based on very expensive and toxic metals. The new catalyst can be used to produce highly selective chemical products that can provide the basis for antiviral drugs or luminescent dyes, for example. The results have been published in the international edition of the journal Angewandte Chemie.

The electrons in chemical molecules are reluctant to lead a single life; they usually occur in pairs. Paired up, they are particularly stable and do not tend to forge new partnerships in the form of new bonds. However, if some of the electrons are brought to a higher energy level with the help of light (photons), things begin to look different when it comes to this “monogamy”: in such an excited state, the molecules like to donate or accept an electron. This creates so-called “radicals,” which have unpaired electrons, are highly reactive, and can be used to form new bonds.

Irradiation with green light

The new catalyst is based on this principle: At its core is titanium, which is connected to a carbon ring in which the electrons are particularly mobile and can be easily excited. Green light is sufficient to use the catalyst for electron transfer to produce reactive organic intermediates that are otherwise not easily obtainable. “In the laboratory, we irradiated a reaction flask containing the titanium catalyst that can be viewed as a ‘red dye’ with green light,” reports Prof. Dr. Andreas Gansäuer from the Kekulé Institute of Organic Chemistry and Biochemistry at the University of Bonn. “And it worked right away.” The mixture generates radicals from organic molecules that initiate many reaction cycles from which a wide variety of chemical products can be produced.

A key factor in reactions with this photo redox catalyst is the wavelength of the light used for irradiation. “Ultraviolet radiation is unsuitable because it is far too energy-rich and would destroy the organic compounds,” says Gansäuer. Green light from LED lamps is both mild and energy-rich enough to trigger the reaction.

Catalysts are substances that increase the speed of chemical reactions and reduce the activation energy without being consumed themselves. This means that they are available continuously and can trigger reactions that would otherwise not occur in this form. The catalyst can be tailored to the desired products depending on the organic molecule with which the titanium is bonded.

Building blocks for antiviral drugs or luminescent dyes

The new titanium catalyst facilitates reactions of epoxides, a group of chemicals from which epoxy resins are made. These are used as adhesives or for composites. However, the scientists are not aiming for this mass product but for the synthesis of much more valuable fine chemicals. “The titanium-based, tailor-made photo redox catalysts can for instance be used to produce building blocks for antiviral drugs or luminescent dyes,” says Gansäuer. He is confident that these new catalysts offer a cost-effective and more sustainable alternative to the ruthenium and iridium catalysts used so far, which are based on very expensive and toxic metals.

The development is an international collaborative effort by Zhenhua Zhang, Tobias Hilche, Daniel Slak, Niels Rietdijk and Andreas Gansäuer from the University of Bonn and Ugochinyere N. Oloyede and Robert A. Flowers II from Lehigh University (USA). While the scientists from the University of Bonn investigated how the desired compounds could best be synthesized with the new catalyst, their colleagues from the USA carried out measurements to prove the reaction pathways. “The luminescence phenomenon really opens up interesting space to consider the design of new sustainable reactions that proceed through free radical intermediates,” says Prof. Robert Flowers of Lehigh University.

Story Source:

Materials provided by University of Bonn. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

Breaking the size and speed limit of modulators: The workhorses of the internet

Researchers have developed and demonstrated for the first time a silicon-based electro-optical modulator that is smaller than, as fast as, and more efficient than state-of-the-art technologies. By adding indium tin oxide (ITO) — a transparent conductive oxide found in touchscreen displays and solar cells — to a silicon photonic chip platform, the researchers were able to create a compact device 1 micrometer in size that yields gigahertz-fast, or 1 billion times per second, signal modulation.

Electro-optical modulators are the workhorses of the internet. They convert electrical data from computers and smartphones to optical data streams for fiber optic networks, enabling modern data communications like video streaming. The new invention is timely since demand for data services is growing rapidly and moving towards next generation communication networks. Taking advantage of their compact footprint, electro-optic converters can be utilized as transducers in optical computing hardware such as optical artificial neural networks that mimic the human brain and a plethora of other applications for modern-day life.

THE SITUATION

Electro-optical modulators in use today are typically between 1 millimeter and 1 centimeter in size. Reducing their size allows increased packaging density, which is vital on a chip. While silicon often serves as the passive structure on which photonic integrated circuits are built, the light-matter interaction of silicon materials induces a rather weak optical index change, requiring a larger device footprint. While resonators could be used to boost this weak electro-optical effect, they narrow devices’ optical operating range and incur high energy consumption from the required heating elements.

THE SOLUTION

By heterogeneously adding a thin material layer of indium tin oxide to the silicon photonic waveguide chip, researchers at the George Washington University, led by Volker Sorger, an associate professor of electrical and computer engineering, have demonstrated an optical index change 1,000 times larger than silicon. Unlike many designs based on resonators, this spectrally-broadband device is stable against temperature changes and allows a single fiber-optic cable to carry multiple wavelengths of light, increasing the amount of data that can move through a system.

FROM THE RESEARCHER

“We are delighted to have achieved this decade-long goal of demonstrating a GHz-fast ITO modulator. This sets a new horizon for next-generation photonic reconfigurable devices with enhanced performance yet reduced size,” said Dr. Sorger.

Story Source:

Materials provided by George Washington University. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

Time on screens has little impact on kids’ social skills

Despite the time spent with smartphones and social media, young people today are just as socially skilled as those from the previous generation, a new study suggests.

Researchers compared teacher and parent evaluations of children who started kindergarten in 1998 — six years before Facebook launched — with those who began school in 2010, when the first iPad debuted.

Results showed both groups of kids were rated similarly on interpersonal skills such as the ability to form and maintain friendships and get along with people who are different. They were also rated similarly on self-control, such as the ability to regulate their temper.

In other words, the kids are still all right, said Douglas Downey, lead author of the study and professor of sociology at The Ohio State University.

“In virtually every comparison we made, either social skills stayed the same or actually went up modestly for the children born later,” Downey said.

“There’s very little evidence that screen exposure was problematic for the growth of social skills.”

Downey conducted the study with Benjamin Gibbs, associate professor of sociology at Brigham Young University. The study was just published online in the American Journal of Sociology.

The idea for the study came several years ago when Downey had an argument at a pizza restaurant with his son, Nick, about whether social skills had declined among the new generation of youth.

“I started explaining to him how terrible his generation was in terms of their social skills, probably because of how much time they spent looking at screens,” Downey said.

“Nick asked me how I knew that. And when I checked there really wasn’t any solid evidence.”

So Downey, with his colleague, decided to investigate. For their study, they used data from the Early Childhood Longitudinal Study, which is run by the National Center for Education Statistics.

The ECLS follows children from kindergarten to fifth grade. The researchers compared data on the ECLS-K cohort that included children who began kindergarten in 1998 (19,150 students) with the cohort that began kindergarten in 2010 (13,400 students).

Children were assessed by teachers six times between the start of kindergarten and the end of fifth grade. They were assessed by parents at the beginning and end of kindergarten and the end of first grade.

Downey and Gibbs focused mostly on the teacher evaluations, because they followed children all the way to fifth grade, although the results from parents were comparable.

Results showed that from the teachers’ perspective, children’s social skills did not decline between the 1998 and 2010 groups. And similar patterns persisted as the children progressed to fifth grade.

In fact, teachers’ evaluations of children’s interpersonal skills and self-control tended to be slightly higher for those in the 2010 cohort than those in the 1998 group, Downey said.

Even children within the two groups who had the heaviest exposure to screens showed similar development in social skills compared to those with little screen exposure, results showed.

There was one exception: Social skills were slightly lower for children who accessed online gaming and social networking sites many times a day.

“But even that was a pretty small effect,” Downey said.

“Overall, we found very little evidence that the time spent on screens was hurting social skills for most children.”

Downey said while he was initially surprised to see that time spent on screens didn’t affect social skills, he really shouldn’t have been.

“There is a tendency for every generation at my age to start to have concerns about the younger generation. It is an old story,” he said.

These worries often involve “moral panic” over new technology, Downey explained. Adults are concerned when technological change starts to undermine traditional relationships, particularly the parent-child relationship.

“The introduction of telephones, automobiles, radio all led to moral panic among adults of the time because the technology allowed children to enjoy more autonomy,” he said.

“Fears over screen-based technology likely represent the most recent panic in response to technological change.”

If anything, new generations are learning that having good social relationships means being able to communicate successfully both face-to-face and online, Downey said.

“You have to know how to communicate by email, on Facebook and Twitter, as well as face-to-face. We just looked at face-to-face social skills in this study, but future studies should look at digital social skills as well.”

Go to Source
Author:

Categories
ScienceDaily

In a first, NASA measures wind speed on a brown dwarf

For the first time, scientists have directly measured wind speed on a brown dwarf, an object larger than Jupiter (the largest planet in our solar system) but not quite massive enough to become a star. To achieve the finding, they used a new method that could also be applied to learn about the atmospheres of gas-dominated planets outside our solar system.

Described in a paper in the journal Science, the work combines observations by a group of radio telescopes with data from NASA’s recently retired infrared observatory, the Spitzer Space Telescope, managed by the agency’s Jet Propulsion Laboratory in Southern California.

Officially named 2MASS J10475385+2124234, the target of the new study was a brown dwarf located 32 light-years from Earth — a stone’s throw away, cosmically speaking. The researchers detected winds moving around the object at 1,425 mph (2,293 kph). For comparison, Neptune’s atmosphere features the fastest winds in the solar system, which whip through at more than 1,200 mph (about 2,000 kph).

Measuring wind speed on Earth means clocking the motion of our gaseous atmosphere relative to the planet’s solid surface. But brown dwarfs are composed almost entirely of gas, so “wind” refers to something slightly different. The upper layers of a brown dwarf are where portions of the gas can move independently. At a certain depth, the pressure becomes so intense that the gas behaves like a single, solid ball that is considered the object’s interior. As the interior rotates, it pulls the upper layers (the atmosphere) along, so that the two are almost in sync.

In their study, the researchers measured the slight difference in speed of the brown dwarf’s atmosphere relative to its interior. With an atmospheric temperature of over 1,100 degrees Fahrenheit (600 degrees Celsius), this particular brown dwarf radiates a substantial amount of infrared light. Coupled with its close proximity to Earth, this characteristic made it possible for Spitzer to detect features in the brown dwarf’s atmosphere as they rotate in and out of view. The team used those features to clock the atmospheric rotation speed.

To determine the speed of the interior, they focused on the brown dwarf’s magnetic field. A relatively recent discovery found that the interiors of brown dwarfs generate strong magnetic fields. As the brown dwarf rotates, the magnetic field accelerates charged particles that in turn produce radio waves, which the researchers detected with the radio telescopes in the Karl G. Jansky Very Large Array in New Mexico.

Planetary Atmospheres

The new study is the first to demonstrate this comparative method for measuring wind speed on a brown dwarf. To gauge its accuracy, the group tested the technique using infrared and radio observations of Jupiter, which is also composed mostly of gas and has a physical structure similar to a small brown dwarf. The team compared the rotation rates of Jupiter’s atmosphere and interior using data that was similar to what they were able to collect for the much more distant brown dwarf. They then confirmed their calculation for Jupiter’s wind speed using more detailed data collected by probes that have studied Jupiter up close, thus demonstrating that their approach for the brown dwarf worked.
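
To make the comparison concrete, here is a small sketch of the arithmetic: the wind speed is the difference between the atmospheric and interior rotation speeds at the object's radius. The numbers below are illustrative assumptions chosen to land near the reported ~2,300 kph; they are not the paper's measured values.

```python
# Sketch: wind speed from the difference between atmospheric and
# interior rotation periods. All numbers are illustrative assumptions,
# not the values measured in the paper.
import math

RADIUS_KM = 70_000      # roughly Jupiter-sized radius (assumption)
P_INTERIOR_H = 1.758    # interior (radio) rotation period, hours (assumption)
P_ATMOSPHERE_H = 1.742  # atmospheric (infrared) period, hours (assumption)

circumference_km = 2 * math.pi * RADIUS_KM
v_interior = circumference_km / P_INTERIOR_H      # km/h
v_atmosphere = circumference_km / P_ATMOSPHERE_H  # km/h

print(f"wind speed ~ {v_atmosphere - v_interior:.0f} km/h")  # ~2300 km/h
```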

Scientists have previously used Spitzer to infer the presence of winds on exoplanets and brown dwarfs based on variations in the brightness of their atmospheres in infrared light. And data from the High Accuracy Radial velocity Planet Searcher (HARPS) — an instrument on the European Southern Observatory’s La Silla telescope in Chile — has been used to make a direct measurement of wind speeds on a distant planet.

But the new paper represents the first time scientists have directly compared the atmospheric speed with the speed of a brown dwarf’s interior. The method employed could be applied to other brown dwarfs or to large planets if the conditions are right, according to the authors.

“We think this technique could be really valuable to providing insight into the dynamics of exoplanet atmospheres,” said lead author Katelyn Allers, an associate professor of physics and astronomy at Bucknell University in Lewisburg, Pennsylvania. “What’s really exciting is being able to learn about how the chemistry, the atmospheric dynamics and the environment around an object are interconnected, and the prospect of getting a really comprehensive view into these worlds.”

The Spitzer Space Telescope was decommissioned on Jan. 30, 2020, after more than 16 years in space. JPL managed Spitzer mission operations for NASA’s Science Mission Directorate in Washington. Spitzer science data continue to be analyzed by the science community via the Spitzer data archive located at the Infrared Science Archive housed at IPAC at Caltech. Science operations were conducted at the Spitzer Science Center at IPAC at Caltech in Pasadena. Spacecraft operations were based at Lockheed Martin Space in Littleton, Colorado. Caltech manages JPL for NASA.

For more information about Spitzer, visit:

Go to Source
Author:

Categories
ScienceDaily

Magnetic monopoles detected in Kagome spin ice systems

Magnetic monopoles were detected for the first time worldwide at the Berlin Neutron Source BER II in 2008, at that time in a one-dimensional spin system of a dysprosium compound. About 10 years ago, monopole quasi-particles were also detected in two-dimensional spin-ice systems consisting of tetrahedral crystal units. However, those spin-ice materials were electrical insulators.

Now: Magnetic monopoles in a metal

Dr. Kan Zhao and Prof. Philipp Gegenwart from the University of Augsburg, together with teams from the Heinz Maier-Leibnitz Centre, Forschungszentrum Jülich, the University of Colorado, the Academy of Sciences in Prague and the Helmholtz-Zentrum Berlin, have now shown for the first time that a metallic compound can also form such magnetic monopoles. The team in Augsburg prepared crystalline samples from the elements holmium, silver and germanium for this purpose.

Kagome spin-ice system means frustration

In the HoAgGe crystals, the magnetic moments (spins) of the holmium atoms form a so-called two-dimensional Kagome pattern. This name comes from the Japanese Kagome braiding art, in which the braiding bands are not woven at right angles to each other, but in such a way that triangular patterns are formed.

In the Kagome pattern, the spins of neighbouring atoms cannot all be aligned antiparallel to each other as usual. Instead, there are two permitted spin configurations: either the spins of two of the three atoms point exactly towards the center of the triangle while that of the third points out of it, or it is exactly the other way round: one spin points to the center and the other two point out. This limits the possible spin arrangements — hence the name “Kagome spin ice.” One consequence is that the system behaves as if magnetic monopoles were present in it; a small sketch of this counting follows.
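
A tiny sketch of that counting: of the eight possible in/out combinations for the three spins on a triangle, only the six that are two-in-one-out or one-in-two-out satisfy the ice rule.

```python
# Sketch: the Kagome ice rule. Each of the three spins on a triangle
# points in (+1) or out (-1); allowed states are two-in-one-out or
# one-in-two-out, i.e. the spins never sum to +3 or -3.
from itertools import product

allowed = [s for s in product((+1, -1), repeat=3) if abs(sum(s)) == 1]
print(f"{len(allowed)} of 8 configurations satisfy the ice rule")  # 6 of 8
```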

This behaviour has now been experimentally demonstrated for the first time in HoAgGe crystals by the collaboration led by the Augsburg researchers. They cooled the samples to near absolute zero and examined them under external magnetic fields of varying strength. Part of the experiments were carried out at the Heinz Maier-Leibnitz Centre in Garching near Munich, supported by the HZB's sample environment department, which provided a superconducting cryomagnet for the experiments at the FRM-II.

Data on the spin energy spectrum at NEAT

Thus they were able to generate the different spin arrangements expected in a Kagome spin ice system. Model calculations by the Augsburg research team showed what the energy spectrum of the spins should look like. This energy spectrum could then be measured using inelastic neutron scattering at the NEAT instrument at the Berlin neutron source. “This was the final building block for detecting the magnetic monopoles in this system. The agreement with the theoretically predicted spectra is really excellent,” says Dr. Margarita Russina, who is responsible for the NEAT instrument at HZB.

Story Source:

Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie. Note: Content may be edited for style and length.

Go to Source
Author: