First measurements of radiation levels on the moon

In the coming years and decades, various nations plan to explore the moon and to send astronauts there again. But on our inhospitable satellite, space radiation poses a significant risk. The Apollo astronauts carried dosimeters that recorded only the total radiation exposure accumulated over their entire expedition to the moon and back. In the current issue (25 September) of the journal Science Advances, Chinese and German scientists report for the first time on time-resolved measurements of radiation on the moon.

The “Lunar Lander Neutron and Dosimetry” (LND) instrument was developed and built at Kiel University on behalf of the Space Administration at the German Aerospace Center (DLR), with funding from the Federal Ministry for Economic Affairs and Energy (BMWi). The measurements taken by the LND allow the calculation of the so-called equivalent dose, which is important for estimating the biological effects of space radiation on humans. “The radiation exposure we have measured is a good benchmark for the radiation within an astronaut suit,” said Thomas Berger of the German Aerospace Center in Cologne, co-author of the publication.

The measurements show an equivalent dose rate of about 60 microsieverts per hour. For comparison, the dose rate on a long-haul flight from Frankfurt to New York is about 5 to 10 times lower, and on the ground it is well over 200 times lower. Since astronauts would be on the moon for much longer than passengers flying to New York and back, this represents considerable exposure, said Robert Wimmer-Schweingruber from Kiel University, whose team developed and built the instrument. “We humans are not really made to withstand space radiation. However, astronauts can and should shield themselves as far as possible during longer stays on the moon, for example by covering their habitat with a thick layer of lunar soil,” explained second author Wimmer-Schweingruber. “During long-term stays on the moon, the astronauts’ risk of getting cancer and other diseases could thus be reduced,” added co-author Christine Hellweg from the German Aerospace Center.
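To get a rough sense of scale, the quoted ratios can be turned into cumulative doses. The sketch below is illustrative only: the flight and ground dose rates are back-calculated from the article’s “5 to 10 times lower” and “well over 200 times lower” comparisons, not measured values, and the 14-day stay is a hypothetical example.

```python
# Illustrative comparison of cumulative equivalent doses, based on figures
# quoted in the article (the flight/ground rates are assumptions, not data).

lunar_rate_uSv_per_h = 60.0            # LND measurement on the lunar surface
flight_rate_uSv_per_h = 60.0 / 7.5     # "5 to 10 times lower" -> midpoint of ~8 uSv/h
ground_rate_uSv_per_h = 60.0 / 200.0   # "well over 200 times lower" -> upper bound of ~0.3 uSv/h

def dose_mSv(rate_uSv_per_h, hours):
    """Cumulative equivalent dose in millisieverts for a given exposure time."""
    return rate_uSv_per_h * hours / 1000.0

# An 8-hour Frankfurt-New York flight vs. a hypothetical 14-day lunar stay.
print(f"8 h flight:                  {dose_mSv(flight_rate_uSv_per_h, 8):.2f} mSv")
print(f"14 days on the moon:         {dose_mSv(lunar_rate_uSv_per_h, 14 * 24):.1f} mSv")
print(f"1 year on Earth (upper bound): {dose_mSv(ground_rate_uSv_per_h, 365 * 24):.1f} mSv")
```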

The measurements were taken on board the Chinese lunar lander Chang’e-4, which landed on the far side of the moon on 3 January 2019. The device from Kiel takes measurements during the lunar “daylight” and, like all other scientific equipment, switches off during the very cold, nearly two-week-long lunar night to conserve battery power. The device and lander were scheduled to take measurements for at least a year and have now exceeded this goal. The data from the device and the lander are transmitted back to Earth via the relay satellite Queqiao, which is located behind the moon.

The data obtained also has some relevance with respect to future interplanetary missions. Since the moon has neither a protective magnetic field nor an atmosphere, the radiation field on the surface of the moon is similar to that in interplanetary space, apart from the shielding by the moon itself. “This is why the measurements taken by the LND will also be used to review and further develop models that can be used for future missions. For example, if a manned mission departs to Mars, the new findings enable us to reliably estimate the anticipated radiation exposure in advance. That’s why it is important that our detector also allows us to measure the composition of the radiation,” said Wimmer-Schweingruber.

Story Source:

Materials provided by Kiel University. Note: Content may be edited for style and length.


Modern theory from ancient impacts

Around 4 billion years ago, the solar system was far less hospitable than we find it now. Many of the large bodies we know and love were present, but probably looked considerably different, especially the Earth. We know from a range of sources, including ancient meteorites and planetary geology, that around this time there were vastly more collisions between, and impacts from, asteroids originating in the Mars-Jupiter asteroid belt.

Knowledge of these events is especially important to us, as the time period in question is not only when the surface of our planet was taking on a more recognizable form, but also when life was just getting started. A more accurate picture of Earth’s rocky history could help researchers answer long-standing questions concerning the mechanisms responsible for life, as well as provide information for other areas of life science.

“Meteorites provide us with the earliest history of ourselves,” said Professor Yuji Sano from the Atmosphere and Ocean Research Institute at the University of Tokyo. “This is what fascinated me about them. By studying properties, such as radioactive decay products, of meteorites that fell to Earth, we can deduce when they came and where they came from. For this study we examined meteorites that came from Vesta, the second-largest asteroid after the dwarf planet Ceres.”

Sano and his team found evidence that Vesta was hit by multiple impacting bodies around 4.4 billion to 4.15 billion years ago. This is earlier than 3.9 billion years ago, when the late heavy bombardment (LHB) is thought to have occurred. Current evidence for the LHB comes from lunar rocks collected during the Apollo moon missions of the 1970s, as well as other sources. But these new studies are improving upon previous models and will pave the way for an up-to-date database of early solar system impact records.

“That Vesta-origin meteorites clearly show us impacts earlier than the LHB raises the question, ‘Did the late heavy bombardment truly occur?'” said Sano. “It seems to us that early solar system impacts peaked sooner than the LHB and reduced smoothly with time. It may not have been the cataclysmic period of chaos that current models describe.”

Story Source:

Materials provided by University of Tokyo. Note: Content may be edited for style and length.


Single photons from a silicon chip

Quantum technology holds great promise: Just a few years from now, quantum computers are expected to revolutionize database searches, AI systems, and computational simulations. Even today, quantum cryptography can guarantee absolutely secure data transfer, albeit with limitations. The greatest possible compatibility with our current silicon-based electronics will be a key advantage. And that is precisely where physicists from the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and TU Dresden have made remarkable progress: The team has designed a silicon-based light source to generate single photons that propagate well in glass fibers.

Quantum technology relies on the ability to control the behavior of quantum particles as precisely as possible, for example by locking individual atoms in magnetic traps or by sending individual light particles — called photons — through glass fibers. The latter is the basis of quantum cryptography, a communication method that is, in principle, tap-proof: Any would-be data thief intercepting the photons unavoidably destroys their quantum properties. The senders and receivers of the message will notice that and can stop the compromised transmission in time.

This requires light sources that deliver single photons. Such systems already exist, most notably based on diamond, but they have one flaw: “These diamond sources can only generate photons at frequencies that are not suitable for fiber optic transmission,” explains HZDR physicist Dr. Georgy Astakhov, “which is a significant limitation for practical use.” So Astakhov and his team decided to use a different material — the tried and tested electronic base material silicon.

100,000 single photons per second

To make the material generate the infrared photons required for fiber optic communication, the experts subjected it to a special treatment, selectively shooting carbon into the silicon with an accelerator at the HZDR Ion Beam Center. This created what are called G-centers in the material: two adjacent carbon atoms coupled to a silicon atom, forming a sort of artificial atom.

When irradiated with red laser light, this artificial atom emits the desired infrared photons at a wavelength of 1.3 micrometers, which is excellently suited to fiber optic transmission. “Our prototype can produce 100,000 single photons per second,” Astakhov reports. “And it is stable. Even after several days of continuous operation, we haven’t observed any deterioration.” However, the system only works in extremely cold conditions — the physicists use liquid helium to cool it down to a temperature of minus 268 degrees Celsius.
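As a quick check on why a 1.3-micrometer wavelength suits glass fibers, the photon’s frequency and energy can be computed directly from the wavelength. This is a generic physics calculation, not part of the published work.

```python
# Photon frequency and energy for the 1.3 micrometer emission wavelength.
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electronvolt

wavelength = 1.3e-6                     # meters
frequency = c / wavelength              # ~230 THz, in the low-loss telecom window
energy_eV = h * c / (wavelength * eV)   # ~0.95 eV per photon

print(f"frequency: {frequency / 1e12:.0f} THz")
print(f"energy:    {energy_eV:.2f} eV")
```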

“We were able to show for the first time that a silicon-based single-photon source is possible,” Astakhov’s colleague Dr. Yonder Berencén is happy to report. “This basically makes it possible to integrate such sources with other optical components on a chip.” Among other things, it would be of interest to couple the new light source with a resonator to solve the problem that infrared photons largely emerge from the source randomly. For use in quantum communication, however, it would be necessary to generate photons on demand.

Light source on a chip

This resonator could be tuned to exactly hit the wavelength of the light source, which would make it possible to increase the number of generated photons to the point that they are available at any given time. “It has already been proven that such resonators can be built in silicon,” reports Berencén. “The missing link was a silicon-based source for single photons. And that’s exactly what we’ve now been able to create.”

But before they can consider practical applications, the HZDR researchers still have to solve some problems — such as a more systematic production of the new telecom single-photon sources. “We will try to implant the carbon into silicon with greater precision,” explains Georgy Astakhov. “HZDR with its Ion Beam Center provides an ideal infrastructure for realizing ideas like this.”

Story Source:

Materials provided by Helmholtz-Zentrum Dresden-Rossendorf. Note: Content may be edited for style and length.


Unique supernova explosion

One hundred million light years away from Earth, an unusual supernova is exploding.

That exploding star — which is known as “supernova LSQ14fmg” — was the faraway object discovered by a 37-member international research team led by Florida State University Assistant Professor of Physics Eric Hsiao. Their research, which was published in the Astrophysical Journal, helped uncover the origins of the group of supernovae this star belongs to.

This supernova’s characteristics — it gets brighter extremely slowly, and it is also one of the brightest explosions in its class — are unlike any other.

“This was a truly unique and strange event, and our explanation for it is equally interesting,” said Hsiao, the paper’s lead author.

The exploding star is what is known as a Type Ia supernova, and more specifically, a member of the “super-Chandrasekhar” group.

Stars go through a sort of life cycle, and these supernovae are the exploding finale of some stars with low mass. They are so powerful that they shape the evolution of galaxies, and so bright that we can observe them from Earth even halfway across the observable universe.

Image caption: The “Blue Snowball” planetary nebula, imaged with the Florida State University Observatory. The supernova LSQ14fmg exploded in a system similar to this, with a central star losing a copious amount of mass through a stellar wind. When the mass loss abruptly stopped, it created a ring of material surrounding the star. Courtesy of Eric Hsiao.

Type Ia supernovae were crucial tools for discovering what’s known as dark energy, which is the name given to the unknown energy that causes the current accelerated expansion of the universe. Despite their importance, astronomers knew little about the origins of these supernova explosions, other than that they are the thermonuclear explosions of white dwarf stars.

But the research team knew that the light from a Type Ia supernova rises and falls over the course of weeks, powered by the radioactive decay of nickel produced in the explosion. A supernova of that type would get brighter as the nickel becomes more exposed, then fainter as the supernova cools and the nickel decays to cobalt and to iron.
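The rise and fall described here follows the two-step decay chain Ni-56 to Co-56 to Fe-56. The sketch below uses the standard Bateman solution with textbook half-lives (roughly 6.1 days for Ni-56 and 77 days for Co-56); it is a generic illustration, not the light-curve model used in the paper.

```python
import numpy as np

# Two-step radioactive decay chain Ni-56 -> Co-56 -> Fe-56 (Bateman equations).
# Half-lives are textbook values, not quantities fitted to LSQ14fmg.
t_half_ni = 6.1    # days
t_half_co = 77.2   # days
lam_ni = np.log(2) / t_half_ni
lam_co = np.log(2) / t_half_co

t = np.linspace(0, 200, 401)   # days since explosion
n_ni = np.exp(-lam_ni * t)     # Ni-56 remaining (fraction of initial amount)
n_co = lam_ni / (lam_co - lam_ni) * (np.exp(-lam_ni * t) - np.exp(-lam_co * t))

# Energy deposition is roughly proportional to these decay rates, each weighted
# by the (different) energy released per decay; the weighting is omitted here.
decay_rate = lam_ni * n_ni + lam_co * n_co

print(f"fraction of Ni-56 left after 20 days: {np.exp(-lam_ni * 20):.3f}")
print(f"Co-56 abundance peaks about {t[np.argmax(n_co)]:.0f} days after explosion")
```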

After collecting data with telescopes in Chile and Spain, the research team saw that the supernova was hitting some material surrounding it, which caused more light to be released along with the light from the decaying nickel. They also saw evidence that carbon monoxide was being produced. Those observations led to their conclusion — the supernova was exploding inside what had been an asymptotic giant branch (AGB) star on the way to becoming a planetary nebula.

“Seeing how the observation of this interesting event agrees with the theory is very exciting,” said Jing Lu, an FSU doctoral candidate and a co-author of the paper.

They theorized that the explosion was triggered by the merger of the core of the AGB star and another white dwarf star orbiting within it. The central star was losing a copious amount of mass through a stellar wind before the mass loss was turned off abruptly and created a ring of material surrounding the star. Soon after the supernova exploded, it impacted a ring of material often seen in planetary nebulae and produced the extra light and the slow brightening observed.

“This is the first strong observational proof that a Type Ia supernova can explode in a post-AGB or proto-planetary-nebula system and is an important step in understanding the origins of Type Ia supernovae,” Hsiao said. “These supernovae can be particularly troublesome because they can mix into the sample of normal supernovae used to study dark energy. This research gives us a better understanding of the possible origins of Type Ia supernovae and will help to improve future dark energy research.”

Story Source:

Materials provided by Florida State University. Original written by Bill Wellock. Note: Content may be edited for style and length.


New neural network differentiates Middle and Late Stone Age toolkits

Middle Stone Age (MSA) toolkits first appear some 300 thousand years ago, at the same time as the earliest fossils of Homo sapiens, and were still in use 30 thousand years ago. However, from 67 thousand years ago, changes in stone tool production indicate a marked shift in behaviour; the new toolkits that emerged are labelled Late Stone Age (LSA) and remained in use into the recent past. A growing body of evidence suggests that the transition from MSA to LSA was not a linear process, but occurred at different times in different places. Understanding this process is important for examining what drives cultural innovation and creativity, and what explains this critical behavioural change. Defining differences between the MSA and LSA is an important step towards this goal.

“Eastern Africa is a key region to examine this major cultural change, not only because it hosts some of the youngest MSA sites and some of the oldest LSA sites, but also because the large number of well excavated and dated sites make it ideal for research using quantitative methods,” says Dr. Jimbob Blinkhorn, an archaeologist from the Pan African Evolution Research Group, Max Planck Institute for the Science of Human History and the Centre for Quaternary Research, Department of Geography, Royal Holloway. “This enabled us to pull together a substantial database of changing patterns of stone tool production and use, spanning 130 to 12 thousand years ago, to examine the MSA-LSA transition.”

The study examines the presence or absence of 16 alternate tool types across 92 stone tool assemblages, but rather than focusing on them individually, emphasis is placed on the constellations of tool forms that frequently occur together.

“We’ve employed an Artificial Neural Network (ANN) approach to train and test models that differentiate LSA assemblages from MSA assemblages, as well as examining chronological differences between older (130-71 thousand years ago) and younger (71-28 thousand years ago) MSA assemblages with a 94% success rate,” says Dr. Matt Grove, an archaeologist at the University of Liverpool.

Artificial Neural Networks (ANNs) are computer models intended to mimic the salient features of information processing in the brain. Like the brain, their considerable processing power arises not from the complexity of any single unit but from the action of many simple units acting in parallel. Despite the widespread use of ANNs today, applications in archaeological research remain limited.
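As a rough illustration of the kind of model described here, the sketch below trains a small neural network on presence/absence data for 16 tool types. The data are randomly generated stand-ins with an artificial class structure, not the 92 assemblages analysed in the study, and the network architecture is an assumption for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: 92 "assemblages" x 16 binary (presence/absence) tool types.
# Class 1 ("LSA-like") is biased towards the first 8 types, class 0 ("MSA-like")
# towards the last 8. This structure is purely illustrative.
n_assemblages, n_types = 92, 16
labels = rng.integers(0, 2, n_assemblages)
probs = np.where(labels[:, None] == 1,
                 np.concatenate([np.full(8, 0.8), np.full(8, 0.2)]),
                 np.concatenate([np.full(8, 0.2), np.full(8, 0.8)]))
features = rng.binomial(1, probs)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

# A small feed-forward ANN, loosely in the spirit of the approach described above.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy on synthetic data: {clf.score(X_test, y_test):.2f}")
```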

“ANNs have sometimes been described as a ‘black box’ approach, as even when they are highly successful, it may not always be clear exactly why,” says Grove. “We employed a simulation approach that breaks open this black box to understand which inputs have a significant impact on the results. This enabled us to identify how patterns of stone tool assemblage composition vary between the MSA and LSA, and we hope this demonstrates how such methods can be used more widely in archaeological research in the future.”

“The results of our study show that MSA and LSA assemblages can be differentiated based on the constellation of artefact types found within an assemblage alone,” Blinkhorn adds. “The combined occurrence of backed pieces, blade and bipolar technologies together with the combined absence of core tools, Levallois flake technology, point technology and scrapers robustly identifies LSA assemblages, with the opposite pattern identifying MSA assemblages. Significantly, this provides quantified support to qualitative differences noted by earlier researchers that key typological changes do occur with this cultural transition.”

The team plans to expand the use of these methods to dig deeper into different regional trajectories of cultural change in the African Stone Age. “The approach we’ve employed offers a powerful toolkit to examine the categories we use to describe the archaeological record and to help us examine and explain cultural change amongst our ancestors,” says Blinkhorn.


10 Top Messaging APIs

In recent years, messaging has become a primary means of communication for much of the world. The asynchronous convenience of text messaging (SMS), web instant messaging, and in-app messaging has driven this rise in popularity, along with a slew of enticing features within messaging applications that keep us hooked.

Engaging features in messaging applications include cross-platform operation, artificial intelligence chatbots, anytime/anywhere usage thanks to WiFi or mobile network operators, file transfers, free international “calls,” business communications, audio messages, aggregated services, group chat, encryption, self-destructing messages, instant payments, and automatic alerts. Fun additions such as emoji, stickers, image & video support, avatar animations, “story” creation, games, cute bubbles and screen effects, contextual keyboards, and even handwritten text lure customers to messaging applications.

It’s not unusual to see applications with built-in custom messaging services, and developers who create applications have a vast array of choices for delivering messaging technology. To integrate with these services, developers need APIs.

What is a Messaging API?

A Messaging API, or Application Programming Interface, is a means for developers to connect to specific messaging services programmatically.

The best place to discover APIs for adding messaging capabilities to applications is the Messaging category of the ProgrammableWeb directory. This article highlights the 10 most popular messaging APIs, based on website traffic in the ProgrammableWeb directory.

1. Telegram

Telegram is a cloud-based mobile and desktop messaging app that focuses on speed and security. The Telegram API allows developers to build their own customized Telegram clients and applications. API methods are provided for dealing with spam and ToS violations, logging in via QR code, registration/authorization, working with GIFs, working with 2FA login, working with VoIP calls, working with deep links, working with files, and much more.

2. Bulk SMS Gateway API

The Bulk SMS Gateway API allows developers to integrate bulk SMS services into their applications and portals. This API is suited for sending both promotional and transactional SMS to clients. API documentation is not publicly available. This service is provided by KAPSYSTEM, a company in India that provides bulk SMS and messaging solutions.

3. WhatsApp Business API

The WhatsApp Business APIs allow businesses to interact with and reach customers all over the world, connecting them using end-to-end encryption to ensure only the intended parties can read or listen to messages and calls. A REST API and a Streaming (Webhooks) API are available.

4. Twilio SMS API

Twilio is a cloud communications platform that provides tools for adding messaging, voice, and video to web and mobile applications. The Twilio SMS API allows developers to send and receive SMS messages, track sent messages, and retrieve and modify message history from their applications. This API uses a RESTful interface over HTTPS.
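For a concrete sense of how such an API is used, the snippet below sends a single SMS with Twilio’s Python helper library. The account SID, auth token, and phone numbers are placeholders you would replace with your own credentials.

```python
# Minimal example using Twilio's Python helper library (pip install twilio).
# The credentials and phone numbers below are placeholders, not working values.
from twilio.rest import Client

account_sid = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"  # from the Twilio console
auth_token = "your_auth_token"

client = Client(account_sid, auth_token)
message = client.messages.create(
    body="Hello from the Twilio SMS API",
    from_="+15005550006",   # a Twilio-provided number
    to="+15558675309",      # the destination number
)
print(message.sid)  # unique identifier of the queued message
```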

5. BDApps Pro SMS API

BDApps is an application development platform that provides Robi network tools for monetization and messaging. The BDApps Pro SMS API allows developers to send and receive SMS using JSON objects over HTTP. This API can also be used to check the delivery status of sent SMS, receive SMS with a short code, and more. BDApps is based in Bangladesh.

6. Verizon ThingSpace SMS API

The Verizon ThingSpace SMS API lets applications send time-sensitive information to users’ phones about devices or sensor readings, such as temperature threshold warnings, gas leakage, smoke, fires, outages, and more. The ThingSpace SMS API allows users to check the delivery status of messages and receive other notifications about messaging.

7. Telenor SMS API

Telenor is a mobile carrier based in Norway. The Telenor SMS API provides access to the company’s text messaging service for business-to-business and business-to-consumer bulk messaging needs. The company provides short and whole numbers for sending and receiving text and MMS messages. There are various options for using the API, including SOAP and XMPP protocols.

8. waboxapp API

waboxapp is an API that allows users to integrate systems and Instant Messaging (IM) accounts. The waboxapp API simplifies the integration of IM accounts such as WhatsApp in chat applications.

9. Twitter Direct Message API

The Twitter Direct Message API allows developers to create engaging customer service and marketing experiences using Twitter Direct Messages (DM). Developers can send and receive direct messages, create welcome messages, attach media to messages, prompt users for structured replies, link to websites with buttons, manage conversations across multiple applications, display custom content, and prompt users for NPS and CSAT feedback with the API.

10. Mirrorfly API

Mirrorfly is a real-time chat and messaging solution. The Mirrorfly API allows developers to integrate chat, video, and voice functionality into their mobile and web applications. This service is customizable, comes with built-in WebRTC, and can be used for enterprise communication, in-app messaging, broadcasting, streaming, customer support, team chat, social chat, and personal chat. Both cloud-based and on-premises versions of Mirrorfly are available.

Image caption: Build custom chat applications with the MirrorFly API and SDK. Screenshot: MirrorFly.

See the Messaging category for more than 1,100 Messaging APIs, 1,000 SDKs, and 1,000 Source Code Samples, along with how-to and news articles and other developer resources.

Author: joyc


Warming Greenland ice sheet passes point of no return

Nearly 40 years of satellite data from Greenland shows that glaciers on the island have shrunk so much that even if global warming were to stop today, the ice sheet would continue shrinking.

The finding, published today, Aug. 13, in the journal Communications Earth & Environment, means that Greenland’s glaciers have passed a tipping point of sorts, where the snowfall that replenishes the ice sheet each year cannot keep up with the ice that is flowing into the ocean from glaciers.

“We’ve been looking at these remote sensing observations to study how ice discharge and accumulation have varied,” said Michalea King, lead author of the study and a researcher at The Ohio State University’s Byrd Polar and Climate Research Center. “And what we’ve found is that the ice that’s discharging into the ocean is far surpassing the snow that’s accumulating on the surface of the ice sheet.”

King and other researchers analyzed monthly satellite data from more than 200 large glaciers draining into the ocean around Greenland. Their observations show how much ice breaks off into icebergs or melts from the glaciers into the ocean. They also show the amount of snowfall each year — the way these glaciers get replenished.

The researchers found that, throughout the 1980s and ’90s, snow gained through accumulation and ice melted or calved from glaciers were mostly in balance, keeping the ice sheet intact. Through those decades, the researchers found, the ice sheet generally lost about 450 gigatons (about 450 billion tons) of ice each year from flowing outlet glaciers, which was replaced by snowfall.

“We are measuring the pulse of the ice sheet — how much ice glaciers drain at the edges of the ice sheet — which increases in the summer. And what we see is that it was relatively steady until a big increase in ice discharging to the ocean during a short five- to six-year period,” King said.

The researchers’ analysis found that the baseline of that pulse — the amount of ice being lost each year — started increasing steadily around 2000, so that the glaciers were losing about 500 gigatons each year. Snowfall did not increase at the same time, and over the last decade, the rate of ice loss from glaciers has stayed about the same — meaning the ice sheet has been losing ice more rapidly than it’s being replenished.
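The imbalance described here is simply accumulation minus discharge, and a mass deficit can be converted into a sea level contribution. The numbers in the sketch below are illustrative round figures taken from the article’s approximate values, and the conversion of roughly 362 gigatons of ice per millimeter of global mean sea level is a commonly used approximation, not a result of the study.

```python
# Illustrative ice sheet mass balance, using round numbers from the article.
accumulation_gt_per_yr = 450.0   # approximate snowfall input (gigatons per year)
discharge_gt_per_yr = 500.0      # approximate ice discharge after ~2000 (gigatons per year)
gt_per_mm_sea_level = 362.0      # rough conversion: gigatons of ice per mm of sea level

net_loss = discharge_gt_per_yr - accumulation_gt_per_yr   # Gt lost per year from this term
sea_level_mm_per_yr = net_loss / gt_per_mm_sea_level

print(f"net loss: {net_loss:.0f} Gt/yr "
      f"(~{sea_level_mm_per_yr:.2f} mm of sea level per year from this term alone)")
```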

“Glaciers have been sensitive to seasonal melt for as long as we’ve been able to observe it, with spikes in ice discharge in the summer,” she said. “But starting in 2000, you start superimposing that seasonal melt on a higher baseline — so you’re going to get even more losses.”

Before 2000, the ice sheet would have about the same chance to gain or lose mass each year. In the current climate, the ice sheet will gain mass in only one out of every 100 years.

King said that large glaciers across Greenland have retreated about 3 kilometers on average since 1985 — “that’s a lot of distance,” she said. The glaciers have shrunk back enough that many of them are sitting in deeper water, meaning more ice is in contact with water. Warm ocean water melts glacier ice, and also makes it difficult for the glaciers to grow back to their previous positions.

That means that even if humans were somehow miraculously able to stop climate change in its tracks, ice lost from glaciers draining ice to the ocean would likely still exceed ice gained from snow accumulation, and the ice sheet would continue to shrink for some time.

“Glacier retreat has knocked the dynamics of the whole ice sheet into a constant state of loss,” said Ian Howat, a co-author on the paper, professor of earth sciences and distinguished university scholar at Ohio State. “Even if the climate were to stay the same or even get a little colder, the ice sheet would still be losing mass.”

Shrinking glaciers in Greenland are a problem for the entire planet. The ice that melts or breaks off from Greenland’s ice sheet ends up in the Atlantic Ocean — and, eventually, all of the world’s oceans. Ice from Greenland is a leading contributor to sea level rise — last year, enough ice melted or broke off from the Greenland ice sheet to cause the oceans to rise by 2.2 millimeters in just two months.

The new findings are bleak, but King said there are silver linings.

“It’s always a positive thing to learn more about glacier environments, because we can only improve our predictions for how rapidly things will change in the future,” she said. “And that can only help us with adaptation and mitigation strategies. The more we know, the better we can prepare.”

This work was supported by grants from NASA. Other Ohio State researchers who worked on this study are Salvatore Candela, Myoung Noh and Adelaide Negrete.


Mix of contaminants in Fukushima wastewater, risks of ocean dumping

Nearly 10 years after the Tohoku-oki earthquake and tsunami devastated Japan’s Fukushima Dai-ichi Nuclear Power Plant and triggered an unprecedented release of radioactivity into the ocean, radiation levels have fallen to safe levels in all but the waters closest to the shuttered power plant. Today, fish and other seafood caught in waters beyond all but a limited region have been found to be well within Japan’s strict limits for radioactive contamination, but a new and growing hazard exists: the storage tanks on land surrounding the power plant that hold contaminated wastewater. An article published August 8 in the journal Science takes a look at some of the many radioactive elements contained in the tanks and suggests that more needs to be done to understand the potential risks of releasing wastewater from the tanks into the ocean.

“We’ve watched over the past nine-plus years as the levels of radioactive cesium have declined in seawater and in marine life in the Pacific,” said Ken Buesseler, a marine chemist at the Woods Hole Oceanographic Institution and author of the new paper. “But there are quite a few radioactive contaminants still in those tanks that we need to think about, some of which were not seen in large amounts in 2011, but most importantly, they don’t all act the same in the ocean.”

Since 2011, Buesseler has been studying the spread of radiation from Fukushima into and across the Pacific. In June of that year, he mobilized a team of scientists to conduct the first international research cruise to study the early pathways that cesium-134 and -137, two radioactive isotopes of cesium produced in reactors, were taking as they entered the powerful Kuroshio Current off the coast of Japan. He has also built a network of citizen scientists in the U.S. and Canada who have helped monitor the arrival and movement of radioactive material on the Pacific coast of North America.

Now, he is more concerned about the more than 1,000 tanks on the grounds of the power plant that are filling with groundwater and cooling water contaminated through contact with the reactors and their containment buildings. Sophisticated cleaning processes have been able to remove many radioactive isotopes, and efforts to divert groundwater flows around the reactors have greatly reduced the amount of contaminated water being collected to less than 200 metric tons per day. But some estimates see the tanks being filled in the near future, leading some Japanese officials to suggest that treated water should be released into the ocean to free up space for more wastewater.

One of the radioactive isotopes that remains at the highest levels in the treated water, and that would be released, is tritium, an isotope of hydrogen that is almost impossible to remove because it becomes part of the water molecule itself. However, tritium has a relatively short half-life (the time it takes for half of an isotope to decay), is not absorbed as easily by marine life or seafloor sediments, and produces beta particles, which are not as damaging to living tissue as other forms of radiation. Isotopes that remain in the treated wastewater include carbon-14, cobalt-60, and strontium-90. These and the other isotopes that remain, which were only revealed in 2018, all take much longer to decay and have much greater affinities for seafloor sediments and marine organisms like fish, which means they could be potentially hazardous to humans and the environment for much longer and in more complex ways than tritium.
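To make “relatively short half-life” concrete: tritium decays with a half-life of about 12.3 years. The snippet below computes the fraction of an initial tritium inventory remaining after a given number of years; it is a generic decay calculation, not an assessment of the Fukushima tanks.

```python
# Fraction of a tritium inventory remaining after t years (half-life ~12.3 years).
HALF_LIFE_YEARS = 12.3

def fraction_remaining(years, half_life=HALF_LIFE_YEARS):
    """Exponential radioactive decay: N(t)/N0 = 2^(-t / half_life)."""
    return 2.0 ** (-years / half_life)

for years in (10, 25, 50, 100):
    print(f"after {years:3d} years: {fraction_remaining(years):.1%} remaining")
```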

“The current focus on tritium in the wastewater holding tanks ignores the presence other radioactive isotopes in the wastewater,” said Buesseler. “It’s a hard problem, but it’s solvable. The first step is to clean up those additional radioactive contaminants that remain in the tanks, and then make plans based on what remains. Any option that involves ocean releases would need independent groups keeping track of all of the potential contaminants in seawater, the seafloor, and marine life. The health of the ocean — and the livelihoods of countless people — rely on this being done right.”

Story Source:

Materials provided by Woods Hole Oceanographic Institution. Note: Content may be edited for style and length.


Randomness theory could hold key to internet security

Is there an unbreakable code? That question has been central to cryptography for thousands of years, and lies at the heart of efforts to secure private information on the internet. In a new paper, Cornell Tech researchers identified a problem that holds the key to whether all encryption can be broken — as well as a surprising connection to a mathematical concept that aims to define and measure randomness.

“Our result not only shows that cryptography has a natural ‘mother’ problem, it also shows a deep connection between two quite separate areas of mathematics and computer science — cryptography and algorithmic information theory,” said Rafael Pass, professor of computer science at Cornell Tech.

Pass is co-author of “On One-Way Functions and Kolmogorov Complexity,” which will be presented at the IEEE Symposium on Foundations of Computer Science, to be held Nov. 16-19 in Durham, North Carolina.

“The result,” he said, “is that a natural computational problem introduced in the 1960s in the Soviet Union characterizes the feasibility of basic cryptography — private-key encryption, digital signatures and authentication, for example.”

For millennia, cryptography was considered a cycle: Someone invented a code, the code was effective until someone eventually broke it, and the code became ineffective. In the 1970s, researchers seeking a better theory of cryptography introduced the concept of the one-way function — an easy task or problem in one direction that is impossible in the other.

For example, it’s easy to light a match, but impossible to return a burning match to its unlit state without rearranging its atoms — an immensely difficult task.

“The idea was, if we have such a one-way function, maybe that’s a very good starting point for understanding cryptography,” Pass said. “Encrypting the message is very easy. And if you have the key, you can also decrypt it. But someone who doesn’t know the key should have to do the same thing as restoring a lit match.”

But researchers have not been able to prove the existence of a one-way function. The most well-known candidate — which is also the basis of the most commonly used encryption schemes on the internet — relies on integer factorization. It’s easy to multiply two random prime numbers — for instance, 23 and 47 — but significantly harder to find those two factors if only given their product, 1,081.
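The asymmetry is easy to see in code: multiplying is one line, while even the product of two small primes already requires a search to factor. The trial division below works for toy numbers like 1,081 but becomes hopeless at cryptographic sizes, which is exactly why factoring is a candidate one-way function. This is a generic illustration, not the construction from the paper.

```python
# Multiplying two primes is trivial; recovering them from the product requires search.
p, q = 23, 47
n = p * q             # 1081, the "easy" direction

def factor(n):
    """Naive trial division: fine for toy numbers, useless for cryptographic sizes."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1        # n is prime

print(factor(n))       # (23, 47), the "hard" direction, found by brute-force search
```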

It is believed that no efficient factoring algorithm exists for large numbers, Pass said, though researchers may not have found the right algorithms yet.

“The central question we’re addressing is: Does it exist? Is there some natural problem that characterizes the existence of one-way functions?” he said. “If it does, that’s the mother of all problems, and if you have a way to solve that problem, you can break all purported one-way functions. And if you don’t know how to solve that problem, you can actually get secure cryptography.”

Meanwhile, mathematicians in the 1960s identified what’s known as Kolmogorov Complexity, which refers to quantifying the amount of randomness or pattern of a string of numbers. The Kolmogorov Complexity of a string of numbers is defined as the length of the shortest computer program that can generate the string; for some strings, such as 121212121212121212121212121212, there is a short program that generates it — alternate 1s and 2s. But for more complicated and apparently random strings of numbers, such as 37539017332840393452954329, there may not exist a program that is shorter than the length of the string itself.
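In programming terms, the repetitive string has a generating program much shorter than the string itself, while an apparently random string may have no description shorter than simply writing it out. The snippet below is only an informal illustration: Kolmogorov Complexity is defined over programs in a fixed universal language and is uncomputable in general.

```python
# A highly patterned string has a short generating program...
patterned = "12" * 15   # a repetitive string in the spirit of the article's example
print(len('"12" * 15'), "characters of program vs", len(patterned), "characters of output")

# ...whereas for an apparently random string, the shortest known "program"
# may be little more than a literal copy of the string itself.
random_looking = "37539017332840393452954329"
print(len(repr(random_looking)), "characters of program vs",
      len(random_looking), "characters of output")
```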

The problem has long interested mathematicians and computer scientists, including Juris Hartmanis, professor emeritus of computer science and engineering. Because the computer program attempting to generate the number could take millions or even billions of years, researchers in the Soviet Union in the 1960s, as well as Hartmanis and others in the 1980s, developed the time-bounded Kolmogorov Complexity — the length of the shortest program that can output a string of numbers in a certain amount of time.

In the paper, Pass and doctoral student Yanyi Liu showed that if computing time-bounded Kolmogorov Complexity is hard, then one-way functions exist.

Although their finding is theoretical, it has potential implications across cryptography, including internet security.

“If you can come up with an algorithm to solve the time-bounded Kolmogorov complexity problem, then you can break all crypto, all encryption schemes, all digital signatures,” Pass said. “However, if no efficient algorithm exists to solve this problem, you can get a one-way function, and therefore you can get secure encryption and digital signatures and so forth.”

The research was funded in part by the National Science Foundation and the Air Force Office of Scientific Research, and was based on research funded by the Intelligence Advanced Research Projects Activity in the Office of the Director of National Intelligence.


NSW Health Pathology Aided by APIs in Response to COVID-19

Over the past four years, NSW Health Pathology has invested heavily in API-led connectivity. With coverage from Justin Hendry, writing for iTnews, we learn how this investment paved the way for the agency to pivot handily to building public-facing services during the coronavirus pandemic.

At this year’s MuleSoft CONNECT digital summit, enterprise architect Tim Eckersley spoke about the agency’s rapid response early in the pandemic, crediting the agency’s “large library of healthcare microservices.” This library is one of the projects developed over the past four years, designed to allow “seamless integration between a very broad range of healthcare systems…each wave of delivery built up a groundswell of microservices…the reusable components gradually take a much more dominant posture and provide a really solid launching place to have this rapid response.”

Disclosure: MuleSoft is the parent company of ProgrammableWeb.

The NSW agency is the largest public provider of pathology in Australia. Eckersley leads the agency’s DevOps team. He credits their architectural approach with allowing the agency to build out and launch a results-delivery bot in just two weeks.

Eckersley explains, “In terms of what we’ve been able to achieve with MuleSoft, we’ve used it to integrate our four laboratory information systems, which are our core systems of record in the background, with the greater health system…So that’s the eMRs [electronic medical records] or the eHRs [electronic health records], depending on if you’re in Australia or the United States, as well as the outpatients administration systems.”

Eckersley’s team developed their automated, citizen-facing service in the first weeks of the pandemic, working in partnership with AWS, Deloitte, and Microsoft. This group approach shaved off a tremendous amount of work time: Eckersley credits the strategy with returning “5,000 days of effort back to clinical frontline staff.” The service will also work to “tie those [systems] together with our federal systems, so things like the My Health Record and the national cancer screening registry.”

The service is projected to return test results in as little as 24 hours – much more quickly than in other parts of the world. It was initially piloted with a few regional clinics before rolling out to the rest of the state. The simple, approachable service is easy for participants, Eckersley explains. “All [patients need to do when they go to get a nasal swab taken] is scan a QR code and it immediately pops open a text message of ‘what are my results?’ to our text bot service…and then that text bot requests that [the patient] put in identifying information, as well as the date their collection was taken, and it will instantly give them the results as soon as they become available.”

A key facet of the strategy focuses on catching fringe cases, using a bot that integrates with healthcare systems (such as Cerner and Auslab) and a Jira service desk, which allows automated ticket creation. This collection enables the service to keep notifications within a three-day response window. The four-year process of building a library of microservices is the foundation that enabled the agency to hit the ground running with a delivery window of two weeks.

Eckersley breaks down the more technical elements of their process: “[By] taking an HL7 message, using the MuleSoft HL7 adapters and then connecting it up with cloud infrastructure like Azure service bus for messaging, we’ve been able to make a state-scaled solution really quickly which can pick up the millions of messages that we get running through the state in any given week and handle them in an API-led way…So we take that message in HL7, we convert it to XML, and then we push it through our process API layer…at that point, it is converted into a range of different FHIR [Fast Healthcare Interoperability Resources].”
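As a generic illustration of the kind of transformation described here (and not NSW Health Pathology’s actual MuleSoft flow), the sketch below parses a single HL7 v2 OBX result segment and re-expresses it as a minimal FHIR-style Observation resource. The sample segment is a made-up haemoglobin result; the field positions follow the standard HL7 v2 layout, and the mapping is deliberately simplified.

```python
import json

# A single, simplified HL7 v2 OBX (observation result) segment.
# Fields are pipe-delimited: OBX-2 value type, OBX-3 code, OBX-5 value, OBX-6 units.
obx = "OBX|1|NM|718-7^Hemoglobin^LN||13.5|g/dL|||||F"

fields = obx.split("|")
code, display, system = fields[3].split("^")

# A minimal FHIR-style Observation resource (heavily simplified; a real mapping
# carries patient identifiers, timestamps, reference ranges, and more).
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"code": code, "display": display}]},
    "valueQuantity": {"value": float(fields[5]), "unit": fields[6]},
}
print(json.dumps(observation, indent=2))
```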

Converting to FHIR empowers the agency to use a NoSQL database like Cosmos at hyperscale: information can be stored there, and experience APIs can be presented to agency web and mobile apps (as well as those belonging to their partners). The agency is currently shifting all MuleSoft services piecemeal to Kubernetes, with the idea that a slow shift will reduce risk and allow detailed prioritization of which apps move when.

Also presenting at the 2020 MuleSoft CONNECT digital event was NSW Health Pathology CIO James Patterson, who praised the strategy of reusing as many components as possible, explaining that it reduced the creation of “technical debt”:

“Even where we’ve had things like a billing project that’s using MuleSoft integration to bring data from our legacy systems into our more modern systems, we’ve been able to pick up components of that previous project and reuse them to build these new services. Where we’ve had legacy, we’ve had to build things from scratch in our modern integration environment, and obviously that takes longer and takes more effort…we’re creating a situation where we’re removing technical debt as we go through the crisis, and I think that’s been really centered around our strategy with MuleSoft.”

Patterson credits the upheaval of the pandemic with forcing the adoption of agile practices; pre-pandemic, agile practices made up just 10% of the work time at NSW Health Pathology. He praises the shift, musing that “I think the opportunity is now there to introduce that way of working into all of our work or most of our work, which will really enhance the experience of our customers internally.”

Author: Katherine-Harrison-Adcock