Categories
ProgrammableWeb

Postman Launches Postman Public Workspaces to Enable Collaborative API Design

Postman, an API design platform provider, has announced the launch of Postman Public Workspaces in an effort to accelerate API development practices through collaboration at a massive scale. This new initiative takes inspiration from the world of massively multiplayer gaming and leverages the platform’s userbase of more than 13 million developers.

Workspaces have been a part of Postman for some time now, allowing developers to share Postman components with collaborators and organize their API work. This functionality was previously limited to team access, but with the addition of Public Workspaces, developers will be able to benefit from Postman’s entire userbase. The company believes that this level of collaboration will fundamentally shift the way APIs are designed. The product announcement noted:

“Postman’s public workspaces will let a massive community of users engage with APIs or collections organized by API producers, with the ultimate quest of improving every API and the experience of that API’s consumers.”

Additionally, the announcement noted that Public Workspaces are not limited in scope to APIs or companies. To kick off the launch, Postman has created a series of workspaces that includes expected offerings such as the Postman public workspace, alongside less obvious options like the US 2020 Election public workspace.

Go to Source
Author: KevinSundstrom

Categories
ScienceDaily

AI speeds up development of new high-entropy alloys

Developing new materials takes a lot of time, money and effort. Recently, a POSTECH research team has taken a step closer to creating new materials by applying AI to the development of high-entropy alloys (HEAs), which have been dubbed the “alloy of alloys.”

A joint research team led by Professor Seungchul Lee, Ph.D. candidate Soo Young Lee, Professor Hyungyu Jin and Ph.D. candidate Seokyeong Byeon of the Department of Mechanical Engineering, along with Professor Hyoung Seop Kim of the Department of Materials Science and Engineering, has developed a technique for phase prediction of HEAs using AI. The findings were published in the latest issue of Materials and Design, an international journal on materials science.

Metal materials are conventionally made by mixing the principal element for the desired property with two or three auxiliary elements. In contrast, HEAs are made with equal or similar proportions of five or more elements without a principal element. The types of alloys that can be made like this are theoretically infinite and have exceptional mechanical, thermal, physical, and chemical properties. Alloys resistant to corrosion or extremely low temperatures, and high-strength alloys have already been discovered.
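For readers wondering where the “high entropy” in the name comes from: the standard criterion in the HEA literature is the ideal configurational entropy of mixing, which is maximized for equiatomic compositions:

$$\Delta S_{\mathrm{conf}} = -R \sum_{i=1}^{n} x_i \ln x_i, \qquad x_i = \frac{1}{n} \;\Rightarrow\; \Delta S_{\mathrm{conf}} = R \ln n$$

An equiatomic five-element alloy thus has $\Delta S_{\mathrm{conf}} = R \ln 5 \approx 1.61R$, above the $1.5R$ threshold commonly used to call an alloy “high entropy.”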

However, until now, designing new high-entropy alloy materials has relied on trial and error, demanding considerable time and money. It has been even more difficult to determine in advance the phase and the mechanical and thermal properties of the alloy being developed.

To address this, the joint research team focused on developing prediction models for HEAs with enhanced phase-prediction accuracy and explainability using deep learning. They applied deep learning from three perspectives: model optimization, data generation and parameter analysis. In particular, the focus was on building a data-enhancing model based on a conditional generative adversarial network. This allowed the AI models to reflect samples of HEAs that have not yet been discovered, improving phase-prediction accuracy compared to conventional methods.
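To make the data-generation idea concrete, here is a minimal PyTorch sketch of a conditional GAN for tabular alloy data. It is a sketch under stated assumptions, not the published model: the feature and label dimensions, layer sizes, and the choice of descriptors are all illustrative.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not the paper's): `n_features`
# could be elemental fractions plus thermodynamic descriptors, and
# `n_phases` the number of phase classes used for conditioning.
n_features, n_phases, latent_dim = 16, 3, 32

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_phases, 64), nn.ReLU(),
            nn.Linear(64, n_features),
        )

    def forward(self, z, phase_onehot):
        # Conditioning: concatenate the desired phase label to the noise,
        # so samples can be generated for a chosen (even rare) phase.
        return self.net(torch.cat([z, phase_onehot], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features + n_phases, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, x, phase_onehot):
        # Judges whether a (sample, phase) pair looks like real data.
        return self.net(torch.cat([x, phase_onehot], dim=1))

# After adversarial training, synthetic samples drawn from the generator
# for each phase can be mixed into the phase classifier's training set.
```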

In addition, the research team developed a descriptive AI-based HEA phase-prediction model to provide interpretability to deep-learning models, which otherwise act as black boxes, while also providing guidance on the key design parameters for creating HEAs with particular phases.
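The article does not specify the interpretability technique, but the goal, ranking design parameters by their influence on the predicted phase, can be illustrated with a generic stand-in. The sketch below uses scikit-learn's permutation importance on synthetic data; the feature names are hypothetical alloy-design descriptors, not values from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data: 4 hypothetical design descriptors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # toy phase label

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure
# how much the model's score degrades.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["dS_mix", "dH_mix", "delta_r", "VEC"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```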

“This research is the result of drastically improving the limitations of existing research by incorporating AI into HEAs that have recently been drawing much attention,” remarked Professor Seungchul Lee. He added, “It is significant that the joint research team’s multidisciplinary collaboration has produced the results that can accelerate AI-based fabrication of new materials.”

Professor Hyungyu Jin also added, “The results of the study are expected to greatly reduce the time and cost required for the existing new material development process, and to be actively used to develop new high-entropy alloys in the future.”

Story Source:

Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ProgrammableWeb

Microsoft to Require Token-Based Authentication on GitHub and Visual Studio

Microsoft has announced that, in an effort to provide increased security, GitHub and Visual Studio users will no longer be able to use account passwords for API authentication. Beginning on November 13th, 2020, the company will require token-based authentication for all interactions that require authentication.

As a result of these changes, Microsoft noted that Git credential helpers will no longer be able to create new access tokens or authenticate users for GitHub operations using a username and password. The company has now released new versions of Visual Studio 2017 and Visual Studio 2019 that include support for Git Credential Manager Core (GCM Core), a new secure Git credential helper. This new helper, which supports OAuth token-based authentication, is designed to make the transition as seamless as possible.
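For developers who hit this change in scripts rather than in Visual Studio, the switch amounts to sending a personal access token instead of a password. A minimal sketch in Python (the token value is a placeholder; a real one is generated in GitHub's developer settings):

```python
import requests

# Placeholder token: create a real personal access token (PAT) under
# GitHub Settings -> Developer settings -> Personal access tokens.
TOKEN = "ghp_XXXXXXXXXXXXXXXXXXXX"

# Token-based authentication replaces username/password for the REST API.
resp = requests.get(
    "https://api.github.com/user",
    headers={
        "Authorization": f"token {TOKEN}",
        "Accept": "application/vnd.github.v3+json",
    },
)
resp.raise_for_status()
print(resp.json()["login"])  # the account the token belongs to
```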

Additionally, users that are unable to update to the latest version of these products can reference the GCM Core GitHub page to find information on a workaround. 

Go to Source
Author: KevinSundstrom

Categories
ProgrammableWeb

Facebook Begins Rollout of Data Use Checkup to Facebook Platform Developers

In an effort to further protect user privacy, and given past failures in this area, Facebook has recently simplified the company’s platform terms and developer policies in hopes that this will improve adherence to guidelines. To support these goals Facebook has announced the rollout of Data Use Checkup, an annual process for developers that validates data usage.

This new process, which is supported by a self-service tool, was first announced in April of 2020 and will require developers to check each application they manage for adherence to company standards. Developers will have 60 days to comply before losing access to APIs.

The rollout of this program will be gradual, and developers will begin to be notified over the next several months. The announcement of the rollout notes that developers will be notified “via a developer alert, an email to the registered contact, and in your Task List within the App Dashboard.” To simplify the process for developers that manage multiple apps, Facebook is allowing batch processing via an interface that facilitates this action, although developers will still be required to check each app’s permissions.

Developers can check the App Dashboard to verify if they are able to enroll in the program at this time. 

Go to Source
Author: KevinSundstrom

Categories
ScienceDaily

A step forward in solving the reactor-neutrino flux problem

A joint effort of the nuclear theory group at the University of Jyvaskyla and the international EXO-200 experiment paves the way for solving the reactor antineutrino flux problem. The EXO-200 collaboration consists of researchers from 26 laboratories, and the experiment is designed to measure the mass of the neutrino. As a by-product of the experiment’s calibration efforts, the electron spectral shape of the beta decay of Xe-137 could be measured. This particular decay is ideally suited for testing a theoretical hypothesis proposed to solve the long-standing and persistent reactor antineutrino anomaly. The results of the spectral-shape measurements were published in Physical Review Letters (June 2020).

Nuclear reactors are driven by fissioning uranium and plutonium fuel. The neutron-rich fission products decay by beta decay towards the beta-stability line by emitting electrons and electron antineutrinos. Each beta decay produces a continuous energy spectrum for the emitted electrons and antineutrinos up to a maximum energy (beta end-point energy).

The number of emitted electrons at each electron energy constitutes the electron spectral shape, and its complement describes the antineutrino spectral shape.

Nuclear reactors emit antineutrinos with an energy distribution that is the sum of the antineutrino spectral shapes of all the beta decays occurring in the reactor. This energy distribution has been measured by large neutrino-oscillation experiments. It has also been constructed from the available nuclear data on the beta decays of the fission products.
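Schematically, the summation underlying such constructions reads:

$$S_{\bar{\nu}}(E_{\bar{\nu}}) = \sum_i w_i \, s_i(E_{\bar{\nu}}), \qquad E_{\bar{\nu}} \approx E_0^{(i)} - E_e$$

where $s_i$ is the normalized antineutrino spectral shape of beta branch $i$, $w_i$ its weight (fission yield times branching ratio), $E_0^{(i)}$ the branch end-point energy, and the tiny nuclear recoil is neglected.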

The established reference for this construction is the Huber-Mueller (HM) model. Comparison of the HM-predicted antineutrino energy spectrum with that measured by the oscillation experiments revealed a deficit in the number of measured antineutrinos, as well as a “bump,” an extra increase in the measured number of antineutrinos between 4 and 7 MeV of antineutrino energy. The deficit was coined the reactor antineutrino anomaly, or the flux anomaly, and has been associated with the oscillation of ordinary neutrinos into so-called sterile neutrinos, which do not interact with ordinary matter and thus disappear from the antineutrino flux emitted by the reactors. Until recently there was no convincing explanation for the appearance of the bump in the measured antineutrino flux.

Only recently has a potential explanation for the flux anomaly and the bump been discussed quantitatively. Both could be associated with the omission of accurate spectral shapes of so-called first-forbidden non-unique beta decays, which were taken into account for the first time in the so-called “HKSS” flux model (named for the first letters of the surnames of its authors, L. Hayen, J. Kostensalo, N. Severijns and J. Suhonen).

How can one verify that the HKSS flux and bump predictions are reliable?

“One way is to measure the spectral shapes of the key transitions and compare with the HKSS predictions. These measurements are extremely hard but recently a perfect test case could be measured by the renowned EXO-200 collaboration and comparison with our theory group’s predictions could be achieved in a joint publication [AlKharusi2020]. A perfect match of the measured and theory-predicted spectral shape was obtained, thus supporting the HKSS calculations and its conclusions. Further measurements of spectral shapes of other transitions could be anticipated in the (near) future,” says Professor Jouni Suhonen from the Department of Physics at the University of Jyvaskyla.

Story Source:

Materials provided by University of Jyväskylä – Jyväskylän yliopisto. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ProgrammableWeb

Utah State Government Finds Apple, Google Exposure Notification API Insufficient

Apple and Google recently announced the initial rollout of their joint Exposure Notification API. The API, which enables cross-platform contact tracing functionality, has been greeted with varying degrees of acceptance. Some jurisdictions, including my home state of Washington, are jumping headfirst into incorporating the functionality. Others, not so much.

The most recent jurisdiction to forgo the Exposure Notification API is Utah. The state still plans to implement a contact tracing application; however, its effort, named Healthy Together, will not utilize the Exposure Notification API and is being developed by social media startup Twenty. Healthy Together seems like a completely innocuous application at first glance, and in many ways it is. Users take a daily symptom analysis test and, when the application determines they could have contracted SARS-CoV-2, are directed to the closest testing location. However, some believe that the amount of information the application requires to achieve this is excessive.

The Healthy Together application uses GPS location data and Bluetooth not only to provide information on nearby testing locations but also to help identify individuals who may have come into contact with a user suspected of having COVID-19. The Utah State website explains the need for this data by stating that “Bluetooth on its own gives a less accurate picture than bluetooth and GPS location data. The goal of Healthy Together is to allow public health officials to understand how the disease spreads through the vector of people and places, and both location and bluetooth data are needed to accomplish that.”

At face value this argument makes sense, but it is also true that an accurate picture of the disease’s spread requires a high percentage of community buy-in. It remains to be seen what share of Utah’s population will be willing to provide this level of location data; betting on more-data-is-better could prove an unnecessary gamble as people begin to tire of government intervention in their lives.

Go to Source
Author: KevinSundstrom

Categories
ProgrammableWeb

Apple, Google Join Forces to Build Cross-Platform COVID-19 Contact Tracing Tech

Apple and Google today announced a joint effort to create Bluetooth-enabled technology, in addition to cross-platform APIs, that will allow for global contact tracing via Android and iOS devices. The companies note that the initiative will operate on an opt-in basis and is being designed with user privacy in mind.

Apple’s announcement of the partnership states that this initiative is meant to act as an extension of efforts already underway by global health authorities:

“… public health officials have identified contact tracing as a valuable tool to help contain its spread. A number of leading public health authorities, universities, and NGOs around the world have been doing important work to develop opt-in contact tracing technology.”

The initial plan is for Apple and Google to develop APIs that enable interoperability between Android and iOS devices. This interoperability will be limited to communication between applications that are developed by partnering health authorities and target contact tracing initiatives. This initial API-centric phase is expected to be introduced sometime in May 2020. 
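To illustrate the privacy-minded design pattern behind such contact tracing APIs, here is a simplified Python sketch of rotating Bluetooth identifiers derived from an on-device daily key. This is not the Apple/Google specification (their draft documents define the actual cryptographic scheme); the primitives, names, and interval length here are illustrative assumptions.

```python
import hashlib
import hmac
import os

# ASSUMPTION: simplified illustration, not the Apple/Google crypto spec.

def new_daily_key() -> bytes:
    # A fresh random key is generated each day and kept on the device.
    return os.urandom(16)

def rolling_id(day_key: bytes, interval: int) -> bytes:
    # The broadcast identifier changes every interval (e.g. ~15 minutes)
    # so a device cannot be tracked by a fixed Bluetooth beacon value.
    msg = interval.to_bytes(4, "big")
    return hmac.new(day_key, msg, hashlib.sha256).digest()[:16]

# Phones broadcast rolling_id(...) over Bluetooth LE and record the IDs
# they hear nearby. A user who tests positive uploads only their daily
# keys; other phones re-derive the rolling IDs locally and check for
# matches, so identities and locations never leave the device.
key = new_daily_key()
print(rolling_id(key, 0).hex())
```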

The two companies are also working on a broader effort that will result in a Bluetooth-based contact tracing platform that is integrated directly into the underlying platforms. This second effort is meant to expand access to tracing initiatives and create an ecosystem of apps and government health authorities. Importantly, Apple had this to say about its intention to ensure data privacy: 

“… Privacy, transparency, and consent are of utmost importance in this effort, and we look forward to building this functionality in consultation with interested stakeholders. We will openly publish information about our work for others to analyze.”

Anyone interested in the gritty details can check out the co-published draft technical documentation.

Go to Source
Author: KevinSundstrom

Categories
3D Printing Industry

Researchers use inkjet 3D printing to create gold 3D images

In an effort to advance biomedical sensors, material scientists from the University of Seville, Spain, and the University of Nottingham have created a 3D printed image using nanoparticles of stabilized gold. As stated in the research published in Nature, gold nanoparticles are not printable by themselves but provide biocompatible properties in fields such as diagnostics. For example, electrochemical […]

Go to Source
Author: Tia Vialva

Categories
ScienceDaily

How anti-sprawl policies may be harming water quality

Urban growth boundaries are created by governments in an effort to concentrate urban development — buildings, roads and the utilities that support them — within a defined area. These boundaries are intended to decrease negative impacts on people and the environment. However, according to a Penn State researcher, policies that aim to reduce urban sprawl may be increasing water pollution.

“What we were interested in was whether the combination of sprawl — or lack of sprawl — along with simultaneous agriculture development in suburban and rural areas could lead to increased water-quality damages,” said Douglas Wrenn, a co-funded faculty member in the Institutes of Energy and the Environment.

These water quality damages were due to pollution from nitrogen, phosphorus and sediment, three ingredients that in high quantities can cause numerous environmental problems in streams, rivers and bays. Under the EPA’s Clean Water Act (CWA), total maximum daily loads (TMDLs) govern how much of these pollutants is allowed in a body of water while still meeting water-quality standards.
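For reference, a TMDL is conventionally written as a budget that divides the allowable pollutant load among source categories:

$$\text{TMDL} = \sum \text{WLA} + \sum \text{LA} + \text{MOS}$$

where the wasteload allocations (WLA) cover point sources, the load allocations (LA) cover nonpoint sources such as agriculture, and MOS is a margin of safety. This accounting is why the point/nonpoint distinction discussed below matters.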

According to Wrenn, an associate professor in Penn State’s College of Agricultural Sciences, one of the reasons anti-sprawl policies can lead to more water pollution is because higher-density development has more impervious surfaces, such as concrete. These surfaces don’t absorb water but cause runoff. The water then flows into bodies of water, bringing sediment, nitrogen and phosphorus with it.

Second, agriculture creates considerably more water pollution than low-density residential development. When development outside the boundaries is prevented from replacing agricultural land, the pollution reduction that such replacement would have delivered is lost.

“If you concentrate development inside an urban growth boundary and allow agriculture to continue business as usual,” Wrenn said, “then you could actually end with anti-sprawl policies that lead to an increase in overall water quality damages.”

Wrenn said it is important for land-use planners in urban areas and especially in urbanizing and urban-fringe counties to understand this.

The EPA’s water quality regulation is divided between point source and nonpoint source polluters. Point source polluters include wastewater treatment facilities, big factories, consolidated animal feeding operations and stormwater management systems. Nonpoint sources are essentially everything else, and the CWA does not regulate them; agriculture falls in this category.

“When it comes to meeting TMDL regulations, point source polluters will always end up being responsible,” he said. “They are legally bound to basically do it all.”

Wrenn said point source polluters are very interested in getting nonpoint source polluters, specifically agriculture, involved in reducing pollution because their cost of reduction is usually far lower and the reductions are often more achievable.

“What our research has shown is that land-use regulation where land-use planners have some ability to manage where and when land-use development takes place, this gives some indication that land-use policy can be a helper or a hindrance to meeting these TMDL regulations,” Wrenn said.

Story Source:

Materials provided by Penn State. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

Creating learning resources for blind students

Mathematics and science Braille textbooks are expensive and require an enormous effort to produce — until now. A team of researchers has developed a method for easily creating textbooks in Braille, with an initial focus on math textbooks. The new process is made possible by a new authoring system which serves as a “universal translator” for textbook formats, combined with enhancements to the standard method for putting mathematics in a Web page. Basing the new work on established systems will ensure that the production of Braille textbooks will become easy, inexpensive, and widespread.

“This project is about equity and equal access to knowledge,” said Martha Siegel, a Professor Emerita from Towson University in Maryland. Siegel met a blind student who needed a statistics textbook for a required course. The book was ordered but took six months (and several thousand dollars) to prepare, causing the student significant delay in her studies. Siegel and Al Maneki, a retired NSA mathematician who serves as senior STEM advisor to the National Federation of the Blind and who is blind himself, decided to do something about it.

“Given the amazing technology available today, we thought it would be easy to piece together existing tools into an automated process,” said Alexei Kolesnikov. Kolesnikov, a colleague of Siegel at Towson University, was recruited to the project in the Summer of 2018. Automating the process is the key, because currently Braille books are created by skilled people retyping from the printed version, which involves considerable time and cost. Converting the words is easy: Braille is just another alphabet. The hard part is conveying the structure of the book in a non-visual way, converting the mathematics formulas, and converting the graphs and diagrams.
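“Braille is just another alphabet” can be made literal in a few lines of code. The sketch below builds the Grade 1 (uncontracted) letter map from the standard dot numbers and emits Unicode Braille cells; real textbook conversion, with contractions, Nemeth math and document structure, is far harder, which is the point of the project.

```python
# Illustrative only: Grade 1 (uncontracted) literary Braille letters,
# built from their standard dot numbers. Real textbook conversion
# (contractions, Nemeth math, structure) is far more involved.
DOTS = {
    "a": [1], "b": [1, 2], "c": [1, 4], "d": [1, 4, 5], "e": [1, 5],
    "f": [1, 2, 4], "g": [1, 2, 4, 5], "h": [1, 2, 5], "i": [2, 4],
    "j": [2, 4, 5],
}
for src, dst in zip("abcdefghij", "klmnopqrst"):
    DOTS[dst] = DOTS[src] + [3]          # k-t: a-j plus dot 3
for src, dst in zip("abcde", "uvxyz"):
    DOTS[dst] = DOTS[src] + [3, 6]       # u,v,x,y,z: a-e plus dots 3, 6
DOTS["w"] = [2, 4, 5, 6]                 # w is the historical exception

def to_braille(text: str) -> str:
    # Unicode Braille patterns start at U+2800; bit n-1 encodes dot n.
    out = []
    for ch in text.lower():
        dots = DOTS.get(ch)
        out.append(chr(0x2800 + sum(1 << (d - 1) for d in dots))
                   if dots else ch)
    return "".join(out)

print(to_braille("braille"))  # -> ⠃⠗⠁⠊⠇⠇⠑
```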

The collaboration that solved the problem was formed in January 2019 with the help of the American Institute of Mathematics, through its connections in the math research and math education communities.

“Mathematics teachers who have worked with visually impaired students understand the unique challenges they face,” said Henry Warchall, Senior Adviser in the Division of Mathematical Sciences at the National Science Foundation, which funds the American Institute of Mathematics. “By developing an automated way to create Braille mathematics textbooks, this project is making mathematics significantly more accessible, advancing NSF’s goal of broadening participation in the nation’s scientific enterprise.”

There are three main problems to solve when producing a Braille version of a textbook. First is the overall structure. A typical textbook uses visual clues to indicate chapters, sections, captions, and other landmarks. In Braille all the letters are the same size and shape, so these structural elements are described with special symbols. The other key issues are accurately conveying complicated mathematics formulas, and providing a non-visual way to represent graphs and diagrams.

The first problem was solved by a system developed by team member Rob Beezer, a math professor at the University of Puget Sound in Washington. Beezer sees this work as a natural extension of a dream he has been pursuing for several years. “We have been developing a system for writing textbooks which automatically produces print versions as well as online, EPUB, Jupyter, and other formats. Our mantra is Write once, read anywhere.” Beezer added Braille as an output format in his system, which is called PreTeXt. Approximately 100 books have been written in PreTeXt, all of which can now be converted to Braille.

Math formulas are represented using the Nemeth Braille Code, initially developed by the blind mathematician Abraham Nemeth in the 1950s. The Nemeth Braille in this project is produced by MathJax, a standard package for displaying math formulas on web pages. Team member Volker Sorge, of the School of Computer Science at the University of Birmingham, noted, “We have made great progress in having MathJax produce accessible math content on the Web, so the conversion to Braille was a natural extension of that work.” Sorge is a member of the MathJax consortium and the sole developer of Speech Rule Engine, the system that is at the core of the Nemeth translation and provides accessibility features in MathJax and other online tools.

“Some people have the mistaken notion that online versions and screen readers eliminate the need for Braille,” commented project co-leader Al Maneki. Sighted learners need to spend time staring at formulas, looking back and forth and comparing different parts. In the same way, a Braille formula enables a person to touch and compare various pieces. Having the computer pronounce a formula for you is not adequate for a blind reader, any more than it would be adequate for a sighted reader.

It will be particularly useful for visually impaired students to have simultaneous access to both the printed Braille and an online version.

Graphs and diagrams remain uniquely challenging to represent non-visually. Many of the usual tools for presenting information, such as color, line thickness and shading, are not available in tactile graphics. The tips of our fingers have a much lower resolution than our eyes, so the image has to be larger (yet still fit on the page). Labels included in the picture have to be translated to Braille and placed so that they do not interfere with the drawn lines. Diagrams that show three-dimensional shapes are particularly hard to “read” in a tactile format. Ongoing work will automate the process of converting images to tactile graphics.

This work is part of a growing effort to create high-quality free textbooks. Many of the textbooks authored with PreTeXt are available at no cost in highly interactive online versions, in addition to traditional PDF and printed versions. Having Braille as an additional format, produced automatically, will make these inexpensive textbooks also available to blind students.

The group has begun discussions with professional organizations to incorporate Braille output into the production system for their publications.

Details of this work will be announced during three talks on Thursday, January 16, 2020, at the Joint Mathematics Meetings in Denver, Colorado.

Further information:

The structural components are handled by the PreTeXt authoring system. Rob Beezer, a math professor at the University of Puget Sound in Washington, is the inventor of PreTeXt and also developed the enhancements to PreTeXt which were required for this project.

The Braille math formulas are handled by MathJax, a system originally designed for displaying math formulas on a web page. Volker Sorge, a Reader in Scientific Document Analysis in the School of Computer Science at the University of Birmingham in the UK, is the lead developer for adding accessibility features to MathJax, including the recent enhancements for producing Nemeth Braille.

The production of tactile images is the most difficult problem faced in producing Braille textbooks. Alexei Kolesnikov, a math professor at Towson University in Maryland, is the lead developer for the image processing in this project. Ongoing work, including a workshop at the American Institute of Mathematics in August 2020, will create new ways of describing images, with the goal of automating the production of non-visual representations.

Go to Source
Author: