Facebook Begins Rollout of Data Use Checkup to Facebook Platform Developers

In an effort to further protect user privacy after past failures in this area, Facebook recently simplified its platform terms and developer policies in hopes of improving adherence to its guidelines. To support these goals, Facebook has announced the rollout of Data Use Checkup, an annual process in which developers validate their data usage.

This new process, which is supported by a self-service tool, was first announced in April 2020 and will require developers to check each application they manage for adherence to company standards. Developers will have 60 days to comply before losing access to Facebook's APIs.

The rollout of this program will be gradual, and developers will be notified over the next several months. The announcement of the rollout notes that developers will be notified “via a developer alert, an email to the registered contact, and in your Task List within the App Dashboard.” To simplify the process for developers who manage multiple apps, Facebook is providing an interface that supports batch processing, although developers will still be required to review each app's permissions.

Developers can check the App Dashboard to verify if they are able to enroll in the program at this time. 

Author: KevinSundstrom


A step forward in solving the reactor-neutrino flux problem

A joint effort of the nuclear theory group at the University of Jyvaskyla and the international EXO-200 collaboration paves the way for solving the reactor antineutrino flux problem. The EXO-200 collaboration consists of researchers from 26 laboratories, and the experiment is designed to measure the mass of the neutrino. As a by-product of the experiment's calibration efforts, the electron spectral shape of the beta decay of Xe-137 could be measured. This particular decay is ideally suited for testing a theoretical hypothesis aimed at solving the long-standing and persistent reactor antineutrino anomaly. The results of the spectral-shape measurements were published in Physical Review Letters (June 2020).

Nuclear reactors are driven by fissioning uranium and plutonium fuel. The neutron-rich fission products decay by beta decay towards the beta-stability line by emitting electrons and electron antineutrinos. Each beta decay produces a continuous energy spectrum for the emitted electrons and antineutrinos up to a maximum energy (beta end-point energy).

The number of emitted electrons at each electron energy constitutes the electron spectral shape, and its complement describes the antineutrino spectral shape.
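This complement relation can be made concrete with a small numerical sketch. Everything here is schematic: the end-point energy is invented, and the Fermi function and forbiddenness corrections that shape real spectra are deliberately omitted.

```python
import numpy as np

ME = 0.511  # electron rest mass, MeV
Q = 4.0     # invented beta end-point energy, MeV

def electron_spectrum(E_kin):
    """Allowed-decay phase-space shape ~ p * E_tot * (Q - E_kin)^2."""
    E_tot = E_kin + ME                # total electron energy
    p = np.sqrt(E_tot**2 - ME**2)    # electron momentum
    return p * E_tot * (Q - E_kin) ** 2

E = np.linspace(0.0, Q, 1001)
electrons = electron_spectrum(E)
# Energy conservation splits Q between the pair, E_nu = Q - E_kin,
# so the antineutrino spectrum is the electron spectrum read backwards:
antineutrinos = electron_spectrum(Q - E)
assert np.allclose(antineutrinos, electrons[::-1])
```

Because each decay shares the end-point energy between the electron and the antineutrino, any correction to the electron spectral shape, such as the first-forbidden corrections discussed below, directly reshapes the predicted antineutrino spectrum.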

Nuclear reactors emit antineutrinos with an energy distribution that is the sum of the antineutrino spectral shapes of all the beta decays occurring in the reactor. This energy distribution has been measured by large neutrino-oscillation experiments. On the other hand, this energy distribution of antineutrinos has also been constructed from the available nuclear data on the beta decays of the fission products.

The established reference for this construction is the Huber-Mueller (HM) model. Comparison of the HM-predicted antineutrino energy spectrum with that measured by the oscillation experiments revealed a deficit in the number of measured antineutrinos and an additional “bump,” an extra increase in the measured number of antineutrinos between 4 and 7 MeV of antineutrino energy. The deficit was dubbed the reactor antineutrino anomaly, or the flux anomaly, and has been associated with the oscillation of ordinary neutrinos into so-called sterile neutrinos, which do not interact with ordinary matter and thus disappear from the antineutrino flux emitted by the reactors. Until recently there was no convincing explanation for the appearance of the bump in the measured antineutrino flux.

Only recently has a potential explanation for the flux anomaly and the bump been discussed quantitatively. The flux deficit and the bump could be associated with the omission of accurate spectral shapes of the so-called first-forbidden non-unique beta decays, which were taken into account for the first time in the so-called “HKSS” flux model (named for the first letters of the surnames of the authors of the related article: L. Hayen, J. Kostensalo, N. Severijns, and J. Suhonen).

How can one verify that the HKSS flux and bump predictions are reliable?

“One way is to measure the spectral shapes of the key transitions and compare with the HKSS predictions. These measurements are extremely hard but recently a perfect test case could be measured by the renowned EXO-200 collaboration and comparison with our theory group’s predictions could be achieved in a joint publication [AlKharusi2020]. A perfect match of the measured and theory-predicted spectral shape was obtained, thus supporting the HKSS calculations and its conclusions. Further measurements of spectral shapes of other transitions could be anticipated in the (near) future,” says Professor Jouni Suhonen from the Department of Physics at the University of Jyvaskyla.

Story Source:

Materials provided by University of Jyväskylä – Jyväskylän yliopisto. Note: Content may be edited for style and length.



Utah State Government Finds Apple, Google Exposure Notification API Insufficient

Apple and Google recently announced the initial rollout of their joint Exposure Notification API. The API, which enables cross-platform contact tracing functionality, has been greeted with varying degrees of acceptance. Some governments, including my home state of Washington, are jumping headfirst into incorporating the functionality. Others, not so much.

The most recent to forgo the Exposure Notification API is Utah. The state still plans on implementing a contact tracing application; however, its effort, named Healthy Together, will not utilize the Exposure Notification API and is being developed by social media startup Twenty. Healthy Together seems like a completely innocuous application at first glance, and in many ways it is. Users take a daily symptom analysis test and, when necessary, are directed to the closest testing location if the application determines they could have contracted SARS-CoV-2. However, some believe that the amount of information the application requires to achieve this is unnecessary.

The Healthy Together application uses GPS location data and Bluetooth not only to provide information on nearby testing locations but also to help identify individuals who may have come into contact with a user suspected of having COVID-19. The Utah state website explains the need for this data by stating that “Bluetooth on its own gives a less accurate picture than bluetooth and GPS location data. The goal of Healthy Together is to allow public health officials to understand how the disease spreads through the vector of people and places, and both location and bluetooth data are needed to accomplish that.”

At face value this argument makes sense, but it is also true that an accurate picture of the spread of the disease requires a high percentage of community buy-in. It remains to be seen what percentage of Utah's population will be willing to provide this level of location data; however, betting on more-data-is-better could prove an unnecessary gamble as people begin to tire of government intervention in their lives.

Author: KevinSundstrom


Apple, Google Join Forces to Build Cross-Platform COVID-19 Contact Tracing Tech

Apple and Google today announced a joint effort to create Bluetooth-enabled technology, in addition to cross-platform APIs, that will allow for global contact tracing via Android and iOS devices. The companies note that the initiative will operate on an opt-in basis and is being designed with user privacy in mind.

Apple’s announcement of the partnership states that this initiative is meant to act as an extension of efforts already underway by global health authorities:

“… public health officials have identified contact tracing as a valuable tool to help contain its spread. A number of leading public health authorities, universities, and NGOs around the world have been doing important work to develop opt-in contact tracing technology.”

The initial plan is for Apple and Google to develop APIs that enable interoperability between Android and iOS devices. This interoperability will be limited to communication between applications that are developed by partnering health authorities and target contact tracing initiatives. This initial API-centric phase is expected to be introduced sometime in May 2020. 
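For readers curious how privacy-preserving proximity beacons can work in principle, here is a toy sketch of rotating identifiers derived from a device-local daily key. This is a simplified illustration only; the actual Exposure Notification cryptography described in the companies' draft documentation uses different primitives and key schedules.

```python
# Toy sketch: rotating proximity identifiers from a device-local daily key.
# NOT the published Exposure Notification spec, just the general idea.
import hashlib
import hmac
import os
import time

daily_key = os.urandom(16)  # secret that never leaves the device

def rolling_id(key: bytes, interval: int) -> bytes:
    """Derive a short, unlinkable identifier for one broadcast interval."""
    mac = hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256)
    return mac.digest()[:16]

interval = int(time.time() // 600)  # ~10-minute windows
beacon = rolling_id(daily_key, interval)

# Identifiers change every interval; without the daily key, an observer
# cannot link one interval's beacon to the next.
assert rolling_id(daily_key, interval) == beacon
assert rolling_id(daily_key, interval + 1) != beacon
```

A phone broadcasting such identifiers over Bluetooth reveals nothing on its own; only if a user later tests positive and publishes their keys can nearby devices recompute the identifiers and check for matches locally.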

The two companies are also working on a broader effort that will result in a Bluetooth-based contact tracing platform that is integrated directly into the underlying platforms. This second effort is meant to expand access to tracing initiatives and create an ecosystem of apps and government health authorities. Importantly, Apple had this to say about its intention to ensure data privacy: 

“… Privacy, transparency, and consent are of utmost importance in this effort, and we look forward to building this functionality in consultation with interested stakeholders. We will openly publish information about our work for others to analyze.”

Anyone interested in the gritty details can check out the co-published draft technical documentation.

Author: KevinSundstrom

3D Printing Industry

Researchers use inkjet 3D printing to create gold 3D images

In an effort to advance biomedical sensors, material scientists from the University of Seville, Spain, and the University of Nottingham have created a 3D printed image using nanoparticles of stabilized gold. As stated by the research published in Nature, gold nanoparticles themselves are not printable but provide biocompatible properties in fields such as diagnostics. For example, electrochemical […]

Author: Tia Vialva


How anti-sprawl policies may be harming water quality

Urban growth boundaries are created by governments in an effort to concentrate urban development — buildings, roads and the utilities that support them — within a defined area. These boundaries are intended to decrease negative impacts on people and the environment. However, according to a Penn State researcher, policies that aim to reduce urban sprawl may be increasing water pollution.

“What we were interested in was whether the combination of sprawl — or lack of sprawl — along with simultaneous agriculture development in suburban and rural areas could lead to increased water-quality damages,” said Douglas Wrenn, a co-funded faculty member in the Institutes of Energy and the Environment.

These water quality damages were due to pollution from nitrogen, phosphorus and sediment, three pollutants that in high quantities can cause numerous environmental problems in streams, rivers and bays. Under the EPA's Clean Water Act (CWA), total maximum daily loads (TMDLs) govern how much of these pollutants is allowed in a body of water while still meeting water-quality standards.
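As a rough illustration of how a TMDL budget is partitioned among point sources, nonpoint sources, and a margin of safety, here is a toy calculation; all the numbers are invented.

```python
# Toy TMDL budget with invented numbers (kg/day of, say, phosphorus).
# A TMDL is the sum of wasteload allocations (regulated point sources),
# load allocations (unregulated nonpoint sources), and a margin of safety.
wasteload_allocations = {"wastewater_plant": 120.0, "stormwater_system": 40.0}
load_allocations = {"agriculture": 200.0, "septic_systems": 30.0}
margin_of_safety = 25.0

tmdl = (sum(wasteload_allocations.values())
        + sum(load_allocations.values())
        + margin_of_safety)
print(tmdl)  # 415.0
```

Because only the wasteload side is federally enforceable, any shortfall on the nonpoint side effectively falls to the point sources, which is the dynamic Wrenn describes below.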

According to Wrenn, an associate professor in Penn State’s College of Agricultural Sciences, one of the reasons anti-sprawl policies can lead to more water pollution is because higher-density development has more impervious surfaces, such as concrete. These surfaces don’t absorb water but cause runoff. The water then flows into bodies of water, bringing sediment, nitrogen and phosphorus with it.

Secondly, agriculture creates considerably more water pollution than low-density residential areas. And when development that could replace agriculture outside the boundaries is prevented, the potential pollution reduction is lost.

“If you concentrate development inside an urban growth boundary and allow agriculture to continue business as usual,” Wrenn said, “then you could actually end up with anti-sprawl policies that lead to an increase in overall water quality damages.”

Wrenn said it is important for land-use planners in urban areas and especially in urbanizing and urban-fringe counties to understand this.

The EPA’s water quality regulation is divided between point source and nonpoint source polluters. Point source polluters include wastewater treatment facilities, big factories, consolidated animal feeding operations and stormwater management systems. Nonpoint sources are essentially everything else. And the CWA does not regulate nonpoint sources, which include agriculture.

“When it comes to meeting TMDL regulations, point source polluters will always end up being responsible,” he said. “They are legally bound to basically do it all.”

Wrenn said point source polluters are very interested in getting nonpoint source polluters, specifically agriculture, involved in reducing pollution because their cost of reduction is usually far lower and oftentimes more achievable.

“What our research has shown is that where land-use planners have some ability to manage where and when land-use development takes place, land-use policy can be a helper or a hindrance to meeting these TMDL regulations,” Wrenn said.

Story Source:

Materials provided by Penn State. Note: Content may be edited for style and length.



Creating learning resources for blind students

Mathematics and science Braille textbooks are expensive and require an enormous effort to produce — until now. A team of researchers has developed a method for easily creating textbooks in Braille, with an initial focus on math textbooks. The new process is made possible by a new authoring system which serves as a “universal translator” for textbook formats, combined with enhancements to the standard method for putting mathematics in a Web page. Basing the new work on established systems will ensure that the production of Braille textbooks will become easy, inexpensive, and widespread.

“This project is about equity and equal access to knowledge,” said Martha Siegel, a Professor Emerita from Towson University in Maryland. Siegel met a blind student who needed a statistics textbook for a required course. The book was ordered but took six months (and several thousand dollars) to prepare, causing the student significant delay in her studies. Siegel and Al Maneki, a retired NSA mathematician who serves as senior STEM advisor to the National Federation of the Blind and who is blind himself, decided to do something about it.

“Given the amazing technology available today, we thought it would be easy to piece together existing tools into an automated process,” said Alexei Kolesnikov. Kolesnikov, a colleague of Siegel at Towson University, was recruited to the project in the Summer of 2018. Automating the process is the key, because currently Braille books are created by skilled people retyping from the printed version, which involves considerable time and cost. Converting the words is easy: Braille is just another alphabet. The hard part is conveying the structure of the book in a non-visual way, converting the mathematics formulas, and converting the graphs and diagrams.
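The “Braille is just another alphabet” step really is mechanical, as a short sketch using the Unicode Braille Patterns block shows. This covers Grade 1 letters only; real transcription software also handles numbers, punctuation, and contractions.

```python
# Grade 1 Braille, letters only. Dots for the first decade, a-j;
# the other rows are derived from them by adding dots 3 and 6.
AJ = {"a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
      "f": (1, 2, 4), "g": (1, 2, 4, 5), "h": (1, 2, 5), "i": (2, 4),
      "j": (2, 4, 5)}

def cell(dots):
    # Unicode Braille Patterns: U+2800 plus one bit per raised dot.
    return chr(0x2800 + sum(1 << (d - 1) for d in dots))

table = {c: cell(d) for c, d in AJ.items()}
for i, c in enumerate("klmnopqrst"):   # k-t: a-j plus dot 3
    table[c] = cell(AJ["abcdefghij"[i]] + (3,))
for i, c in enumerate("uvxyz"):        # u-z (except w): a-e plus dots 3, 6
    table[c] = cell(AJ["abcde"[i]] + (3, 6))
table["w"] = cell((2, 4, 5, 6))        # w postdates the original French code

def to_braille(text):
    return "".join(table.get(ch, ch) for ch in text.lower())

print(to_braille("math"))  # ⠍⠁⠞⠓
```

A table lookup like this is the easy part the researchers describe; the project's hard problems, document structure, formulas, and diagrams, begin where this sketch ends.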

The collaboration that solved the problem was formed in January 2019, with the help of the American Institute of Mathematics, through its connections in the math research and math education communities.

“Mathematics teachers who have worked with visually impaired students understand the unique challenges they face,” said Henry Warchall, Senior Adviser in the Division of Mathematical Sciences at the National Science Foundation, which funds the American Institute of Mathematics. “By developing an automated way to create Braille mathematics textbooks, this project is making mathematics significantly more accessible, advancing NSF’s goal of broadening participation in the nation’s scientific enterprise.”

There are three main problems to solve when producing a Braille version of a textbook. First is the overall structure. A typical textbook uses visual clues to indicate chapters, sections, captions, and other landmarks. In Braille all the letters are the same size and shape, so these structural elements are described with special symbols. The other key issues are accurately conveying complicated mathematics formulas, and providing a non-visual way to represent graphs and diagrams.

The first problem was solved by a system developed by team member Rob Beezer, a math professor at the University of Puget Sound in Washington. Beezer sees this work as a natural extension of a dream he has been pursuing for several years. “We have been developing a system for writing textbooks which automatically produces print versions as well as online, EPUB, Jupyter, and other formats. Our mantra is Write once, read anywhere.” Beezer added Braille as an output format in his system, which is called PreTeXt. Approximately 100 books have been written in PreTeXt, all of which can now be converted to Braille.

Math formulas are represented using the Nemeth Braille Code, initially developed by the blind mathematician Abraham Nemeth in the 1950s. The Nemeth Braille in this project is produced by MathJax, a standard package for displaying math formulas on web pages. Team member Volker Sorge, of the School of Computer Science at the University of Birmingham, noted, “We have made great progress in having MathJax produce accessible math content on the Web, so the conversion to Braille was a natural extension of that work.” Sorge is a member of the MathJax consortium and the sole developer of Speech Rule Engine, the system that is at the core of the Nemeth translation and provides accessibility features in MathJax and other online tools.

“Some people have the mistaken notion that online versions and screen readers eliminate the need for Braille,” commented project co-leader Al Maneki. Sighted learners need to spend time staring at formulas, looking back and forth and comparing different parts. In the same way, a Braille formula enables a person to touch and compare various pieces. Having the computer pronounce a formula for you is not adequate for a blind reader, any more than it would be adequate for a sighted reader.

It will be particularly useful for visually impaired students to have simultaneous access to both the printed Braille and an online version.

Graphs and diagrams remain a unique challenge to represent non-visually. Many of the usual tools for presenting information, such as color, line thickness, and shading, are not available in tactile graphics. The tips of our fingers have a much lower resolution than our eyes, so the image has to be bigger (yet still fit on the page). Labels included in the picture have to be translated to Braille and placed so that they do not interfere with the drawn lines. Diagrams that show three-dimensional shapes are particularly hard to “read” in a tactile format. Ongoing work will automate the process of converting images to tactile graphics.

This work is part of a growing effort to create high-quality free textbooks. Many of the textbooks authored with PreTeXt are available at no cost in highly interactive online versions, in addition to traditional PDF and printed versions. Having Braille as an additional format, produced automatically, will make these inexpensive textbooks also available to blind students.

The group has begun discussions with professional organizations to incorporate Braille output into the production system for their publications.

Details of this work will be announced during three talks on Thursday, January 16, 2020, at the conference Joint Mathematics Meetings in Denver, Colorado.

Further information:

The structural components are handled by the PreTeXt authoring system. Rob Beezer, a math professor at the University of Puget Sound in Washington, is the inventor of PreTeXt and also developed the enhancements to PreTeXt which were required for this project.

The Braille math formulas are handled by MathJax, a system originally designed for displaying math formulas on a web page. Volker Sorge, a Reader in Scientific Document Analysis in the School of Computer Science at the University of Birmingham in the UK, is the lead developer for adding accessibility features to MathJax, including the recent enhancements for producing Nemeth Braille.

The production of tactile images is the most difficult problem faced in producing Braille textbooks. Alexei Kolesnikov, a math professor at Towson University in Maryland, is the lead developer for the image processing in this project. Ongoing work, including a workshop at the American Institute of Mathematics in August 2020, will create new ways for describing images with the goal of automating the production of non-visual representations.


IEEE Spectrum

Back To The Elusive Future

In the January issue, Spectrum’s editors make every effort to bring the coming year’s important technologies to your attention. Some we get right, others less so. Twelve years ago, IEEE Fellow, Marconi Prize winner, and beloved Spectrum columnist Robert W. Lucky wrote about the difficulty of predicting the technological future. We’ve reprinted his wise words here.

Why are we engineers so bad at making predictions?

In countless panel discussions on the future of technology, I’m not sure I ever got anything right. As I look back on technological progress, I experience first retrospective surprise, then surprise that I’m surprised, because it all crept up on me when I wasn’t looking. How can something like Google feel so inevitable and yet be impossible to predict?

I’m filled with wonder at all that we engineers have accomplished, and I take great communal pride in how we’ve changed the world in so many ways. Decades ago I never dreamed we would have satellite navigation, computers in our pockets, the Internet, cellphones, or robots that would explore Mars. How did all this happen, and what are we doing for our next trick?

The software pioneer Alan Kay has said that the best way to predict the future is to invent it, and that’s what we’ve been busy doing. The public understands that we’re creating the future, but they think that we know what we’re doing and that there’s a master plan in there somewhere. However, the world evolves haphazardly, bumbling along in unforeseen directions. Some seemingly great inventions just don’t take hold, while overlooked innovations proliferate, and still others are used in unpredicted ways.

When I joined Bell Labs, so many years ago, there were two great development projects under way that together were to shape the future—the Picturephone and the millimeter waveguide. The waveguide was an empty pipe, about 5 centimeters in diameter, that would carry across the country the 6-megahertz analog signals from those ubiquitous Picturephones.

Needless to say, this was an alternative future that never happened. Our technological landscape is littered with such failed bets. For decades engineers would say that the future of communications was video telephony. Now that we can have it for free, not many people even want it.

The millimeter waveguide never happened either. Out of the blue, optical fiber came along, and that was that. Oh, and analog didn’t last. Gordon Moore made his observation about integrated-circuit progress in the midst of this period, but of course we had a hard time believing it.

Analog switching overstayed its tenure because engineers didn’t quite believe the irresistible economics of Moore’s Law. Most engineers used the Internet in the early years and knew it was growing at an exponential rate. But, no, it would never grow up to be a big, reliable, commercial network.

The irony at Bell Labs is that we had some of the finest engineers in the world then, working on things like the integrated circuit and the Internet—in other words, engineers who were responsible for many of the innovations that upset the very future they and their associates had been working on. This is the way the future often evolves: Looking back, you say, “We should have known” or “We knew, but we didn’t believe.” And at the same time we were ignoring the exponential trends that were all around us, we hyped glamorous technologies like artificial intelligence and neural networks.

Yogi Berra, who should probably be in the National Academy of Sciences as well as the National Baseball Hall of Fame, once said, “It’s tough making predictions, especially about the future.” We aren’t even good at making predictions about the present, let alone the future.

Journalists are sometimes better than engineers about seeing the latent future embedded in the present. I often read articles telling me that there is a trend where a lot of people are doing this or that. I raise my eyebrows in mild surprise. I didn’t realize a lot of people were doing this or that. Perhaps something is afoot, and an amorphous social network is unconsciously shaping the future of technology.

Well, we’ve made a lot of misguided predictions in the past. But we’ve learned from those mistakes. Now we know. The future lies in quantum computers. And electronics will be a thing of the past, since we’ll be using optical processing. All this is just right around the corner. 

Reprinted from IEEE Spectrum, Vol. 45, September 2008.


Bio-inspired theoretical research may make robots more effective on the future battlefield

In an effort to make robots more effective and versatile teammates for Soldiers in combat, Army researchers are on a mission to understand the molecular living functionality of muscle and the fundamental mechanics that would need to be replicated in order to artificially achieve the capabilities arising from the proteins responsible for muscle contraction.

Bionanomotors, like myosins that move along actin networks, are responsible for most methods of motion in all life forms. Thus, the development of artificial nanomotors could be game-changing in the field of robotics research.

Researchers from the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory have been looking to identify a design that would allow the artificial nanomotor to take advantage of Brownian motion, the property of particles to agitatedly move simply because they are warm.

The CCDC ARL researchers believe understanding and developing these fundamental mechanics are a necessary foundational step toward making informed decisions on the viability of new directions in robotics involving the blending of synthetic biology, robotics, and dynamics and controls engineering.

The Journal of Biomechanical Engineering recently featured their research.

“By controlling the stiffness of different geometrical features of a simple lever-arm design, we found that we could use Brownian motion to make the nanomotor more capable of reaching desirable positions for creating linear motion,” said Dean Culver, a researcher in CCDC ARL’s Vehicle Technology Directorate. “This nano-scale feature translates to more energetically efficient actuation at a macro scale, meaning robots that can do more for the warfighter over a longer amount of time.”

According to Culver, descriptions of the protein interactions in muscle contraction are typically fairly high-level. Rather than describing the forces that act on an individual protein as it seeks its counterpart, the research community has replicated this biomechanical process with prescribed or empirical rate functions that dictate the conditions under which a binding or release event occurs.

“These widely accepted muscle contraction models are akin to a black-box understanding of a car engine,” Culver said. “More gas, more power. It weighs this much and takes up this much space. Combustion is involved. But, you can’t design a car engine with that kind of surface-level information. You need to understand how the pistons work, and how finely injection needs to be tuned. That’s a component-level understanding of the engine. We dive into the component-level mechanics of the built-up protein system and show the design and control value of living functionality as well as a clearer understanding of design parameters that would be key to synthetically reproducing such living functionality.”

Culver stated that ARL has illustrated, at a component level, the capacity for Brownian motion to kick a tethered particle from a disadvantageous elastic position to an advantageous one in terms of energy production for a molecular motor, a crucial step in the design of artificial nanomotors that offer the same performance capabilities as biological ones.
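The component-level behavior described here can be sketched with a toy overdamped Langevin simulation of a tethered particle. All parameter values below are invented for illustration; this is not ARL's model, just the generic physics of a thermal kick carrying a tethered particle to a target position.

```python
import numpy as np

rng = np.random.default_rng(0)
kBT = 4.1e-21    # thermal energy at room temperature, J
gamma = 9.4e-11  # Stokes drag for a ~10 nm bead in water, kg/s (assumed)
k = 1e-4         # tether stiffness, N/m (invented)
dt = 1e-9        # time step, s
D = kBT / gamma  # diffusion coefficient via the Einstein relation

threshold = 2e-9  # "advantageous" position, m (invented)
x, crossings = 0.0, 0
for xi in rng.standard_normal(200_000):
    # Overdamped Langevin step: elastic restoring drift plus thermal kick.
    x += (-k * x / gamma) * dt + np.sqrt(2 * D * dt) * xi
    if x >= threshold:
        crossings += 1
        x = 0.0  # the motor "binds" and the cycle restarts

print(crossings > 0)  # thermal kicks alone reach the target position
```

No external force drives the particle; the tether merely biases where the thermal agitation delivers it, which is the sense in which Brownian motion can be "harnessed" rather than fought.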

“This research adds a key piece of the puzzle for fast, versatile robots that can perform autonomous tactical maneuver and reconnaissance functions,” Culver said. “These models will be integral to the design of distributed actuators that are silent, low thermal signature and efficient — features that will make these robots more impactful in the field.”

Culver noted that the actuators are silent because muscles make little noise when they actuate, especially compared to motors or servos; cold because a muscle generates far less heat than a comparable motor; and efficient because of the advantages of the distributed chemical energy model and potential escape via Brownian motion.

According to Culver, the breadth of applications for actuators inspired by the biomolecular machines in animal muscles is still unknown, but many of the existing application spaces have clear Army applications such as bio-inspired robotics, nanomachines and energy harvesting.

“Fundamental and exploratory research in this area is therefore a wise investment for our future warfighter capabilities,” Culver said.

Moving forward, there are two primary extensions of this research.

“First, we need to better understand how molecules, like the tethered particle discussed in our paper, interact with each other in more complicated environments,” Culver said. “In the paper, we see how a tethered particle can usefully harness Brownian motion to benefit the contraction of the muscle overall, but the particle in this first model is in an idealized environment. In our bodies, it’s submerged in a fluid carrying many different ions and energy-bearing molecules in solution. That’s the last piece of the puzzle for the single-motor, nano-scale models of molecular motors.”

The second extension, stated Culver, is to repeat this study with a full 3-D model, paving the way to scaling up to practical designs.

Also notable is the fact that because this research is so young, ARL researchers used this project to establish relationships with other investigators in the academic community.

“Leaning on their expertise will be critical in the years to come, and we’ve done a great job of reaching out to faculty members and researchers from places like the University of Washington, Duke University and Carnegie Mellon University,” Culver said.

According to Culver, taking this research project into the next steps with help from collaborative partners will lead to tremendous capabilities for future Soldiers in combat, a critical requirement considering the nature of the ever-changing battlefield.


IEEE Spectrum

Universal Robots Introduces Its Strongest Robotic Arm Yet

Universal Robots, already the dominant force in collaborative robots, is flexing its muscles in an effort to further expand its reach in the cobots market. The Danish company is introducing today the UR16e, its strongest robotic arm yet, with a payload capability of 16 kilograms (35.3 lbs), a reach of 900 millimeters, and a repeatability of ±0.05 mm. Universal says the new “heavy duty payload cobot” will allow customers to automate a broader range of processes, including packaging and palletizing, nut and screw driving, and high-payload and CNC machine tending.