Four Tips Developers Should Follow When Building Location-Based Apps

As the Internet continues to evolve and mature, so do development tools and the technologists who use them. But what does a more mature approach to application development look like today? How do application developers manage growing complexity, and deliver stellar experiences for their users? 

Apps that use location data present extra challenges. Users expect seamless experiences with up-to-date, reliable data – a significant design and engineering challenge. To help developers, here are my four best practices for building location-based apps.

Ease the burden with APIs

Developers building an application or feature that relies on external or third-party data often face the problem of having to download large datasets from which the relevant data must be extracted. Geospatial information, for example, is often delivered as sizable datasets that must be manually processed, stored, managed and updated whenever the provider releases a new version. This technical overhead demands a significant investment of developer time, making many potentially valuable datasets infeasible to use.

A much easier and more targeted solution is to consume the data that’s needed, when it’s needed, which is where APIs come into play. Not only are APIs often a more efficient way for users to consume data, they also tend to lower the total cost of building an app. The API provider hosts the systems that contain the relevant data and takes charge of the updates and management, freeing up time for developers to focus on other tasks. APIs also help ensure that the most up-to-date data is being consumed, which can often define the entire proposition of an app, especially if it’s one that relies on accurate location information. 

Mapping data visualizations, for example, often only fulfill their purpose if they reflect the most recent data. APIs can ease the burden on developers by offering access to reliable, well-maintained data sources.
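
To make the targeted, on-demand pattern concrete, here is a minimal sketch in JavaScript. The endpoint and its query parameters are hypothetical – real providers document their own – but the shape is typical: request only the features inside the current map view rather than downloading the provider's full dataset.

```javascript
// Hypothetical API base URL – a real provider would document its own.
const API_BASE = "https://api.example.com/v1/features";

// Build a bounding-box query URL: bounds are [west, south, east, north] in degrees.
function buildBboxQuery(base, [west, south, east, north]) {
  const params = new URLSearchParams({
    bbox: [west, south, east, north].join(","),
    format: "geojson",
  });
  return `${base}?${params}`;
}

// Fetch GeoJSON for the visible area only (global fetch: browsers, Node 18+).
async function fetchFeaturesInView(bounds) {
  const response = await fetch(buildBboxQuery(API_BASE, bounds));
  if (!response.ok) throw new Error(`API error: ${response.status}`);
  return response.json(); // a GeoJSON FeatureCollection
}
```

Because the provider hosts and maintains the data, each request returns the current state of exactly the area the user is looking at – no local copy to store or refresh.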

Validate before you integrate  

Application development today necessitates the use of a variety of datasets, many of which come in incompatible formats. In fact, the chance that all required datasets are interoperable is very low, which can mean a significant amount of heavy lifting at the data integration stage. When it comes to geospatial data on the web, for example, the de facto standard for vector features is GeoJSON. However, the spatial data developers need might arrive as shapefiles, which are designed for GIS applications, or in other formats such as Geography Markup Language (GML), KML, GPX, encoded polylines, vector tiles and so on – each requiring special tools to work with.

Researching datasets, formats and libraries is not a controversial tip by any means, but it bears emphasizing. Planning and validating use cases before breaking ground on code will save time, cost and sanity. After all, it’s easier to change a wire frame than to rewrite code – or worse, realize that you have chosen to incorporate a poorly supported JavaScript library or an incomplete data source. 

Every developer knows the pain of having to solve a seemingly unique problem – and the joy of finding that a complex problem has already been solved. There are many active, thriving open source communities online that can support spatial web developers. GeoJSON files can be easily and natively visualized with popular JavaScript mapping libraries like Leaflet, Mapbox GL JS and OpenLayers – there is a wealth of information online on doing so. Other formats do not enjoy the same support, which can make converting an incompatible format into one that integrates easily with the rest of an application's tech stack more difficult. Tools like mapshaper, QGIS, GDAL and Turf.js can help developers convert between formats, reproject coordinates, and perform spatial analysis and manipulation.
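
A small example of the conversion step: many datasets arrive as plain tables (say, parsed CSV rows with longitude/latitude columns) rather than as GeoJSON. The sketch below – the row shape and place names are illustrative, not from any particular dataset – turns such rows into a FeatureCollection that Leaflet, Mapbox GL JS or OpenLayers can render natively.

```javascript
// Convert tabular rows ({ lon, lat, ...attributes }) into a GeoJSON
// FeatureCollection of Point features.
function rowsToGeoJSON(rows) {
  return {
    type: "FeatureCollection",
    features: rows.map(({ lon, lat, ...properties }) => ({
      type: "Feature",
      // Note: GeoJSON coordinate order is [longitude, latitude].
      geometry: { type: "Point", coordinates: [lon, lat] },
      properties,
    })),
  };
}

const collection = rowsToGeoJSON([
  { name: "Big Ben", lon: -0.1246, lat: 51.5007 },
  { name: "Tower Bridge", lon: -0.0754, lat: 51.5055 },
]);
// In a Leaflet app, this renders directly: L.geoJSON(collection).addTo(map);
```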

It’s a cliché, but working smarter, not harder, is what developers should strive for. 

Manage asynchronicity

The web presents some interesting challenges for developers. For one, we don’t know how long some processes will take. For example, how quickly data is fetched from an API depends on bandwidth, the amount of data, the server, and so on. This is especially relevant in apps using spatial data as datasets are often fetched from external APIs, and are often sizable.

This is complicated by the fact that JavaScript code often relies on assets loaded earlier – to use a variable x, x has to be declared and assigned a value. When that assignment operation takes an indeterminate amount of time, how do you know when to proceed to computing x + 1?

Fortunately, the JavaScript community has designed a sophisticated suite of solutions to this problem. A promise holds the place of a value that is not yet known – the eventual result of an asynchronous operation. When the operation finishes, the promise is resolved – or rejected. By chaining promises together, programmers can create programs that handle asynchronous operations efficiently and cleanly.
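
A minimal illustration of the x + 1 problem above, with the asynchronous "fetch" simulated by a timer (a real app would use fetch() or a library call):

```javascript
// A promise stands in for a value that is not known yet.
function fetchValue(x, ms) {
  return new Promise((resolve) => setTimeout(() => resolve(x), ms));
}

// x + 1 can only be computed once x exists, so it goes in .then():
const resultPromise = fetchValue(41, 10).then((x) => x + 1);

resultPromise.then((sum) => console.log(sum)); // logs 42 once resolved
```

Each .then() runs only after the previous step's value is available, so the "when do I proceed?" question is answered by the chain itself.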

Building on promises, ECMAScript 2017’s async / await syntax makes “asynchronous code easier to write and to read afterwards”, enhancing the toolbelt devs can use to deal with these asynchronous operations.
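
The same logic written with async / await reads top-to-bottom like synchronous code (again with a timer standing in for a real network request):

```javascript
// Simulated asynchronous fetch: resolves with x after ms milliseconds.
function fetchValue(x, ms) {
  return new Promise((resolve) => setTimeout(() => resolve(x), ms));
}

async function computeSum() {
  const x = await fetchValue(41, 10); // execution pauses here until resolved
  return x + 1;                        // now x is a plain value
}

computeSum().then((sum) => console.log(sum)); // logs 42
```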

To write performant apps, developers often need to fetch data from multiple sources and handle asynchronous operations without waiting for one process to finish before starting another – that is, they need to run operations concurrently. One tool for this is the Promise.all() method, which accepts an array of promises and resolves only once every one of them has resolved, allowing several requests to be in flight at the same time.
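
A sketch of that concurrent pattern, with three simulated dataset requests (the layer names and delays are illustrative):

```javascript
// Simulated dataset fetch: resolves with its name after ms milliseconds.
function fakeFetch(name, ms) {
  return new Promise((resolve) => setTimeout(() => resolve(name), ms));
}

async function loadAllLayers() {
  // All three "requests" start immediately; Promise.all resolves when the
  // slowest one finishes, so total time is roughly 30 ms, not 60 ms.
  const layers = await Promise.all([
    fakeFetch("basemap", 30),
    fakeFetch("boundaries", 20),
    fakeFetch("points-of-interest", 10),
  ]);
  return layers; // results arrive in input order, regardless of timing
}
```

Note that Promise.all preserves the order of its input array, so each result can be matched back to the request that produced it.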

Understanding tooling and techniques is therefore essential. When it comes to asynchronous data, for example, JavaScript has a lot of data management tools built into the language itself, which can vastly reduce the potential complexity, improve performance, and result in better applications.

Don’t neglect platforms

A key challenge for designers and developers is creating a coherent experience across different platforms. Building an incredible data visualization feature for a webpage might work well on a laptop screen, but how well does this translate when viewed on a smartphone? For B2B applications, use cases are generally geared toward users sitting at a PC in an office. But increasingly, compatibility with portable devices, such as smartphones, is a requirement. 

For GIS developers, making compelling and usable mapping data visualizations that work on desktop and touchscreen devices of all sizes can be challenging. The trick is to design your essential interface interactions for touch first. This means initially excluding mouse-rollover, or right-click interactions. You can add those interactions later, but only for non-essential actions like shortcuts to things that are otherwise available through other click/touch-events. This is a tough ask for dense interfaces like GIS applications that often rely on right-click menus and rollovers to expose contextual information about a geographical feature.

A reduced set of interaction events is an integral part of the “mobile-first” design philosophy. It goes hand in hand with small screen real estate, and finger-sized hit-areas. 

Of course, it’s not possible to ignore mobile users, so design thinking must inform the early stages of any application that requires compelling data visualizations. Sometimes – especially with mapping visualizations meant to support routing or wayfinding – a mobile-first approach should be taken. Either way, think through the user needs early, and solicit feedback through regular user testing. 

So, there you have it. Just a few tips to consider before embarking on your journey to creating a location-based application.

Go to Source
Author: johnx25bd


Meteorite strikes may create unexpected form of silica

When a meteorite hurtles through the atmosphere and crashes to Earth, how does its violent impact alter the minerals found at the landing site? What can the short-lived chemical phases created by these extreme impacts teach scientists about the minerals existing at the high-temperature and pressure conditions found deep inside the planet?

New work led by Carnegie’s Sally June Tracy examined the crystal structure of the silica mineral quartz under shock compression and is challenging longstanding assumptions about how this ubiquitous material behaves under such intense conditions. The results are published in Science Advances.

“Quartz is one of the most abundant minerals in Earth’s crust, found in a multitude of different rock types,” Tracy explained. “In the lab, we can mimic a meteorite impact and see what happens.”

Tracy and her colleagues — Washington State University’s (WSU) Stefan Turneaure and Princeton University’s Thomas Duffy, a former Carnegie Fellow — used a specialized cannon-like gas gun to accelerate projectiles into quartz samples at extremely high speeds — several times faster than a bullet fired from a rifle. Special x-ray instruments were used to discern the crystal structure of the material that forms less than one-millionth of a second after impact. Experiments were carried out at the Dynamic Compression Sector (DCS), which is operated by WSU and located at the Advanced Photon Source, Argonne National Laboratory.

Quartz is made up of one silicon atom and two oxygen atoms arranged in a tetrahedral lattice structure. Because these elements are also common in the silicate-rich mantle of the Earth, discovering the changes quartz undergoes at high-pressure and -temperature conditions, like those found in the Earth’s interior, could also reveal details about the planet’s geologic history.

When a material is subjected to extreme pressures and temperatures, its internal atomic structure can be re-shaped, causing its properties to shift. For example, both graphite and diamond are made from carbon. But graphite, which forms at low pressure, is soft and opaque, and diamond, which forms at high pressure, is super-hard and transparent. The different arrangements of carbon atoms determine their structures and their properties, and that in turn affects how we engage with and use them.

Despite decades of research, there has been a long-standing debate in the scientific community about what form silica would take during an impact event, or under dynamic compression conditions such as those deployed by Tracy and her collaborators. Under shock loading, silica is often assumed to transform to a dense crystalline form known as stishovite — a structure believed to exist in the deep Earth. Others have argued that because of the fast timescale of the shock the material will instead adopt a dense, glassy structure.

Tracy and her team were able to demonstrate that counter to expectations, when subjected to a dynamic shock of greater than 300,000 times normal atmospheric pressure, quartz undergoes a transition to a novel disordered crystalline phase, whose structure is intermediate between fully crystalline stishovite and a fully disordered glass. However, the new structure cannot last once the burst of intense pressure has subsided.

“Dynamic compression experiments allowed us to put this longstanding debate to bed,” Tracy concluded. “What’s more, impact events are an important part of understanding planetary formation and evolution and continued investigations can reveal new information about these processes.”

This research was supported by the Defense Threat Reduction Agency and the NSF. Washington State University (WSU) provided experimental support through awards from the U.S. Department of Energy (DOE)/National Nuclear Security Administration (NNSA).

This work is based on experiments performed at the Dynamic Compression Sector, operated by WSU under a DOE/NNSA award. This research used the resources of the Advanced Photon Source, a Department of Energy Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory.

Go to Source


Is Embedding Content via Instagram’s API Legal?

Ars Technica received notification via email earlier this week that Instagram does not extend its copyright license to cover content that is embedded on external websites via the Instagram API. This statement comes in light of an ongoing legal battle related to third-party use of the social platform’s data. 

Ars Technica writer Timothy B. Lee quoted an Instagram representative as saying:

“While our terms allow us to grant a sub-license, we do not grant one for our embeds API … Our platform policies require third parties to have the necessary rights from applicable rights holders. This includes ensuring they have a license to share this content, if a license is required by law.”

The revelation of this distinction comes after Mashable was sued by a photographer for embedding his photograph on its website via Instagram. Mashable claimed that the copyright license in Instagram's terms of service is sufficient to protect companies that embed this content on their websites. Instagram's new stance on this issue puts Mashable's position, and possibly its defense of the lawsuit, on thin ice.

This situation further highlights the need to properly vet data sources for potential legal conflicts. It is very possible that Instagram could provide this level of protection to their API users, but whether they will or not seems unclear. 

Make sure to check out the original article for more detail on the situation. 

Go to Source
Author: KevinSundstrom

3D Printing Industry

Oak Ridge National Laboratory is developing a 3D printed nuclear reactor core

Researchers at the US Department of Energy (DOE)’s Oak Ridge National Laboratory (ORNL) are developing a nuclear reactor core using 3D printing. As part of its Transformational Challenge Reactor (TCR) Demonstration Program, which aims to build an additively manufactured microreactor, ORNL has refined its design of the reactor core, while also scaling up the additive […]

Go to Source
Author: Anas Essop


Smart contact lenses that diagnose and treat diabetes

Diabetes is called an incurable disease because once it develops, it does not disappear regardless of treatment in modern medicine. Having diabetes means a life-long obligation of insulin shots and monitoring of blood glucose levels. But what if you could control the secretion of insulin just by wearing contact lenses?

Recently, a research team at POSTECH developed wirelessly driven ‘smart contact lens’ technology that can detect diabetes and further treat diabetic retinopathy just by wearing them.

Professor Sei Kwang Hahn and graduate students Do Hee Keum and Su-Kyoung Kim of POSTECH's Department of Materials Science and Engineering, and Professor Jae-Yoon Sim and graduate student Jahyun Koo of the Department of Electronics and Electrical Engineering, have developed a wirelessly powered smart contact lens that can diagnose and treat diabetes by controlling drug delivery with electrical signals. The findings were recently published in Science Advances. The smart contact lenses developed by the research team are made of biocompatible polymers and integrate biosensors, drug delivery and data communication systems.

The research team verified that the glucose levels in the tears of diabetic rabbits, analyzed by the smart contact lenses, matched their blood glucose levels measured with a conventional glucose sensor that utilizes drawn blood. The team additionally confirmed that the drugs encased in the smart contact lenses could treat diabetic retinopathy.

Recently, by applying the platform technology of these smart contact lenses, research has been conducted to expand the scope of electroceuticals, which use electrical stimulation to treat brain disorders such as Alzheimer's and Parkinson's diseases, as well as mental illnesses including depression.

The research team expects this development of self-controlled therapeutic smart contact lenses with real-time biometric analysis to be quickly applied to wearable healthcare industries.

Professor Sei Kwang Hahn, who led the research, stated, “Despite the full-fledged research and development of wearable devices from global companies, the commercialization of wireless-powered medical devices for diagnosis and treatment of diabetes and retinopathy is insufficient.” He added, “We expect that this research will greatly contribute to the advancement of related industries by being the first in developing wireless-powered smart contact lenses equipped with drug delivery system for diagnosis and treatment of diabetes, and treatment of retinopathy.”

This research was financially supported by Samsung Science and Technology Foundation, the Global Frontier Project (Director: Professor Kilwon Cho), the Mid-career Researcher Program from the National Research Foundation of Korea, and World Class 300 Project of the Ministry of SMEs and Startups. The research findings on smart contact lens-based technologies were introduced in the January issue of Nature Reviews Materials, which drew attention from the academic circles. The research team is preparing to carry out clinical trials for the safety and validity assessment for commercialization of smart contact lenses in collaboration with Interojo Inc.

Story Source:

Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

Go to Source


Ear’s inner secrets revealed with new technology

What does it actually look like deep inside our ears? This has been very difficult to study as the inner ear is protected by the hardest bone in the body. But with the help of synchrotron X-rays, it is now possible to depict details inside the ear three-dimensionally. Together with Canadian colleagues, researchers from Uppsala University have used the method to map the blood vessels of the inner ear.

The study, which was published in the scientific journal Scientific Reports, may help explain why treating deafness with cochlear implants (CI) is so effective. With this method, an electrode that electrically stimulates the auditory nerve is surgically inserted into the inner ear. To date, around 500,000 people worldwide have been treated with this technique. In Uppsala, the operation is also performed on patients with severe hearing loss who can still perceive sounds at lower frequencies.

“We need to get better at understanding the micro-anatomy of the human auditory organ and how implanted electrodes affect structures in the cochlea. That can lead to improved electrode designs and better hearing results. 3D reconstructions mean that we can study new surgical paths to the auditory nerve,” says Helge Rask-Andersen, Senior Professor in Experimental Otology at the Department of Surgical Sciences.

To be able to study the blood vessels in the inner auditory organ, the researchers used the synchrotron system in Saskatoon, Saskatchewan, Canada. The system, which is one of eight in the world, is as large as a football pitch and accelerates particles with very high energy. This makes it possible to create pictures of the smallest parts of the inner ear. Through computer processing, the images can then be made three-dimensional.

The researchers hope the method in the future can contribute to new knowledge about diseases of the ear, such as Meniere’s disease, sudden deafness and tinnitus, the causes of which are still largely unknown. But as yet, it is not possible to study living patients with this technique. The radiation is too strong.

“We study specimens from the deceased, meaning donated temporal bones. We hope that the technology can be modified in the future to achieve better resolution than today,” says Helge Rask-Andersen.

Story Source:

Materials provided by Uppsala University. Note: Content may be edited for style and length.

Go to Source


Contentful Offers “API Accelerators” To Speed App Dev On Its Headless CMS (includes video)

As the API economy evolves, so too does the art form of getting developers from zero to “Hello API.” So much so that ProgrammableWeb editor Wendell Santos is marshalling …

Full-Text Transcript of Interview with Benjamin Keyser

David Berlind: Hi, I’m Dave Berlind, editor in chief of ProgrammableWeb. Today is Thursday, February 20th, 2020, a lot of 20s in there. And this is ProgrammableWeb’s Developers Rock Podcast. With me today is Benjamin Keyser. He is the Vice President of Product for Contentful, a major API provider out there that we cover from time to time. They’re in the content management system space. Benjamin, welcome to the show.

Benjamin Keyser: Thanks very much.

David: It’s great to have you. Let’s start off real quickly, what is it that Contentful does?

Benjamin: Contentful is a headless CMS. We are a content platform that you can integrate into your modern tech stack and deliver content throughout your whole technology ecosystem.

David: Headless CMS. What is a headless CMS? What does that mean?

Benjamin: We’re a platform-first and API-first CMS. So we manage content, help you develop content models and do all the content delivery. But we’re API-first, so we’re really focused on delivering content at the API level. Headless CMSes generally don’t come with complex interfaces where you do all the settings and add content, although for Contentful you could actually build those and add them yourself if you need to. But the focus of our software is the platform.

David: Okay. So a good example of a CMS with a head on it is the one that we used to run ProgrammableWeb. We use Drupal, and technically you could break the CMS side of it off and put your own head on top of it. But it also comes with its own user interface that is programmable as well. So that’s more of a head CMS, one with the head on it. But you’re headless, which means that I can build my own user experience on top of your content management system and I just access all of the content that’s in there through the APIs that you provide, right?

Benjamin: Exactly. I guess the main advantage that we have is that coming from the perspective of API-first, all of the power and capabilities of the CMS are designed as APIs before they’re designed as UXes. So there’s nothing that you can do with our software that you can’t do through an API.

David: Okay, terrific. So you mentioned in your little preamble there something about a modern tech stack, and I remember seeing that in the press release coming up to this interview. You’ve got an announcement here. What do you mean by a modern tech stack?

Benjamin: Developers want to be able to use whatever system and whatever APIs make the most sense for the organization and for the problem they’re trying to solve. So the modern tech stack lets them match up as many different kinds of APIs as they need to create the capabilities that they want.

David: Okay. Most developers can do that already. A lot of the applications we see out there — mobile applications — are already accessing multiple dissimilar APIs from different providers across the web. What makes what you’re launching today any different from those?

Benjamin: I think it’s possible to build applications on any kind of APIs. What’s different about the App Framework is really that it’s just an accelerator. We provide a way for developers to use 60 new blueprint apps that are open-source that they can look at to get a head start. We’re also providing two SDKs that will help them get a head start in building common kinds of functionality.

The two libraries that are coming up, one is digital asset management, another one is related to eCommerce. And that will just help really make it go way faster for developers who want to build something new.

David: So they’re sort of prebuilt applications that you’re talking about, reference applications. Do they involve APIs that are not from Contentful? Are they showing developers how much easier it can be to take these other APIs and mash them up with Contentful’s APIs?

Benjamin: Absolutely. I mean, I brought a list here of the apps, I can just pull out a couple here. We have a Dropbox app, an Optimizely app, Cloudinary, Bynder, Netlify. Those are all example apps that are bringing together sets of APIs from other providers into Contentful and creating new capabilities.

David: Terrific. And so you’ve got these sample applications. Are they in certain languages, all languages? What kind of developers are going to come and say, “Wow, that’s really going to help me accelerate my productivity here.”

Benjamin: The applications are all written in JavaScript and they’re all available for you to walk through and see how the techniques are done. So it makes it very easy to learn how to work with the Contentful APIs very quickly.

David: Terrific. So when you say JavaScript, you’re talking about Node.js, the server-side version of JavaScript. That’s primarily how people mash up APIs into something, I’m assuming?

Benjamin: That’s correct.

David: Yeah, terrific. So you’ve got a whole bunch of these sort of prebuilt integrations, example applications. And do they comprise this App Framework that you’re talking about or is there more to this App Framework that’s mentioned in your press release?

Benjamin: There’s a couple of new APIs that we’re offering. They give applications a special place in the system, and then we’re also providing those two libraries that I mentioned, plus all of the open-source blueprint apps that help people get a jump-start. So the App Framework is really a collection of tools that will help developers put together the kind of functionality that they need for their organization, or they could take one of these off the shelf and just adapt it a little bit.

David: Okay. The libraries, let’s talk a little bit about those. What do the libraries actually do? What’s the background on those?

Benjamin: We have a digital asset management and e-commerce library, two separate libraries. They just encapsulate some of the most common things that you do with those systems to help accelerate development around those areas.

David: Are the eCommerce capabilities coming from your APIs, or are there eCommerce capabilities that are coming from external APIs and you’ve built a library? It’s sort of an SDK that just pre-bundles all of that together?

Benjamin: We have a Commerce Layer App and we also have… I just want to make sure I get the name right, the Commerce Tools App. Both are example apps that will show you what those libraries are capable of doing, but it’s to help connect together Contentful APIs with those commerce APIs.

David: Terrific. So the commerce APIs are not your APIs, they’re external APIs?

Benjamin: No, you’re correct.

David: Okay, great. Do you know what organizations those APIs are coming from? You have Shopify, Wix, et cetera. I don’t know if those are the ones, but do you know which ones they…

Benjamin: Yeah, we have a Shopify App, a Commerce Tools App and… Let’s see, I’m just looking to see if there are any other additional eCommerce providers here. Those are the ones in the example apps that are coming out.

David: Great. Well, this is great because developers love to be more productive. We have lots of developers who come to ProgrammableWeb who search through our directory of APIs looking for what API they’re going to include in the next application that they build. Or what APIs I should say, because most are relying on multiple APIs. Anything that any API provider can do to make it easier and faster to build those applications is always welcomed. And so it sounds like you guys are really enabling developers and focusing on their needs before focusing on something like a front end for your content management system. That’s terrific.

Benjamin: I think you put your finger on it. This is really to help developers go faster and to bring more powerful capabilities into whatever kind of work they’re doing that involves Contentful and content. As I mentioned, we’re always thinking about things in an API-first way and we’re trying to find the best way to empower developers to do the next generation of what they need to work on.

David: Now, developers work in other languages besides Node and JavaScript. Do you have any more plans to develop those sample applications in languages that other developers might be using, instead of forcing them into the JavaScript side of things?

Benjamin: Not at this time, and it’s a great suggestion and definitely something we’re thinking about.

David: Okay. And what’s coming next? Are you just going to keep building out more of these sample applications in this Application Framework? More libraries?

Benjamin: Well, one of the things that we are really focused on is making the App Framework more powerful over time. So we’re trying to find other ways that applications might be able to interact with Contentful in even more functional, capable ways. So we’re generally trying to just build up the power there, I think.

David: Terrific. Well, Benjamin Keyser, Vice President of Product for Contentful, coming to us from Berlin, Germany. Thank you very much for joining us today.

Benjamin: Thanks very much.

David: We’ve been speaking with Benjamin Keyser, Vice President of Product at Contentful. I’m David Berlind, editor in chief, ProgrammableWeb. This is ProgrammableWeb’s Developers Rock Podcast. For more videos like this one, you can go to our YouTube channel, or you can find this video and others on our site. The video will be embedded in an article that includes not only the full transcript of everything that Benjamin said, but also the audio-only version. You can listen to it as a podcast. It’s available through iTunes and Google Play Music. Thanks very much for joining us. We’ll see you at the next video.

Go to Source
Author: david_berlind


Ancient shell shows days were half-hour shorter 70 million years ago

Earth turned faster at the end of the time of the dinosaurs than it does today, rotating 372 times a year, compared to the current 365, according to a new study of fossil mollusk shells from the late Cretaceous. This means a day lasted only 23 and a half hours, according to the new study in AGU’s journal Paleoceanography and Paleoclimatology.

The ancient mollusk, from an extinct and wildly diverse group known as rudist clams, grew fast, laying down daily growth rings. The new study used lasers to sample minute slices of shell and count the growth rings more accurately than human researchers with microscopes.

The growth rings allowed the researchers to determine the number of days in a year and more accurately calculate the length of a day 70 million years ago. The new measurement informs models of how the Moon formed and how close to Earth it has been over the 4.5-billion-year history of the Earth-Moon gravitational dance.

The new study also found corroborating evidence that the mollusks harbored photosynthetic symbionts that may have fueled reef-building on the scale of modern-day corals.

The high resolution obtained in the new study combined with the fast growth rate of the ancient bivalves revealed unprecedented detail about how the animal lived and the water conditions it grew in, down to a fraction of a day.

“We have about four to five datapoints per day, and this is something that you almost never get in geological history. We can basically look at a day 70 million years ago. It’s pretty amazing,” said Niels de Winter, an analytical geochemist at Vrije Universiteit Brussel and the lead author of the new study.

Climate reconstructions of the deep past typically describe long term changes that occur on the scale of tens of thousands of years. Studies like this one give a glimpse of change on the timescale of living things and have the potential to bridge the gap between climate and weather models.

Chemical analysis of the shell indicates ocean temperatures were warmer in the Late Cretaceous than previously appreciated, reaching 40 degrees Celsius (104 degrees Fahrenheit) in summer and exceeding 30 degrees Celsius (86 degrees Fahrenheit) in winter. The summer high temperatures likely approached the physiological limits for mollusks, de Winter said.

“The high fidelity of this data-set has allowed the authors to draw two particularly interesting inferences that help to sharpen our understanding of both Cretaceous astrochronology and rudist palaeobiology,” said Peter Skelton, a retired lecturer of palaeobiology at The Open University and a rudist expert unaffiliated with the new study.

Ancient reef-builders

The new study analyzed a single individual that lived for over nine years in a shallow seabed in the tropics, a location which is now, 70 million years later, dry land in the mountains of Oman.

Torreites sanchezi mollusks look like tall pint glasses with lids shaped like bear claw pastries. The ancient mollusks had two shells, or valves, that met in a hinge, like asymmetrical clams, and grew in dense reefs, like modern oysters. They thrived in water several degrees warmer worldwide than modern oceans.

In the late Cretaceous, rudists like T. sanchezi dominated the reef-building niche in tropical waters around the world, filling the role held by corals today. They disappeared in the same event that killed the non-avian dinosaurs 66 million years ago.

“Rudists are quite special bivalves. There’s nothing like it living today,” de Winter said. “In the late Cretaceous especially, worldwide most of the reef builders are these bivalves. So they really took on the ecosystem building role that the corals have nowadays.”

The new method focused a laser on small bits of shell, making holes 10 micrometers in diameter, or about as wide as a red blood cell. Trace elements in these tiny samples reveal information about the temperature and chemistry of the water at the time the shell formed. The analysis provided accurate measurements of the width and number of daily growth rings as well as seasonal patterns. The researchers used seasonal variations in the fossilized shell to identify years.

The new study found the composition of the shell changed more over the course of a day than over seasons, or with the cycles of ocean tides. The fine-scale resolution of the daily layers shows the shell grew much faster during the day than at night.

“This bivalve had a very strong dependence on this daily cycle, which suggests that it had photosymbionts,” de Winter said. “You have the day-night rhythm of the light being recorded in the shell.”

This result suggests daylight was more important to the lifestyle of the ancient mollusk than might be expected if it fed itself primarily by filtering food from the water, like modern day clams and oysters, according to the authors. De Winter said the mollusks likely had a relationship with an indwelling symbiotic species that fed on sunlight, similar to living giant clams, which harbor symbiotic algae.

“Until now, all published arguments for photosymbiosis in rudists have been essentially speculative, based on merely suggestive morphological traits, and in some cases were demonstrably erroneous. This paper is the first to provide convincing evidence in favor of the hypothesis,” Skelton said, but cautioned that the new study’s conclusion was specific to Torreites and could not be generalized to other rudists.

Moon retreat

De Winter’s careful count of the number of daily layers found 372 for each yearly interval. This was not a surprise, because scientists know days were shorter in the past. The result is, however, the most accurate now available for the late Cretaceous, and has a surprising application to modeling the evolution of the Earth-Moon system.

The length of a year has been constant over Earth’s history, because Earth’s orbit around the Sun does not change. But the number of days in a year has been decreasing over time, because the days themselves have been growing longer: friction from ocean tides, caused by the Moon’s gravity, gradually slows Earth’s rotation.
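The arithmetic behind the half-hour-shorter day follows directly from this: since the total number of hours in a year is fixed, packing 372 days into a year instead of 365 makes each day proportionally shorter. A quick check:

```python
# If the year's total length is unchanged, 372 days per year instead of
# ~365.25 means each day must be proportionally shorter.
hours_per_year = 365.25 * 24          # ~8,766 hours in a modern year
cretaceous_days = 372                 # daily growth rings per annual cycle
day_length = hours_per_year / cretaceous_days
print(round(day_length, 2))           # ~23.56 hours, i.e. about 23 and a half
```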

The pull of the tides accelerates the Moon a little in its orbit, so as Earth’s spin slows, the Moon moves farther away. The Moon is pulling away from Earth at 3.82 centimeters (1.5 inches) per year. Precise laser measurements of the distance from Earth to the Moon have demonstrated this increasing distance since the Apollo program left helpful reflectors on the Moon’s surface.

But scientists conclude the Moon could not have been receding at this rate throughout its history, because extrapolating its retreat back in time at the rates implied by present-day tidal friction would put the Moon inside the Earth only 1.4 billion years ago. Scientists know from other evidence that the Moon has been with us much longer, most likely coalescing in the wake of a massive collision early in Earth’s history, over 4.5 billion years ago. So the Moon’s rate of retreat has changed over time, and information from the past, like a year in the life of an ancient clam, helps researchers reconstruct that history and model the formation of the Moon.

Because 70 million years is a blink in time in the history of the Moon, de Winter and his colleagues hope to apply their new method to older fossils and catch snapshots of days even deeper in time.

Go to Source


Top 10 Time APIs

What time is it? What time will we arrive? What time does the sun set?

There are many reasons for developers to add time components to applications. In order to do that, they need access to Application Programming Interfaces, or APIs, that are concerned with time. The best place to find these APIs is in the ProgrammableWeb directory.

In ProgrammableWeb’s Time category, developers can find APIs for timers, time zones, tide tables, sunset or sunrise, time fencing, device times, employee clocks, time stamping, travel route times, movie showtimes, travel wait times, and others.

This article examines the ten top Time APIs based on page visits to ProgrammableWeb. It’s about Time!

1. TimeStation API

TimeStation is a time and attendance system that runs on smartphones and tablets. The TimeStation API allows customer applications to retrieve a variety of attendance and employee reports. The REST API returns CSV or XLS formatted data. Available reports include Current Employee Status, Daily Attendance & Absence, Employee Activity, Open Shifts, Payroll Export and more.

2. WorldTime API

The WorldTime API returns the local time for a given time zone in either JSON or plain text format. This API can also return information on whether a time zone is currently in Daylight Saving Time (DST), when DST starts and ends, and the UTC offset.
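As a sketch of how such a service is consumed, the snippet below fetches and summarizes a WorldTime response. The endpoint path and the field names (`datetime`, `utc_offset`, `dst`) follow the service’s documented JSON response, but treat them as assumptions to verify against the current docs:

```python
import json
from urllib.request import urlopen

WORLDTIME_URL = "https://worldtimeapi.org/api/timezone/{zone}"

def summarize(payload: dict) -> str:
    """Condense a WorldTime JSON response into a one-line summary."""
    dst = "DST" if payload.get("dst") else "no DST"
    return f'{payload["timezone"]}: {payload["datetime"]} (UTC{payload["utc_offset"]}, {dst})'

def local_time(zone: str) -> str:
    """Fetch and summarize the current local time for an IANA zone name."""
    with urlopen(WORLDTIME_URL.format(zone=zone)) as resp:
        return summarize(json.load(resp))

# Example response fields (trimmed) in the shape the service documents:
sample = {"timezone": "Europe/Brussels",
          "datetime": "2020-03-09T15:01:02.1+01:00",
          "utc_offset": "+01:00", "dst": False}
print(summarize(sample))
```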

3. World Tides API

World Tides gives tide predictions for any location in the world. The World Tides API will return information on the coordinates of the closest point where tidal information is available, the height of tides at a given time, or raw tidal data from which users can calculate tide heights themselves.

Get tide information such as distance, height, start and end times via this API. Image: Fame-IT/Brainware

4. Prayer Times API

Prayer Times is an application built for Muslims who live in non-Islamic countries and cannot hear Adhan (or Azan)–the call to prayer–5 times a day. The Prayer Times API supports prayer calendar, geolocation, and current time.

5. Sunrise and Sunset Times API

Sunrise-Sunset is a free online service that provides users with information on day length, twilight, sunrise times, and sunset times for any date and location in the world. The Sunrise Sunset Times API allows users to retrieve exact sunrise and sunset times for a given latitude and longitude and, optionally, a specified date.
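A minimal sketch of a request to this service is below. The endpoint and parameter names (`lat`, `lng`, `date`, and `formatted=0` for ISO 8601 output) are taken from the service’s public documentation, but verify them against the current docs before relying on them:

```python
from urllib.parse import urlencode

def sunrise_sunset_url(lat: float, lng: float, date: str = "today") -> str:
    """Build a request URL for the Sunrise-Sunset JSON endpoint."""
    query = urlencode({"lat": lat, "lng": lng, "date": date, "formatted": 0})
    return f"https://api.sunrise-sunset.org/json?{query}"

# e.g. sunrise/sunset times for Brussels on a given date:
print(sunrise_sunset_url(50.85, 4.35, "2020-03-09"))
```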

Get sunset and sunrise times for any location on Earth via this API. Screenshot: Sunrise-Sunset

6. Clockify API

Clockify provides free time tracking software services. With the Clockify API, developers can integrate projects, reports, summary reports, tasks, time entries, users, groups, and workspaces into applications.

7. Google Maps Time Zone API

The Google Maps Time Zone API allows developers to retrieve time offset data for any location on Earth. Developers can request information for a specific latitude/longitude pair and date, and the API will return the time zone’s name, its offset from UTC, and its daylight saving offset. Results are returned in English by default, but other languages are available.
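A request to this API takes a `location` pair, a Unix `timestamp`, and an API key. The sketch below builds such a URL; the endpoint and parameter names follow Google’s documentation, and `YOUR_API_KEY` is a placeholder you would replace with a real key:

```python
import time
from typing import Optional
from urllib.parse import urlencode

def timezone_url(lat: float, lng: float, api_key: str,
                 when: Optional[int] = None) -> str:
    """Build a Google Maps Time Zone API request URL."""
    query = urlencode({
        "location": f"{lat},{lng}",   # latitude,longitude pair
        "timestamp": when if when is not None else int(time.time()),
        "key": api_key,
    })
    return f"https://maps.googleapis.com/maps/api/timezone/json?{query}"

print(timezone_url(40.7128, -74.0060, "YOUR_API_KEY", when=0))
```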

8. MoonCalc API

The MoonCalc API can determine the course of the Moon, moonrise, moon angle, full moons and lunar eclipses for any location and time. The API lets users integrate calculations of the Moon’s position based on latitude, longitude, date, time and more.

9. Time and Date Daylight Saving Time (DST) Worldwide API

With the Daylight Saving Time Worldwide API, developers can manage dates, times and zone changes in multiple countries. Requests accept recognized parameters such as year, country, lang, listplaces, and timechanges.

10. Unix Timestamp Converter API

A UNIX timestamp is the number of seconds that have passed since midnight on the 1st of January 1970, UTC (currently a ten-digit number). It is useful for representing a universal date and time without concern for time zones. The Unix Timestamp Converter API converts Unix timestamps to DateTime objects and DateTime objects to Unix timestamps.
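The conversion itself needs no external API at all; Python’s standard library does it directly, as this sketch shows:

```python
from datetime import datetime, timezone

def to_datetime(ts: int) -> datetime:
    """Unix timestamp -> timezone-aware UTC datetime."""
    return datetime.fromtimestamp(ts, tz=timezone.utc)

def to_timestamp(dt: datetime) -> int:
    """Timezone-aware datetime -> Unix timestamp."""
    return int(dt.timestamp())

print(to_datetime(0))                            # 1970-01-01 00:00:00+00:00
print(to_timestamp(to_datetime(1_000_000_000)))  # 1000000000
```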

Not pressed for time? Check out more than 80 APIs, 30 SDKs, and 30 Source Code Samples from the Time category on ProgrammableWeb.

Author: joyc


Organized cybercrime — not your average mafia

Does the common stereotype for “organized crime” hold up for organizations of hackers? Research from Michigan State University is one of the first to identify common attributes of cybercrime networks, revealing how these groups function and work together to cause an estimated $445-600 billion of harm globally per year.

“It’s not the ‘Tony Soprano mob boss type’ who’s ordering cybercrime against financial institutions,” said Thomas Holt, MSU professor of criminal justice and co-author of the study. “Certainly, there are different nation states and groups engaging in cybercrime, but the ones causing the most damage are loose groups of individuals who come together to do one thing, do it really well — and even for a period of time — then disappear.”

In cases like New York City’s “Five Families,” organized crime networks have historic validity, and are documented and traceable. In the online space, however, it’s a very difficult trail to follow, Holt said.

“We found that these cybercriminals work in organizations, but those organizations differ depending on the offense,” Holt said. “They may have relationships with each other, but they’re not multi-year, multi-generation, sophisticated groups that you associate with other organized crime networks.”

Holt explained that organized cybercrime networks are made up of hackers coming together because of functional skills that allow them to collaborate to commit the specific crime. So, if someone has specific expertise in password encryption and another can code in a specific programming language, they work together because they can be more effective — and cause greater disruption — together than alone.

“Many of these criminals connected online, at least initially, in order to communicate to find one another,” Holt said. “In some of the bigger cases that we had, there’s a core group of actors who know one another really well, who then develop an ancillary network of people who they can use for money muling or for converting the information that they obtained into actual cash.”

Holt and lead author E. R. Leukfeldt, researcher at the Netherlands Institute for the Study of Crime and Law Enforcement, reviewed 18 cases from the Netherlands in which individuals were prosecuted for cases related to phishing. Data came directly from police files and was gathered through wire and IP taps, undercover policing, observation and house searches.

Beyond accessing credit cards and banking information, Holt and Leukfeldt found that cybercriminals also worked together to create fake documents so they could obtain money from banks under fraudulent identities.

The research, published in International Journal of Offender Therapy and Comparative Criminology, also debunks common misconceptions that sophisticated organized criminal networks — such as the Russian mafia — are the ones creating cybercrime.

Looking ahead as law enforcement around the world takes steps to crack down on these hackers, Holt hopes his findings will help guide them in the right direction.

“As things move to the dark web and use cryptocurrencies and other avenues for payment, hacker behaviors change and become harder to fully identify, it’s going to become harder to understand some of these relational networks,” Holt said. “We hope to see better relationships between law enforcement and academia, better information sharing, and sourcing so we can better understand actor behaviors.”

Story Source:

Materials provided by Michigan State University. Note: Content may be edited for style and length.
