Sensibill Launches Receipt Extraction API

Sensibill, a provider of SKU-level data and financial tools like digital receipt management that helps institutions better know and serve their customers, today announced the launch of its newest product: Receipt Extraction API. The machine learning-based solution automates and streamlines the transcription of receipts, allowing businesses to deepen customer engagement and loyalty at scale.

Sensibill’s Receipt Extraction API solution will benefit a wide range of businesses that need to quickly and accurately extract receipt data at scale. For example, enterprise accounting firms can use the service to reduce costs and maintain profitability, despite economic pressures. Financial services companies like accounting software and PFM providers can gain access to SKU-level data to drive personalization, using the technology to create an innovative edge and differentiate themselves from the competition. And, loyalty and reward companies that need near-perfect extraction capabilities can leverage Receipt Extraction API to help deliver rewards and value back to users more quickly, increasing efficiencies and improving product quality and accuracy.

“There is a new urgency around cost savings, efficiencies, digital engagement and innovation in otherwise mature markets,” explained Corey Gross, CEO of Sensibill. “Our Receipt Extraction API offering uses smart technology to extract receipts in bulk with speed and precision. At Sensibill, we are proven experts in SKU-level data; it’s what we’ve focused on for the past seven years and why leading institutions and digital banking and core providers across the globe have partnered with us. We are excited to help a broader range of organizations as they work to quickly and efficiently unlock the power of SKU-level data to drive deeper digital engagement and loyalty with their customers.”

Sensibill’s combination of deep SKU-level data expertise and leading AI and machine learning technology makes it uniquely positioned to deliver this solution to the market. Receipt Extraction API is powered by multi-brain processing, leveraging multiple OCR engines and machine learning models to maximize accuracy. And, the solution is intuitive and easily deployable, allowing businesses to quickly and nimbly test and implement. To best position businesses for success, Sensibill offers customers strategic account management support and white-glove service for extraction capabilities as needed.


Author: ProgrammableWeb PR


Chemists make cellular forces visible at the molecular scale

Scientists have developed a new technique using tools made of luminescent DNA, lit up like fireflies, to visualize the mechanical forces of cells at the molecular level. Nature Methods published the work, led by chemists at Emory University, who demonstrated their technique on human blood platelets in laboratory experiments.

“Normally, an optical microscope cannot produce images that resolve objects smaller than the length of a light wave, which is about 500 nanometers,” says Khalid Salaita, Emory professor of chemistry and senior author of the study. “We found a way to leverage recent advances in optical imaging along with our molecular DNA sensors to capture forces at 25 nanometers. That resolution is akin to being on the moon and seeing the ripples caused by raindrops hitting the surface of a lake on the Earth.”

Almost every biological process involves a mechanical component, from cell division to blood clotting to mounting an immune response. “Understanding how cells apply forces and sense forces may help in the development of new therapies for many different disorders,” says Salaita, whose lab is a leader in devising ways to image and map bio-mechanical forces.

The first authors of the paper, Joshua Brockman and Hanquan Su, did the work as Emory graduate students in the Salaita lab. Both recently received their PhDs.

The researchers turned strands of synthetic DNA into molecular tension probes that contain hidden pockets. The probes are attached to receptors on a cell’s surface. Free-floating pieces of DNA tagged with fluorescence serve as imagers. As the unanchored pieces of DNA whizz about, they create streaks of light in microscopy videos.

When the cell applies force at a particular receptor site, the attached probes stretch out, causing their hidden pockets to open and release the tendrils of DNA stored inside. The free-floating pieces of DNA are engineered to dock onto these DNA tendrils. When the fluorescent DNA pieces dock, they are briefly immobilized, showing up as still points of light in the microscopy videos.

Hours of microscopy video are taken of the process, then sped up to show how the points of light change over time, providing a molecular-level view of the mechanical forces of the cell.

The researchers use a firefly analogy to describe the process.

“Imagine you’re in a field on a moonless night and there is a tree that you can’t see because it’s pitch black out,” says Brockman, who graduated from the Wallace H. Coulter Department of Biomedical Engineering, a joint program of Georgia Tech and Emory, and is now a post-doctoral fellow at Harvard. “For some reason, fireflies really like that tree. As they land on all the branches and along the trunk of the tree, you could slowly build up an image of the outline of the tree. And if you were really patient, you could even detect the branches of the tree waving in the wind by recording how the fireflies change their landing spots over time.”

“It’s extremely challenging to image the forces of a living cell at a high resolution,” says Su, who graduated from Emory’s Department of Chemistry and is now a post-doctoral fellow in the Salaita lab. “A big advantage of our technique is that it doesn’t interfere with the normal behavior or health of a cell.”

Another advantage, he adds, is that DNA bases of A, G, T and C, which naturally bind to one another in particular ways, can be engineered within the probe-and-imaging system to control specificity and map multiple forces at one time within a cell.

“Ultimately, we may be able to link various mechanical activities of a cell to specific proteins or to other parts of cellular machinery,” Brockman says. “That may allow us to determine how to alter the cell to change and control its forces.”

By using the technique to image and map the mechanical forces of platelets, the cells that control blood clotting at the site of a wound, the researchers discovered that platelets have a concentrated core of mechanical tension and a thin rim that continuously contracts. “We couldn’t see this pattern before but now we have a crisp image of it,” Salaita says. “How do these mechanical forces control thrombosis and coagulation? We’d like to study them more to see if they could serve as a way to predict a clotting disorder.”

Just as increasingly high-powered telescopes allow us to discover planets, stars and the forces of the universe, higher-powered microscopy allows us to make discoveries about our own biology.

“I hope this new technique leads to better ways to visualize not just the activity of single cells in a laboratory dish, but to learn about cell-to-cell interactions in actual physiological conditions,” Su says. “It’s like opening a new door onto a largely unexplored realm — the forces inside of us.”

Co-authors of the study include researchers from Children’s Healthcare of Atlanta, Ludwig Maximilian University in Munich, the Max Planck Institute and the University of Alabama at Birmingham. The work was funded by grants from the National Institutes of Health, the National Science Foundation, the Naito Foundation and the Uehara Memorial Foundation.



Biologists create new genetic systems to neutralize gene drives

In the past decade, researchers have engineered an array of new tools that control the balance of genetic inheritance. Based on CRISPR technology, such gene drives are poised to move from the laboratory into the wild, where they are being engineered to suppress devastating mosquito-borne diseases such as malaria, dengue, Zika, chikungunya, yellow fever and West Nile. Gene drives carry the power to immunize mosquitoes against malarial parasites, or act as genetic insecticides that reduce mosquito populations.

Although the newest gene drives have been proven to spread efficiently as designed in laboratory settings, concerns have been raised regarding the safety of releasing such systems into wild populations. Questions have emerged about the predictability and controllability of gene drives and whether, once let loose, they can be recalled in the field if they spread beyond their intended application region.

Now, scientists at the University of California San Diego and their colleagues have developed two new active genetic systems that address such risks by halting or eliminating gene drives in the wild. On Sept. 18, 2020, in the journal Molecular Cell, research led by Xiang-Ru Xu, Emily Bulger and Valentino Gantz in the Division of Biological Sciences offers two new solutions based on elements developed in the common fruit fly.

“One way to mitigate the perceived risks of gene drives is to develop approaches to halt their spread or to delete them if necessary,” said Distinguished Professor Ethan Bier, the paper’s senior author and science director for the Tata Institute for Genetics and Society. “There’s been a lot of concern that there are so many unknowns associated with gene drives. Now we have saturated the possibilities, both at the genetic and molecular levels, and developed mitigating elements.”

The first neutralizing system, called e-CHACR (erasing Constructs Hitchhiking on the Autocatalytic Chain Reaction), is designed to halt the spread of a gene drive by “shooting it with its own gun.” An e-CHACR uses the CRISPR enzyme Cas9 carried on a gene drive to copy itself, while simultaneously mutating and inactivating the Cas9 gene. Xu says an e-CHACR can be placed anywhere in the genome.

“Without a source of Cas9, it is inherited like any other normal gene,” said Xu. “However, once an e-CHACR confronts a gene drive, it inactivates the gene drive in its tracks and continues to spread across several generations ‘chasing down’ the drive element until its function is lost from the population.”

The second neutralizing system, called ERACR (Element Reversing the Autocatalytic Chain Reaction), is designed to eliminate the gene drive altogether. ERACRs are designed to be inserted at the site of the gene drive, where they use the Cas9 from the gene drive to attack either side of the Cas9, cutting it out. Once the gene drive is deleted, the ERACR copies itself and replaces the gene drive.

“If the ERACR is also given an edge by carrying a functional copy of a gene that is disrupted by the gene drive, then it races across the finish line, completely eliminating the gene drive with unflinching resolve,” said Bier.

The researchers rigorously tested and analyzed e-CHACRs and ERACRs, as well as the resulting DNA sequences, in meticulous detail at the molecular level. Bier estimates that the research team, which includes mathematical modelers from UC Berkeley, spent a combined 15 years of effort to comprehensively develop and analyze the new systems. Still, he cautions there are unforeseen scenarios that could emerge, and the neutralizing systems should not be used with a false sense of security for field-implemented gene drives.

“Such braking elements should just be developed and kept in reserve in case they are needed since it is not known whether some of the rare exceptional interactions between these elements and the gene drives they are designed to corral might have unintended activities,” he said.

According to Bulger, gene drives have enormous potential to alleviate suffering, but responsibly deploying them depends on having control mechanisms in place should unforeseen consequences arise. ERACRs and eCHACRs offer ways to stop the gene drive from spreading and, in the case of the ERACR, can potentially revert an engineered DNA sequence to a state much closer to the naturally-occurring sequence.

“Because ERACRs and e-CHACRs do not possess their own source of Cas9, they will only spread as far as the gene drive itself and will not edit the wild type population,” said Bulger. “These technologies are not perfect, but we now have a much more comprehensive understanding of why and how unintended outcomes influence their function and we believe they have the potential to be powerful gene drive control mechanisms should the need arise.”



How to Create High-performance, Scalable Content Websites Using MACH Technologies

Websites are easy to build these days. An abundance of tools lets you create a website in minutes. However, building websites that are fast, scalable, and flexible, and that deliver superior performance, is far more complex than creating a simple site. This is especially true when developing content-heavy websites, such as news sites, knowledge-base platforms, online magazines, and communities.

In general, content-heavy websites are likely to have hundreds or even thousands of pages, with new content added every day. They may also attract high traffic as they act as a body of knowledge hosting not just text content but also other media resources such as research reports, interactive maps, videos, images, calculators for consumers, or other dynamic tools. Consequently, they require a structure that supports quick publishing and accommodates frequent changes in content models and functionalities. 

Developing and maintaining a massive website that delivers super-fast performance for every visitor interaction requires meticulous planning, a well-designed architecture, and modern technologies.

Adopting a MACH approach is one of the effective ways to implement this. MACH stands for microservices, API-first, cloud-native, and headless technologies. It promotes having an architecture where most components are scalable and pluggable, enabling continuous improvement and easy replacement of modules without impacting the performance of others. 

This article shows how you can harness the power of different MACH and serverless technologies to develop and maintain a high-performance content-heavy website.

Use APIs for Content Management, Content Delivery, and to Connect to Other Apps 

With the advent of new IoT technologies, companies now have more ways and channels to connect and engage with customers. However, the underlying technology needs to be robust and flexible enough to support the channels of today and tomorrow.

Content on most devices today can be powered by APIs. Therefore, it makes sense to use an API-based headless content management system that provides content as a service. Such CMSs are backend-only, frontend-agnostic platforms, so you can attach any frontend and deliver content through APIs. They give developers full control over how content is presented and allow integration with third-party apps.
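The API-first idea can be sketched in a few lines. This is a minimal illustration, not any specific vendor's API: the base URL, path scheme, and query parameters are hypothetical, standing in for whatever content-delivery endpoint your headless CMS exposes.

```javascript
// Minimal sketch of an API-first content fetch against a hypothetical
// headless CMS. Endpoint and parameter names are illustrative assumptions.
const CMS_BASE = "https://api.example-cms.com/v1";

// Build the content-delivery URL for a given content type and filters.
function buildContentUrl(contentType, params = {}) {
  const query = new URLSearchParams(params).toString();
  return `${CMS_BASE}/entries/${encodeURIComponent(contentType)}` +
    (query ? `?${query}` : "");
}

// Any frontend (web, mobile, IoT) can consume the same endpoint:
async function fetchArticles() {
  const url = buildContentUrl("article", { locale: "en-US", limit: "10" });
  const res = await fetch(url); // fetch is global in browsers and Node 18+
  if (!res.ok) throw new Error(`CMS request failed: ${res.status}`);
  return res.json();
}
```

Because the backend only serves JSON over HTTP, swapping the frontend (or adding a new channel) requires no changes on the CMS side.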

Integrate Pluggable Apps With Microservices Architecture 

A microservices architecture is a modern approach that brings together loosely coupled, independently deployable services, making your application modular and agile. With this approach, it becomes easier to build, test, and deploy features or parts of your application.

Each service in such a setup has an API to communicate with the rest and its own database, making it truly decoupled. This separation ensures that changes or issues in one service don’t impact another, and that a service can be replaced immediately without downtime.

This approach works well for a content-heavy website. It complements the cloud or serverless setup by enabling different teams to innovate rapidly, have greater control over the technologies, manage release cycles, and eventually cut down the time to market. 

Fortunately, due to rapid evolution in the SaaS space, all the services you need for a content site have API-based alternatives that can quickly form your application’s foundation. 


Optimize Content Delivery With CDN Caching 

Your website server exists at one physical location. Content needs to travel the distance to be delivered at another location. The farther the requester, the longer it takes to deliver the content. For instance, if your web server is in New Jersey, visitors in San Francisco will get the content faster than the visitors in Sydney, Australia. 

To avoid this lag and make your content delivery blazing fast, consider using a content delivery network (CDN). A CDN has a lot of network servers scattered across the globe. These servers save cached copies of your website content and act as distributors for visitors requesting content from nearby locations. For instance, visitors from Sydney will get the content from a nearby server (e.g., Melbourne) instead of New Jersey. 
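The routing idea behind a CDN can be illustrated with a toy nearest-edge picker. Real CDNs route via anycast, DNS, and live network metrics rather than raw distance, and the server locations below are made up for the example; this only sketches the principle that the closest copy wins.

```javascript
// Toy illustration of CDN routing: serve each visitor from the nearest
// edge server. Coordinates and server names are illustrative only.
const EDGE_SERVERS = [
  { name: "New Jersey", lat: 40.1, lon: -74.5 },
  { name: "San Francisco", lat: 37.8, lon: -122.4 },
  { name: "Melbourne", lat: -37.8, lon: 145.0 },
];

// Great-circle (haversine) distance in kilometers.
function distanceKm(a, b) {
  const rad = (d) => (d * Math.PI) / 180;
  const dLat = rad(b.lat - a.lat);
  const dLon = rad(b.lon - a.lon);
  const h = Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * 6371 * Math.asin(Math.sqrt(h));
}

function nearestEdge(visitor) {
  return EDGE_SERVERS.reduce((best, s) =>
    distanceKm(visitor, s) < distanceKm(visitor, best) ? s : best);
}

// A visitor in Sydney is served from Melbourne, not New Jersey:
nearestEdge({ lat: -33.9, lon: 151.2 }).name; // "Melbourne"
```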

For a large, content-heavy website, having a CDN is highly recommended. It eases the load on the server, reduces latency, and cuts the wait time for your visitors considerably. It also helps to protect your site against Denial of Service (DoS) attacks, which have the potential to bring your site down.

Go With Serverless Infrastructure for Quick Scaling and Easy Management 

While a microservices architecture is much more flexible and scalable than a traditional or monolithic one, an app built using the former approach is no good if it uses a legacy infrastructure that is unable to scale efficiently. 

It makes much more business sense to move to serverless computing, where the cloud provider handles infrastructure concerns such as server capacity and scalability. The provider is responsible for provisioning, scaling, and managing the infrastructure as needed, and you purchase backend services on a “pay-as-you-go” model.

This serverless approach ensures that your developers can focus more on writing code and developing features for the application, and worry less about the underlying infrastructure or scalability. Such a model can help you cope with demand spikes on your content-heavy website and ensure high performance.

Choose Scalable Presentation or Frontend Tools

If you adopt MACH technologies for your website, you will most likely use a headless content management system (CMS) to manage content and deliver it to your web application via APIs. With a headless CMS, the frontend (presentation layer) is separate from the CMS backend, making it possible to choose any front-end technology that suits your needs. 

When making this choice, it’s important to remember that your frontend needs to be flexible, scalable, and fast, to accommodate the future requirements that the rapid evolution in technology is likely to bring. 

Another viable option is adopting a JAMstack architecture. It’s a modern way of building websites that are fast, secure, and quickly scalable. Some of the popular JAMstack frameworks are Gatsby, Next.js, and Gridsome.

In conclusion 

By adopting a MACH and serverless architecture, each component of your website has a clearly-defined task, enabling better performance as a whole. The pluggable design allows you to replace components as the technology evolves, thereby future-proofing applications. And finally, the serverless infrastructure provides all the scalability and security you need for your application. With such a solid foundation, a content-heavy website of any scale can deliver peak performance.

Author: MishraMayank


Four Tips Developers Should Follow When Building Location-Based Apps

As the Internet continues to evolve and mature, so do development tools and the technologists who use them. But what does a more mature approach to application development look like today? How do application developers manage growing complexity, and deliver stellar experiences for their users? 

Apps that use location data present extra challenges. Users expect seamless experiences with up-to-date, reliable data – a significant design and engineering challenge. To help developers, here are my four best practices for building location-based apps.

Ease the burden with APIs

Developers building an application or feature that requires the use of external or third-party data are often faced with the problem of having to download large datasets from which relevant data must be extracted. Geospatial information, for example, is often delivered in sizable datasets, which must be manually processed, stored, managed, and updated whenever the provider releases a new version. This technical overhead requires a significant investment of a developer’s time, making many potentially valuable datasets infeasible to use.

A much easier and more targeted solution is to consume the data that’s needed, when it’s needed, which is where APIs come into play. Not only are APIs often a more efficient way for users to consume data, they also tend to lower the total cost of building an app. The API provider hosts the systems that contain the relevant data and takes charge of the updates and management, freeing up time for developers to focus on other tasks. APIs also help ensure that the most up-to-date data is being consumed, which can often define the entire proposition of an app, especially if it’s one that relies on accurate location information. 

Mapping data visualizations, for example, often fulfill their purpose only if they reflect the most recent data. APIs can ease the burden on developers by offering access to reliable, well-maintained data sources. 
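The consume-only-what-you-need pattern typically looks like a parameterized request rather than a bulk download. The endpoint and parameter names below are hypothetical, sketching a common bounding-box query against a geospatial API that returns GeoJSON.

```javascript
// Instead of downloading and storing an entire dataset, request only the
// slice needed, when it's needed. The endpoint is a made-up example.
function buildBboxQuery(baseUrl, bbox) {
  // bbox: [west, south, east, north] in decimal degrees
  return `${baseUrl}?bbox=${bbox.join(",")}&format=geojson`;
}

// Fetch only features around central London rather than a whole-country file:
buildBboxQuery("https://geodata.example.com/features", [-0.2, 51.4, 0.1, 51.6]);
// -> "https://geodata.example.com/features?bbox=-0.2,51.4,0.1,51.6&format=geojson"
```

The provider keeps the dataset current on its side; each request returns fresh data with no local update pipeline to maintain.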

Validate before you integrate  

Application development today necessitates the use of a variety of datasets, many of which come in incompatible formats. In fact, the chances that all required datasets are interoperable are very low, which could mean a significant amount of heavy lifting at the data integration stage. When it comes to geospatial data on the web, for example, the de facto standard for vector features is GeoJSON. However, the spatial data developers need might come as shapefiles, which are designed for GIS applications, or in other formats such as Geography Markup Language (GML), KML, GPX, encoded polylines, and vector tiles, each requiring special tools to work with.

Researching datasets, formats and libraries is not a controversial tip by any means, but it bears emphasizing. Planning and validating use cases before breaking ground on code will save time, cost and sanity. After all, it’s easier to change a wire frame than to rewrite code – or worse, realize that you have chosen to incorporate a poorly supported JavaScript library or an incomplete data source. 

Every developer knows well the pain of having to solve a seemingly entirely unique problem – and the joy of finding a complex problem has already been solved. There are many active and thriving open source communities online that can provide support for spatial web developers. GeoJSON files can be easily and natively visualized with the popular JavaScript mapping libraries, like Leaflet, Mapbox GL JS and OpenLayers – there is a wealth of information online on doing so. Other formats do not have the same support, which may make the process of converting an incompatible format into one that can be easily integrated with the rest of an application’s tech stack more difficult. Tools like mapshaper, QGIS, GDAL, and Turf.js can help developers convert between formats, reproject coordinates, and perform spatial analytics and manipulation processes. 
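To make the GeoJSON advantage concrete, here is a minimal sketch of wrapping raw coordinate pairs into a GeoJSON Feature that libraries like Leaflet, Mapbox GL JS, or OpenLayers can render directly. The coordinates and property names are illustrative.

```javascript
// Wrap raw coordinate pairs in a GeoJSON Feature.
// Note: GeoJSON coordinates are [longitude, latitude] order.
function toLineFeature(coords, properties = {}) {
  return {
    type: "Feature",
    geometry: { type: "LineString", coordinates: coords },
    properties,
  };
}

const route = toLineFeature(
  [[-0.1276, 51.5072], [2.3522, 48.8566]], // London -> Paris
  { name: "demo route" }
);

// With Leaflet loaded on a page, rendering is one call:
//   L.geoJSON(route).addTo(map);
```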

It’s a cliché, but working smarter, not harder, is what developers should strive for. 

Managing asynchronicity

The web presents some interesting challenges for developers. For one, we don’t know how long some processes will take. For example, how quickly data is fetched from an API depends on bandwidth, the amount of data, the server, and so on. This is especially relevant in apps using spatial data as datasets are often fetched from external APIs, and are often sizable.

This is complicated by the fact that JavaScript code often relies on assets loaded earlier – to use a variable x, x has to be declared and assigned a value. When that assignment operation takes an indeterminate amount of time, how do you know when to proceed to computing x + 1?

Fortunately, the JavaScript community has designed a sophisticated suite of solutions to this problem. A promise holds the place of a value that is not yet known – the eventual result of an asynchronous operation. When the operation finishes, the promise is resolved – or rejected. By chaining promises together, programmers can create programs that handle asynchronous operations efficiently and cleanly. 

Building on promises, ECMAScript 2017’s async/await syntax makes “asynchronous code easier to write and to read afterwards”, enhancing the toolbelt developers can use to deal with these asynchronous operations.

To write performant apps, developers often need to fetch data from multiple sources and handle asynchronous operations without waiting for one process to finish before starting another – that is, they need to run operations concurrently. One tool for this is the Promise.all() method, which takes an array of promises and resolves only once every one of them has resolved (rejecting as soon as any one rejects). 
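The patterns above can be sketched in a few lines. The network call is simulated with a timer here so the example is self-contained; layer names and delays are made up.

```javascript
// A promise-returning fetch stub: the delay stands in for an
// unpredictable network round trip.
function fakeFetch(name, ms) {
  return new Promise((resolve) => setTimeout(() => resolve(`${name} data`), ms));
}

async function loadLayers() {
  // Both requests start immediately and run concurrently; Promise.all
  // resolves only when every promise in the array has resolved.
  const [roads, parcels] = await Promise.all([
    fakeFetch("roads", 30),
    fakeFetch("parcels", 10),
  ]);
  // Execution resumes here only once both values are known - the
  // "when can I compute x + 1?" problem handled cleanly.
  return { roads, parcels };
}

loadLayers().then((layers) => console.log(layers.roads)); // "roads data"
```

Awaiting each fakeFetch in sequence would instead take the sum of the delays; with Promise.all the total wait is only as long as the slowest request.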

Understanding tooling and techniques is therefore essential. When it comes to asynchronous data, for example, JavaScript has a lot of data management tools built into the language itself, which can vastly reduce the potential complexity, improve performance, and result in better applications.

Don’t neglect platforms

A key challenge for designers and developers is creating a coherent experience across different platforms. Building an incredible data visualization feature for a webpage might work well on a laptop screen, but how well does this translate when viewed on a smartphone? For B2B applications, use cases are generally geared toward users sitting at a PC in an office. But increasingly, compatibility with portable devices, such as smartphones, is a requirement. 

For GIS developers, making compelling and usable mapping data visualizations that work on desktop and touchscreen devices of all sizes can be challenging. The trick is to design your essential interface interactions for touch first. This means initially excluding mouse-rollover, or right-click interactions. You can add those interactions later, but only for non-essential actions like shortcuts to things that are otherwise available through other click/touch-events. This is a tough ask for dense interfaces like GIS applications that often rely on right-click menus and rollovers to expose contextual information about a geographical feature.

A reduced set of interaction events is an integral part of the “mobile-first” design philosophy. It goes hand in hand with small screen real estate, and finger-sized hit-areas. 

Of course, it’s not possible to ignore mobile users, so design thinking must inform the early stages of any application that requires compelling data visualizations. Sometimes – especially with mapping visualizations meant to support routing or wayfinding – a mobile-first approach should be taken. Either way, think through the user needs early, and solicit feedback through regular user testing. 

So, there you have it. Just a few tips to consider before embarking on your journey to creating a location-based application.

Author: johnx25bd


New tools catch and release molecules at the flip of a light switch

A Princeton team has developed a class of light-switchable, highly adaptable molecular tools with new capabilities to control cellular activities. The antibody-like proteins, called OptoBinders, allow researchers to rapidly control processes inside and outside of cells by directing their localization, with potential applications including protein purification, the improved production of biofuels, and new types of targeted cancer therapies.

In a pair of papers published Aug. 13 in Nature Communications, the researchers describe the creation of OptoBinders that can specifically latch onto a variety of proteins both inside and outside of cells. OptoBinders can bind or release their targets in response to blue light. The team reported that one type of OptoBinder changed its affinity for its target molecules up to 330-fold when shifted from dark to blue light conditions, while others showed a five-fold difference in binding affinity — all of which could be useful to researchers seeking to understand and engineer the behaviors of cells.

Crucially, OptoBinders can target proteins that are naturally present in cells, and their binding is easily reversible by changing light conditions — “a new capability that is not available to normal antibodies,” said co-author José Avalos, an assistant professor of chemical and biological engineering and the Andlinger Center for Energy and the Environment. “The ability to let go [of a target protein] is actually very valuable for many applications,” said Avalos, including engineering cells’ metabolisms, purifying proteins or potentially making biotherapeutics.

The new technique is the latest in a collaboration between Avalos and Jared Toettcher, an assistant professor of molecular biology. Both joined the Princeton faculty in 2015, and soon began working together on new ways to apply optogenetics — a set of techniques that introduce genes encoding light-responsive proteins to control cells’ behaviors.

“We hope that this is going to be the beginning of the next era of optogenetics, opening the door to light-sensitive proteins that can interface with virtually any protein in biology, either inside or outside of cells,” said Toettcher, the James A. Elkins, Jr. ’41 Preceptor in Molecular Biology.

Avalos and his team hope to use OptoBinders to control the metabolisms of yeast and bacteria to improve the production of biofuels and other renewable chemicals, while Toettcher’s lab is interested in the molecules’ potential to control signaling pathways involved in cancer.

The two papers describe different types of light-switchable binders: opto-nanobodies and opto-monobodies. Nanobodies are derived from the antibodies of camelids, the family of animals that includes camels, llamas and alpacas, which produce some antibodies that are smaller (hence the name nanobody) and simpler in structure than those of humans or other animals.

Nanobodies’ small size makes them more adaptable and easier to work with than traditional antibodies; they recently received attention for their potential as a COVID-19 therapy. Monobodies, on the other hand, are engineered pieces of human fibronectin, a large protein that forms part of the matrix between cells.

“These papers go hand in hand,” said Avalos. “The opto-nanobodies take advantage of the immune systems of these animals, and the monobodies have the advantage of being synthetic, which gives us opportunities to further engineer them in different ways.”

The two types of OptoBinders both incorporate a light-sensitive domain from a protein found in oat plants.

“When you turn the light on and off, these tools bind and release their target almost immediately, so that brings another level of control” that was not previously possible, said co-author César Carrasco-López, an associate research scholar in Avalos’ lab. “Whenever you are analyzing things as complex as metabolism, you need tools that allow you to control these processes in a complex way in order to understand what is happening.”

In principle, OptoBinders could be engineered to target any protein found in a cell. With most existing optogenetic systems, “you always had to genetically manipulate your target protein in a cell for each particular application,” said co-author Agnieszka Gil, a postdoctoral research fellow in Toettcher’s lab. “We wanted to develop an optogenetic binder that did not depend on additional genetic manipulation of the target protein.”

In a proof of principle, the researchers created an opto-nanobody that binds to actin, a major component of the cytoskeleton that allows cells to move, divide and respond to their environment. The opto-nanobody strongly bound to actin in the dark, but released its hold within two minutes in the presence of blue light. Actin proteins normally join together to form filaments just inside the cell membrane and networks of stress fibers that traverse the cell. In the dark, the opto-nanobody against actin binds to these fibers; in the light, these binding interactions are disrupted, causing the opto-nanobody to scatter throughout the cell. The researchers could even manipulate binding interactions on just one side of a cell — a level of localized control that opens new possibilities for cell biology research.

OptoBinders stand to unlock scores of innovative, previously inaccessible uses in cell biology and biotechnology, said Andreas Möglich, a professor of biochemistry at the University of Bayreuth in Germany who was not involved in the studies. But, Möglich said, “there is much more to the research” because the design strategy can be readily translated to other molecules, paving the way to an even wider repertoire of customized, light-sensitive binders.

“The impressive results mark a significant advance,” he said.

“Future applications will depend on being able to generate more OptoBinders” against a variety of target proteins, said Carrasco-López. “We are going to try to generate a platform so we can select OptoBinders against different targets” using a standardized, high-throughput protocol, he said, adding that this is among the first priorities for the team as they resume their experiments after lab research was halted this spring due to COVID-19.

Beyond applications that involve manipulating cell metabolism for microbial chemical production, Avalos said, OptoBinders could someday be used to design biomaterials whose properties can be changed by light.

The technology also holds promise as a way to reduce side effects of drugs by focusing their action on a specific site in the body or adjusting dosages in real time, said Toettcher, who noted that applying light inside the body would require a device such as an implant. “There aren’t many ways to do spatial targeting with normal pharmacology or other techniques, so having that kind of capability for antibodies and therapeutic binders would be a really cool thing,” he said. “We think of this as a sea change in what sorts of processes can be placed under optogenetic control.”



Machine learning can predict market behavior

Machine learning can assess the effectiveness of mathematical tools used to predict the movements of financial markets, according to new Cornell research based on the largest dataset ever used in this area.

The researchers’ model could also predict future market movements, an extraordinarily difficult task because of markets’ massive amounts of information and high volatility.

“What we were trying to do is bring the power of machine learning techniques to not only evaluate how well our current methods and models work, but also to help us extend these in a way that we never could do without machine learning,” said Maureen O’Hara, the Robert W. Purcell Professor of Management at the SC Johnson College of Business.

O’Hara is co-author of “Microstructure in the Machine Age,” published July 7 in The Review of Financial Studies.

“Trying to estimate these sorts of things using standard techniques gets very tricky, because the databases are so big. The beauty of machine learning is that it’s a different way to analyze the data,” O’Hara said. “The key thing we show in this paper is that in some cases, these microstructure features that attach to one contract are so powerful, they can predict the movements of other contracts. So we can pick up the patterns of how markets affect other markets, which is very difficult to do using standard tools.”

Markets generate vast amounts of data, and billions of dollars are at stake in mining that data for patterns to shed light on future market behavior. Companies on Wall Street and elsewhere employ various algorithms, examining different variables and factors, to find such patterns and predict the future.

In the study, the researchers used what’s known as a random forest machine learning algorithm to better understand the effectiveness of some of these models. They assessed the tools using a dataset of 87 futures contracts — agreements to buy or sell assets in the future at predetermined prices.

“Our sample is basically all active futures contracts around the world for five years, and we use every single trade — tens of millions of them — in our analysis,” O’Hara said. “What we did is use machine learning to try to understand how well microstructure tools developed for less complex market settings work to predict the future price process both within a contract and then collectively across contracts. We find that some of the variables work very, very well — and some of them not so great.”
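
The microstructure variables O’Hara describes are computed directly from raw trade records. As a toy illustration (not the paper’s code), the sketch below estimates one classic feature of this kind, order-flow imbalance, by classifying each trade as buyer- or seller-initiated with the tick rule; the function names and the tiny sample series are invented for the example.

```python
def tick_rule_signs(prices):
    """Classify each trade as buyer-initiated (+1) or seller-initiated (-1).

    Tick rule: an uptick is a buy, a downtick a sell; on a zero tick
    (no price change), reuse the previous classification.
    """
    signs, last = [], 1
    for prev, cur in zip(prices, prices[1:]):
        if cur > prev:
            last = 1
        elif cur < prev:
            last = -1
        signs.append(last)
    return signs

def order_flow_imbalance(prices, volumes):
    """Signed volume imbalance in [-1, 1] over the trade window."""
    signs = tick_rule_signs(prices)
    signed = sum(s * v for s, v in zip(signs, volumes[1:]))
    total = sum(volumes[1:])
    return signed / total if total else 0.0

# Invented five-trade sample: prices and traded volumes.
sample_prices = [100.0, 100.1, 100.1, 100.0, 100.2]
sample_volumes = [5, 10, 3, 4, 8]
print(round(order_flow_imbalance(sample_prices, sample_volumes), 3))  # → 0.68
```

A strongly positive value suggests buying pressure dominated the window; features like this, computed per contract, are the kind of inputs a random forest can weigh against realized price moves.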

Machine learning has long been used in finance, but typically as a so-called “black box” — in which an artificial intelligence algorithm uses reams of data to predict future patterns but without revealing how it makes its determinations. This method can be effective in the short term, O’Hara said, but sheds little light on what actually causes market patterns.

“Our use for machine learning is: I have a theory about what moves markets, so how can I test it?” she said. “How can I really understand whether my theories are any good? And how can I use what I learned from this machine learning approach to help me build better models and understand things that I can’t model because it’s too complex?”

Huge amounts of historical market data are available — every trade has been recorded since the 1980s — and vast volumes of information are generated every day. Increased computing power and greater availability of data have made it possible to perform more fine-grained and comprehensive analyses, but these datasets, and the computing power needed to analyze them, can be prohibitively expensive for scholars.

In this research, finance industry practitioners partnered with the academic researchers to provide the data and the computers for the study as well as expertise in machine learning algorithms used in practice.

“This partnership brings benefits to both,” said O’Hara, adding that the paper is one in a line of research she has completed with co-authors David Easley and Marcos Lopez de Prado over the last decade. “It allows us to do research in ways generally unavailable to academic researchers.”

Story Source:

Materials provided by Cornell University. Original written by Melanie Lefkowitz. Note: Content may be edited for style and length.



Firebase Release New Productivity and Connectivity Tools

The Firebase team has been busy producing new tools and product updates. Since launching Firebase Live last month to share new products, productivity tips, and tutorials, the company has released codelabs, walkthrough videos, interactive demos, and tons of product updates. Here are the highlights.

Firebase recently launched the local emulator UI in beta. The new offering came after developers requested a visual tool to complement the Firebase Emulator Suite. With the local emulator UI, users can run services locally on a machine and manage them through a web-based interface.

The Emulator Suite supports instant code reload of security rules: the main line of defense between databases and untrustworthy clients. In the same vein, Firebase added other improvements to writing, debugging, and monitoring security rules. Further, it made the rules language more expressive, streamlined rule logic, and added local variables.

Next, Firebase Authentication offers a complete, customizable, end-to-end identity solution in less than 10 lines of code. It supports many popular identity providers including email and password, phone authentication, Facebook, Google, Twitter, and more. The last release added beta support for Sign in with Apple.

Firebase launched Extensions last year and has since received numerous requests for additional extensions. Stripe is a popular request, and Firebase has now built two new Stripe extensions. Send Invoices with Stripe lets users programmatically create and send branded customer invoices using the Stripe payments platform. Run Subscription Payments with Stripe lets users create and sync subscriptions for web users with Stripe, as well as control access to subscription content via Firebase Authentication.

The ML Model Management API allows developers to deploy models programmatically instead of going through the Firebase console. This is especially useful when a machine learning pipeline automatically retrains models with new data. Learn more about how to use Firebase to enhance TensorFlow Lite deployments and try out the new codelabs (Android version or iOS version).

Firebase Remote Config allows users to dynamically alter app behavior and appearance without having to publish a new app version. New features to Remote Config provide a better understanding of active app configuration and better organizational and targeting tools.

Finally, users can stream Firebase Crashlytics data into BigQuery. This makes for better logging, analysis, and troubleshooting. Alerts occur in real time, and teams can automate much of the work needed for release monitoring. Stay up to date with all Firebase announcements and updates through Firebase Live.

Author: ecarter


Google Debuts Actions SDK for Google Assistant

Google this week made more tools available to developers for customizing app interactions with Google Assistant. By releasing Actions Builder and Actions SDK, Google says developers can build their own conversational actions for Assistant faster than ever. Here’s what you need to know.

Google says there are more than 500 million active users of Google Assistant across some 90 countries. Google allows developers to tie their own voice-enabled apps and services to Assistant in ways that benefit users. That’s why it wants to make improvements to the way conversational actions work. 

Actions Builder, the first of the new tools, is a web-based IDE that allows developers to develop, test, and deploy their apps straight from the Actions console. Google says the refreshed graphical interface gives developers the power to visualize conversations, manage natural language understanding, and debug. 

The Actions SDK takes the web-powered tools of Actions Builder and brings them to the local level. It offers a file-based representation of Actions projects, which lets developers create natural language understanding training data, manage conversation flows, and import training data in bulk. Google says it updated the accompanying CLI so developers can build Actions entirely with code. 

Together, these two tools should cover the bases developers need when creating Actions for Google Assistant. Moreover, Google says developers can switch from one environment to the other to suit changing workflow needs. Codelabs, samples, and documentation are available to assist developers.

Google has more in store for developers and Assistant. It also added functionality to Home Storage and updated the Media API and Continuous Match Mode. 

Home Storage is a brand new feature that provides communal storage for Assistant devices connected to the home graph. Developers can save content for each individual user of the Assistant device, allowing for things such as saving the last played point in a game for every individual in a household. 
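
The storage shape Google describes — one communal store per household, with per-user entries inside it — can be pictured with a minimal pure-Python sketch. The structure and function names below are hypothetical illustrations, not the actual Actions SDK API.

```python
# Hypothetical data model: a single dict shared by the household
# (the "home graph"), with each user's saved state kept under
# their own key, e.g. the last played point in a game.
home_storage = {}

def save_progress(user_id, game, checkpoint):
    """Record a user's checkpoint in the shared household store."""
    home_storage.setdefault(user_id, {})[game] = checkpoint

def resume_point(user_id, game, default=0):
    """Look up where a given household member left off."""
    return home_storage.get(user_id, {}).get(game, default)

save_progress("alice", "trivia_game", 7)
print(resume_point("alice", "trivia_game"))  # → 7
print(resume_point("bob", "trivia_game"))    # → 0 (bob hasn't played)
```

The design choice to key by user inside a shared store is what lets every member of the household resume independently on the same device.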

The updated Media API now supports longer-form media sessions, meaning users can resume playback of select content across devices. For example, people can pick up a song or video where they left off, or start playback from a chosen point.

Last, Continuous Match Mode lets Assistant respond immediately to users’ speech. This is meant to facilitate more fluid experiences, as Assistant can recognize words and phrases defined by developers. To support this, the device’s microphone stays open temporarily so users can speak when they are ready, without sitting through additional prompts from Assistant or the app.

Author: EricZeman


Salesforce, MuleSoft, and Tableau Unleash COVID-19 Data and Dev Resources

Salesforce, along with its subsidiaries MuleSoft and Tableau, recently launched a collection of tools designed to aid organizations in the development of safe back-to-work strategies and emergency response projects in an effort to mitigate the effects of the COVID-19 pandemic. These new offerings include a COVID-19 Data Platform and a Crisis Response Developer Portal.

(Disclosure: MuleSoft was acquired by Salesforce in 2018 and MuleSoft is the parent company to ProgrammableWeb)

ProgrammableWeb spoke with Liam Doyle (Senior Vice President, Product Management at Salesforce) and Uri Sarid (MuleSoft CTO) about the announcement. They discussed some of the challenges facing all parties involved in combating the pandemic, and how they believe the new Data Platform and Crisis Response Developer Portal can help accelerate the development of related solutions. Doyle described Salesforce’s role in several communities and how data fragmentation can be a limiting factor in developing impactful solutions:

“When we think about the communities that we serve as Salesforce, the analyst community through Tableau, the developer community through MuleSoft, and through the Salesforce platform, we just noticed that the data that people are going to need to increasingly rely on in order to make good decisions as we go through the next phase of this crisis was highly fragmented.”

Doyle went on to highlight three of the platform’s initial data sources (The New York Times, EUCDC, and the COVID Tracking Project) and noted that they all have entirely different schemas. As a result, aggregating this data can be challenging for organizations to get right; the hope is that, by handling this initial roadblock, Salesforce can give its users a jumpstart on advancing solutions.

Doyle acknowledged that the aggregated data is only as valuable as its sources. When asked how the initial sources were selected, he noted that the choices were based on “… feedback we had from organizations, whether they were Tableau analysts, customers in our Tableau community, or our own developers, about what were some of the primary data sources that were most important.”

The COVID-19 Data Platform is the summation of these efforts, including “the curation, the harmonization, standardization, storage, and then ultimately making that data available in all the formats it’s needed.” Salesforce seems committed to providing broad exposure to these resources, with the data currently available via the MuleSoft Anypoint Exchange, the Tableau Public repository, the AWS Data Exchange, and more. Additionally, there are plans for the data to be integrated internally, with specific mention of Salesforce Health Cloud.
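
The harmonization step Doyle describes can be pictured with a small sketch. The field names below are invented for illustration, not the platform’s actual schemas; the point is simply mapping differently shaped source records onto one common shape before aggregation or analysis.

```python
# Target shape every record is normalized to.
COMMON_FIELDS = ("date", "region", "cases")

def from_source_a(rec):
    # Hypothetical schema A: {"report_date", "state", "positive"}
    return {"date": rec["report_date"], "region": rec["state"],
            "cases": rec["positive"]}

def from_source_b(rec):
    # Hypothetical schema B: {"Date", "Region", "CumulativeCases"}
    return {"date": rec["Date"], "region": rec["Region"],
            "cases": rec["CumulativeCases"]}

def harmonize(records_a, records_b):
    """Map both feeds onto the common schema and pool the rows."""
    rows = [from_source_a(r) for r in records_a]
    rows += [from_source_b(r) for r in records_b]
    # Every row now has the same shape, regardless of origin.
    assert all(tuple(r) == COMMON_FIELDS for r in rows)
    return rows
```

Downstream consumers then query one schema instead of writing per-source parsing code — the “initial roadblock” the platform aims to remove.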

Currently, the platform features an endpoint dedicated to case tracking data, with plans to expand this data to support more granular analysis. Doyle specifically noted more detailed data from global markets including Japan, Australia, France, Germany, and Brazil. Moving forward, the plan is to add additional endpoints; Doyle told ProgrammableWeb that next up is a predictive data model coming via academic institutions like M.I.T., Los Alamos, and the University of Texas.

The Crisis Response Developer Portal not only leverages the APIs and data models produced via the COVID-19 Data Platform, but it also provides a centralized location for myriad developer resources ranging from healthcare integration assets to curated third-party APIs. Uri Sarid spoke to the general value of application networks in his explanation of the role that the Developer Portal plays:

“An application network is a situation where you have the assets that you need in order to connect things, and all you need to do is to connect them. So in that sense, putting out as many assets as possible and leaving them non-proprietary, makes sure that the assets are more likely to be there when somebody wants to consume something, they don’t have to worry about barriers to entry.”

Additionally, the plan is for the Crisis Response Developer Portal to feature customer projects. Sarid noted that “The intent is really to offer a place where partners and others can feel free to collaborate on sharing those resources.” Although no projects have been selected yet, thousands of visualizations have already been created using the data on Tableau.

Author: KevinSundstrom