Categories
ScienceDaily

New process for efficient removal of steroid hormones from water

Micropollutants contaminate water worldwide. Among them are steroid hormones that cannot be eliminated efficiently by conventional processes. Researchers at Karlsruhe Institute of Technology (KIT) have now developed an innovative filtration system that combines a polymer membrane with activated carbon. Because the carbon particles are very small, the system can reach the reference value of 1 nanogram of estradiol — the physiologically most effective estrogen — per liter of drinking water proposed by the European Commission. The improved method is reported in Water Research.

Supplying people with clean water is one of the biggest challenges of the 21st century worldwide. Drinking water is often contaminated with micropollutants, among them steroid hormones used as medications and contraceptives. Their concentration in a liter of water that receives treated wastewater may be only a few nanograms, but even this small amount can damage human health and affect the environment. Because of their low concentration and the small size of the molecules, steroid hormones are difficult not only to detect but also to remove. Conventional sewage treatment technologies are not sufficient.

Reference Value of the European Commission Is Reached

Professor Andrea Iris Schäfer, Head of KIT’s Institute for Advanced Membrane Technology (IAMT), and her team have now developed an innovative method for the quick and energy-efficient elimination of steroid hormones from wastewater. Their technology combines a polymer membrane with activated carbon. “First, water is pressed through a semipermeable membrane that eliminates larger impurities and microorganisms,” Schäfer explains. “Then, the water flows through the layer of carbon particles behind it, which binds the hormone molecules.” At IAMT, researchers have further developed and improved this process together with filter manufacturer Blücher GmbH, Erkrath. Colleagues at KIT’s Institute of Functional Interfaces (IFG), Institute for Applied Materials (IAM), and the Karlsruhe Nano Micro Facility (KNMF) supported this work by characterizing the material. The scientists report their results in Water Research. “Our technology allows us to reach the reference value of 1 nanogram of estradiol per liter of drinking water proposed by the European Commission,” says the Professor of Water Process Engineering.

Particle Size and Oxygen Concentration Are Important

The scientists studied the processes in the activated carbon layer in more detail, using modified carbon particles (polymer-based spherical activated carbon — PBSAC). “It all depends on the diameter of the carbon particles,” explains Matteo Tagliavini of IAMT, first author of the publication. “The smaller the particle diameter, the larger the external surface of the activated carbon layer available for adsorption of hormone molecules.” In an activated carbon layer 2 mm thick, the researchers decreased the particle diameter from 640 to 80 µm and succeeded in eliminating 96% of the estradiol contained in the water. By increasing the oxygen concentration in the activated carbon, they further improved the adsorption kinetics and achieved an estradiol separation efficiency of more than 99%. “The method allows for a high water flow rate at low pressure, is energy-efficient, and separates many molecules without producing any harmful by-products. It can be used flexibly in systems of variable size, from the tap to industrial facilities,” Schäfer says.

Story Source:

Materials provided by Karlsruher Institut für Technologie (KIT). Note: Content may be edited for style and length.


Categories
ProgrammableWeb

Four Tips Developers Should Follow When Building Location-Based Apps

As the Internet continues to evolve and mature, so do development tools and the technologists who use them. But what does a more mature approach to application development look like today? How do application developers manage growing complexity and deliver stellar experiences for their users?

Apps that use location data present extra challenges. Users expect seamless experiences with up-to-date, reliable data – a significant design and engineering challenge. To help developers, here are my four best practices for building location-based apps.

Ease the burden with APIs

Developers building an application or feature that requires external or third-party data often face the problem of having to download large datasets from which the relevant data must be extracted. Geospatial information, for example, is often delivered in sizable datasets, which must be manually processed, stored, managed, and updated whenever the provider releases new versions. This technical overhead requires a significant investment of a developer’s time, making many potentially valuable datasets infeasible to use.

A much easier and more targeted solution is to consume the data that’s needed, when it’s needed, which is where APIs come into play. Not only are APIs often a more efficient way for users to consume data, they also tend to lower the total cost of building an app. The API provider hosts the systems that contain the relevant data and takes charge of the updates and management, freeing up time for developers to focus on other tasks. APIs also help ensure that the most up-to-date data is being consumed, which can often define the entire proposition of an app, especially if it’s one that relies on accurate location information. 

Mapping data visualizations, for example, often fulfill their purpose only if they reflect the most recent data. APIs can ease the burden on developers by offering access to reliable, well-maintained data sources.
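To make the on-demand pattern concrete, here is a minimal sketch. The endpoint URL and its query parameters are hypothetical placeholders, not any real provider’s API; the point is that the app requests only the features it needs, when it needs them, instead of ingesting and maintaining a whole dataset.

```javascript
// Hypothetical geospatial API endpoint (placeholder, not a real service).
const ENDPOINT = 'https://api.example.com/v1/features';

// Request only the features near a point, at the moment they are needed.
async function fetchNearbyFeatures(lon, lat, radiusMeters) {
  const url = `${ENDPOINT}?lon=${lon}&lat=${lat}&radius=${radiusMeters}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  // Many geospatial APIs respond with GeoJSON, the de facto vector format.
  return response.json();
}
```

The provider remains responsible for storing and updating the underlying dataset; the app simply asks for the current slice of it on each call.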

Validate before you integrate  

Application development today necessitates the use of a variety of datasets, many of which come in incompatible formats. In fact, the chances that all required datasets are interoperable are very low, which could mean a significant amount of heavy lifting at the data integration stage. When it comes to geospatial data on the web, for example, the de facto standard for vector features is GeoJSON. However, the spatial data developers need might come as shapefiles, which are designed for GIS applications, or in other formats such as Geography Markup Language (GML), KML, GPX, encoded polylines, vector tiles and so on – each requiring special tools to work with.

Researching datasets, formats and libraries is not a controversial tip by any means, but it bears emphasizing. Planning and validating use cases before breaking ground on code will save time, cost and sanity. After all, it’s easier to change a wireframe than to rewrite code – or worse, to realize that you have incorporated a poorly supported JavaScript library or an incomplete data source.

Every developer knows the pain of having to solve a seemingly unique problem – and the joy of discovering that a complex problem has already been solved. There are many active and thriving open source communities online that can support spatial web developers. GeoJSON files can be visualized easily and natively with the popular JavaScript mapping libraries, such as Leaflet, Mapbox GL JS and OpenLayers – there is a wealth of information online on doing so. Other formats do not enjoy the same support, which can make converting an incompatible format into one that integrates easily with the rest of an application’s tech stack more difficult. Tools like mapshaper, QGIS, GDAL, and Turf.js can help developers convert between formats, reproject coordinates, and perform spatial analysis and manipulation.
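As a small illustration of that native support, the sketch below renders a GeoJSON object with Leaflet. It assumes the Leaflet script is already loaded and that `stations` holds a GeoJSON FeatureCollection the app has fetched; the variable and its `name` property are placeholders for your own data.

```javascript
// Assumes Leaflet (the global `L`) is loaded and the page has <div id="map">.
const map = L.map('map').setView([51.505, -0.09], 13);

// A standard raster basemap underneath the vector data.
L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);

// `stations` stands in for a GeoJSON FeatureCollection your app has loaded.
L.geoJSON(stations, {
  onEachFeature: (feature, layer) => {
    // Surface a property from the data when a feature is clicked or tapped.
    layer.bindPopup(feature.properties.name);
  }
}).addTo(map);
```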

It’s a cliché, but working smarter, not harder, is what developers should strive for. 

Managing asynchronicity

The web presents some interesting challenges for developers. For one, we don’t know how long some processes will take. How quickly data is fetched from an API, for example, depends on bandwidth, the amount of data, the server, and so on. This is especially relevant in apps using spatial data, since datasets are often fetched from external APIs and can be sizable.

This is complicated by the fact that JavaScript code often relies on assets loaded earlier – to use a variable x, x has to be declared and assigned a value. When that assignment operation takes an indeterminate amount of time, how do you know when to proceed to computing x + 1?

Fortunately, the JavaScript community has designed a sophisticated suite of solutions to this problem. A promise holds the place of a value that is not yet known – the eventual result of an asynchronous operation. When the operation finishes, the promise is resolved – or rejected. By chaining promises together, programmers can write programs that handle asynchronous operations efficiently and cleanly.
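A minimal sketch of that chaining pattern, using the browser’s fetch API (which returns a promise); the endpoint is again a placeholder:

```javascript
fetch('https://api.example.com/v1/features') // hypothetical endpoint
  .then(response => response.json()) // parsing the body is itself asynchronous
  .then(data => {
    // Runs only once the JSON has actually arrived and been parsed.
    console.log(`Loaded ${data.features.length} features`);
  })
  .catch(error => {
    // Runs if the request fails or either callback above throws.
    console.error('Failed to load features:', error);
  });
```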

Building on promises, ECMAScript 2017’s async/await syntax makes “asynchronous code easier to write and to read afterwards,” enhancing the toolbelt devs can use to deal with these asynchronous operations.
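For comparison, here is the same flow as the chain above rewritten with async/await; the endpoint remains a placeholder:

```javascript
async function loadFeatures() {
  try {
    // Execution pauses here without blocking the rest of the page.
    const response = await fetch('https://api.example.com/v1/features');
    const data = await response.json();
    console.log(`Loaded ${data.features.length} features`);
    return data;
  } catch (error) {
    // Catches network failures and parsing errors alike.
    console.error('Failed to load features:', error);
    throw error;
  }
}
```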

To write performant apps, developers often need to fetch data from multiple sources and handle several asynchronous operations without waiting for one to finish before starting the next – that is, they need to run operations concurrently. One tool for this is the Promise.all() method, which takes an array of promises and resolves only once every one of them has resolved (and rejects as soon as any one of them rejects).
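A sketch of that concurrent pattern for a map view that needs several independent layers; the three layer endpoints are hypothetical:

```javascript
async function loadLayers() {
  const urls = [
    'https://api.example.com/v1/roads',     // placeholder endpoints
    'https://api.example.com/v1/buildings',
    'https://api.example.com/v1/parks'
  ];
  // All three requests are issued immediately and run concurrently.
  const responses = await Promise.all(urls.map(url => fetch(url)));
  // Parse the three bodies concurrently as well.
  return Promise.all(responses.map(response => response.json()));
}
```

Because every request starts before any of them is awaited, the total wait is roughly that of the slowest request rather than the sum of all three.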

Understanding tooling and techniques is therefore essential. When it comes to asynchronous data, for example, JavaScript has a lot of data management tools built into the language itself, which can vastly reduce the potential complexity, improve performance, and result in better applications.

Don’t neglect platforms

A key challenge for designers and developers is creating a coherent experience across different platforms. An incredible data visualization feature built for a webpage might work well on a laptop screen, but how well does it translate when viewed on a smartphone? For B2B applications, use cases are generally geared toward users sitting at a PC in an office. But increasingly, compatibility with portable devices, such as smartphones, is a requirement.

For GIS developers, making compelling and usable mapping data visualizations that work on desktop and touchscreen devices of all sizes can be challenging. The trick is to design your essential interface interactions for touch first. This means initially excluding mouse-rollover or right-click interactions. You can add those interactions later, but only for non-essential actions like shortcuts to things that are otherwise reachable through other click or touch events. This is a tough ask for dense interfaces like GIS applications, which often rely on right-click menus and rollovers to expose contextual information about a geographical feature.

A reduced set of interaction events is an integral part of the “mobile-first” design philosophy. It goes hand in hand with small screen real estate and finger-sized hit areas.

Of course, it’s not possible to ignore mobile users, so design thinking must inform the early stages of any application that requires compelling data visualizations. Sometimes – especially with mapping visualizations meant to support routing or wayfinding – a mobile-first approach should be taken. Either way, think through the user needs early, and solicit feedback through regular user testing. 

So, there you have it. Just a few tips to consider before embarking on your journey to creating a location-based application.

Author: johnx25bd

Categories
ScienceDaily

Exoskeleton research marches forward with study on fit

A shoddily tailored suit or a shrunken T-shirt may not be the most stylish, but wearing them is unlikely to hurt more than your reputation. An ill-fitting robotic exoskeleton on the battlefield or factory floor, however, could be a much bigger problem than a fashion faux pas.

Exoskeletons, many of which are powered by springs or motors, can cause pain or injury if their joints are not aligned with the user’s. To help manufacturers and consumers mitigate these risks, researchers at the National Institute of Standards and Technology (NIST) developed a new measurement method to test whether an exoskeleton and the person wearing it are moving smoothly and in harmony.

In a new report, the researchers describe an optical tracking system (OTS) not unlike the motion capture techniques used by filmmakers to bring computer-generated characters to life.

The OTS uses special cameras that emit light and capture what is reflected back by spherical markers arranged on objects of interest. A computer calculates the position of the labeled objects in 3D space. Here, this approach was used to track the movement of an exoskeleton and test pieces, called “artifacts,” fastened to its user.

“The ultimate goal is to strap these artifacts onto the person, put on the exoskeleton, compare the difference between the person wearing these artifacts and the exoskeleton, and see if they move the same,” said Roger Bostelman, a robotics engineer at NIST and lead author of the study. “If they move in concert with one another, then it fits correctly. If they move differently, it’s not fitted correctly, and you could determine adjustments from there.”

In the new study, the NIST researchers aimed to capture the motion of the knee — one of the body’s relatively simple joints, Bostelman said. To assess the measurement uncertainty of their new approach, they constructed two artificial legs as test beds. One featured an off-the-shelf prosthetic knee, while the other incorporated a 3D-printed knee that more closely mimicked the real thing. Metal plates were also fastened to the legs with bungee cords to represent exoskeletal limbs or test artifacts attached to the body.

After fixing markers to the legs and plates, the team used the OTS and a digital protractor to measure knee angles throughout their full range of motion. By comparing the two sets of measurements, they were able to determine that their system was capable of accurately tracking leg position.

The tests also established that their system could calculate the separate motions of the legs and exoskeletal plates, allowing the researchers to show how closely aligned the two are while moving.

To adapt their method to be used on an actual person’s leg, the team designed and 3D-printed adjustable artifacts that — like a knee brace — fit to the user’s thigh and shin. Unlike the skin, which shifts due to its own elasticity and contracting muscles underneath, or skin-tight clothing that may be uncomfortable for some, these artifacts offer a rigid surface to stably and consistently place markers on different people, Bostelman said.

The team mounted the knee artifacts and a full-body exoskeleton garnished with reflective markers onto Bostelman. With the OTS keeping a close eye on his legs, he proceeded to perform several sets of squats.

The tests showed that most of the time, Bostelman’s leg and the exoskeleton moved in harmony. But for brief moments, his body moved while the exoskeleton didn’t. These pauses could be explained by the way in which this exoskeleton works.

To provide extra strength, it uses springs, which engage and disengage as the person moves. The exoskeleton pauses when the springs shift modes, temporarily resisting the user’s movement. By detecting these nuances of the exoskeleton’s function, the new measurement method demonstrated its attention to detail.

The raw data alone doesn’t always reveal whether a fit is adequate. To improve the accuracy of their method, Bostelman and his team will also use computational algorithms to analyze the positional data.

“The next steps are to develop artifacts for the arm, for the hip and basically all the joints this exoskeleton is supposed to be in line with and then perform similar tests,” Bostelman said.


Categories
ScienceDaily

Linking sight and movement

To get a better look at the world around them, animals are constantly in motion. Primates and people use complex eye movements to focus their vision (as humans do when reading, for instance); birds, insects, and rodents do the same by moving their heads, and can even estimate distances that way. Yet how these movements play out in the elaborate circuitry of neurons the brain uses to “see” is largely unknown. It could also become a problem area as scientists create artificial neural networks that mimic how vision works, such as those used in self-driving cars.

To better understand the relationship between movement and vision, a team of Harvard researchers looked at what happens in one of the brain’s primary regions for analyzing imagery when animals are free to roam naturally. The results of the study, published Tuesday in the journal Neuron, suggest that image-processing circuits in the primary visual cortex are not only more active when animals move, but also receive signals from a movement-controlling region of the brain that is independent of the region processing what the animal is looking at. In fact, the researchers describe two sets of movement-related patterns in the visual cortex, based on head motion and on whether the animal is in the light or the dark.

The movement-related findings were unexpected, since vision tends to be thought of as a feed-forward system in which visual information enters through the retina and travels along neural circuits on a one-way path, being processed piece by piece. What the researchers saw here is further evidence that the visual system has many more feedback components, in which information travels in the opposite direction, than had been thought.

These results offer a nuanced glimpse into how neural activity works in a sensory region of the brain, and add to a growing body of research that is rewriting the textbook model of vision in the brain.

“It was really surprising to see this type of [movement-related] information in the visual cortex because traditionally people have thought of the visual cortex as something that only processes images,” said Grigori Guitchounts, a postdoctoral researcher in the Neurobiology Department at Harvard Medical School and the study’s lead author. “It was mysterious, at first, why this sensory region would have this representation of the specific types of movements the animal was making.”

While the scientists weren’t able to definitively say why this happens, they believe it has to do with how the brain perceives what’s around it.

“The model explanation for this is that the brain somehow needs to coordinate perception and action,” Guitchounts said. “You need to know when a sensory input is caused by your own action as opposed to when it’s caused by something out there in the world.”

For the study, Guitchounts teamed up with former Department of Molecular and Cellular Biology Professor David Cox, alumnus Javier Masis, M.A. ’15, Ph.D. ’18, and postdoctoral researcher Steffen B.E. Wolff. The work started in 2017 and wrapped up in 2019, while Guitchounts was a graduate researcher in Cox’s lab. A preprint version of the paper was published in January.

The typical setup of past experiments on vision worked like this: animals, such as mice or monkeys, were sedated, restrained so their heads were in fixed positions, and then shown visual stimuli, like photographs, so researchers could see which neurons in the brain reacted. The approach was pioneered by Harvard scientists David H. Hubel and Torsten N. Wiesel in the 1960s, and in 1981 they won a Nobel Prize in Physiology or Medicine for their efforts. Many experiments since then have followed their model, but it did not illuminate how movement affects the neurons that analyze visual input.

Researchers in this latest experiment wanted to explore exactly that, so they watched 10 rats going about their days and nights. The scientists placed each rat in an enclosure, which doubled as its home, and continuously recorded its head movements. Using implanted electrodes, they measured activity in the primary visual cortex as the rats moved.

Half of the recordings were taken with the lights on. The other half were recorded in total darkness. The researchers wanted to compare what the visual cortex was doing when there was visual input versus when there wasn’t. To be sure the room was pitch black, they taped shut any crevice that could let in light, since rats have notoriously good vision at night.

The data showed that, on average, neurons in the rats’ visual cortices were more active when the animals moved than when they rested, even in the dark. That caught the researchers off guard: in a pitch-black room, there is no visual data to process. This meant the activity was coming from the motor cortex, not from an external image.

The team also noticed that the neural patterns in the visual cortex that were firing during movement differed in the dark and light, meaning they weren’t directly connected. Some neurons that were ready to activate in the dark were in a kind of sleep mode in the light.

Using a machine-learning algorithm, the researchers encoded both patterns. That let them not only tell which way a rat was moving its head just by looking at the neural activity in its visual cortex, but also predict the movement several hundred milliseconds before the rat made it.

The researchers confirmed that the movement signals came from the motor area of the brain by focusing on the secondary motor cortex. They surgically destroyed it in several rats, then ran the experiments again. Rats in which this area was lesioned no longer showed the movement-related signals in the visual cortex. However, the researchers were not able to determine whether the signal originates in the secondary motor cortex; it may only be a region the signal passes through, they said.

Furthermore, the scientists pointed out some limitations of their findings. For instance, they measured only the movement of the head, not eye movement. The study is also based on rodents, which are nocturnal; their visual systems share similarities with those of humans and primates but differ in complexity. Still, the paper adds to new lines of research, and the findings could potentially be applied to neural networks that control machine vision, like those in autonomous vehicles.

“It’s all to better understand how vision actually works,” Guitchounts said. “Neuroscience is entering into a new era where we understand that perception and action are intertwined loops. … There’s no action without perception and no perception without action. We have the technology now to measure this.”

This work was supported by the Harvard Center for Nanoscale Systems and the National Science Foundation Graduate Research Fellowship.


Categories
ScienceDaily

Layer of nanoparticles could improve LED performance and lifetime

Adding a layer of nanoparticles to LED designs could help them produce more light for the same energy, and also increase their lifetime.

This is according to a team from Imperial College London and the Indian Institute of Technology (IIT) Guwahati, who have found a new way to boost the amount of light LEDs produce. They report their innovation in the journal Light: Science & Applications.

Making light-emitting diode (LED) light sources more efficient and longer-lasting will mean they use less energy, reducing the environmental impact of their electricity use. LEDs are used in a wide range of applications, from traffic lights and backlighting for electronic displays, smartphones, large outdoor screens, and general decorative lighting, to sensing, water purification, and decontamination of infected surfaces.

The team modelled the impact of placing a two-dimensional (single) layer of nanoparticles between the LED chip, which produces the light, and the transparent casing that protects the chip. Although the casing is necessary, it can cause unwanted reflections of the light emitted from the LED chip, meaning not all the light escapes.

They found that adding a layer of finely tuned nanoparticles could reduce these reflections, allowing up to 20 percent more light to be emitted. The reflections also increase the heat inside the device, degrading the LED chip faster, so reducing the reflections could also reduce the heat and increase the lifetime of LED chips.

Co-author Dr Debabrata Sikdar from IIT Guwahati, formerly a European Commission Marie Skłodowska-Curie Fellow at Imperial, commented: “While improvements to the casing have been suggested previously, most make the LED bulkier or more difficult to manufacture, diminishing the economic effect of the improvement.

“We think that our innovation, based on fundamental theory and the detailed, balanced optimization analysis we performed, could be introduced into existing manufacturing processes with little disruption or added bulk.”

Co-author Professor Sir John Pendry, from the Department of Physics at Imperial, said: “The simplicity of the proposed scheme and the clear physics underpinning it should make it robust and, hopefully, easily adaptable to the existing LED manufacturing process.

“It is obvious that with larger light extraction efficiency, LEDs will provide greater energy savings as well as longer lifetime of the devices. This will definitely have a global impact on the versatile LED-based applications and their multi-billion-dollar market worldwide.”

Co-author Professor Alexei Kornyshev, from the Department of Chemistry at Imperial, commented: “The predicted effect is a result of development of a systematic theory of various photonic effects related to nanoparticle arrays at interfaces, applied and experimentally tested in the context of earlier reported switchable mirror-windows, tuneable-colour mirrors, and optical filters.”

The next stage for the research will be manufacturing a prototype LED device with a nanoparticle layer, testing the best configurations predicted by the theory — including the size, shape, material and spacing of the nanoparticles, and how far the layer should be from the LED chip.

The authors believe that the principles used can work along with other existing schemes implemented for enhancing light extraction efficiency of LEDs. The same scheme could also apply to other optical devices where the transmission of light across interfaces is crucial, such as in solar cells.

Story Source:

Materials provided by Imperial College London. Original written by Hayley Dunning. Note: Content may be edited for style and length.


Categories
ScienceDaily

Spintronics: Researchers show how to make non-magnetic materials magnetic

A complex process can modify non-magnetic oxide materials in such a way that they become magnetic. The basis for this new phenomenon is the controlled layer-by-layer growth of each material. An international research team including researchers from Martin Luther University Halle-Wittenberg (MLU) reported the unexpected findings in the journal Nature Communications.

In solid-state physics, oxide layers only a few nanometres thick are known to form a so-called two-dimensional electron gas. On their own, these thin layers are transparent and electrically insulating. However, when one thin layer is grown on top of another, a conductive area with a metallic shine forms at the interface under certain conditions. “Normally this system remains non-magnetic,” says Professor Ingrid Mertig from the Institute of Physics at MLU. The research team succeeded in controlling the conditions during layer growth so that vacancies are created in the atomic layers near the interface. These vacancies are later filled by atoms from adjoining atomic layers.

The theoretical calculations and explanations for this newly discovered phenomenon were made by Ingrid Mertig’s team of physicists. The method was then experimentally tested by several research groups throughout Europe — including a group led by Professor Kathrin Dörr from MLU. They were able to prove the magnetism in the materials. “This combination of computer simulations and experiments enabled us to decipher the complex mechanism responsible for the development of magnetism,” explains Mertig.

Story Source:

Materials provided by Martin-Luther-Universität Halle-Wittenberg. Note: Content may be edited for style and length.


Categories
3D Printing Industry

Student start-up Legendary Vish to commercialize vegan-friendly 3D printed salmon 

A group of international students has developed a 3D printing technique that enables them to print complex binders and proteins into plant-based fish alternatives. Having begun working together on an EU-backed AM research project in 2017, the Denmark-based band of students has recently developed an extrusion-based 3D printing process for fabricating salmon. Now trading under […]

Author: Paul Hanaphy

Categories
3D Printing Industry

University of Minnesota researchers use 3D bioprinting to create beating human heart 

Researchers from the University of Minnesota have developed a novel bio-ink that enabled them to create a functional 3D printed beating human heart. The cell-laden biomaterial, produced using pluripotent stem cells, allowed the research team to 3D print an aortic replica with more chambers, ventricles and a greater cell wall thickness than was previously possible. In […]

Author: Paul Hanaphy

Categories
ScienceDaily

Implants: Can special coatings reduce complications after implant surgery?

New coatings could help make implants more compatible with the body. Researchers at Martin Luther University Halle-Wittenberg (MLU) have developed a new method of applying anti-inflammatory substances to implants in order to inhibit undesirable inflammatory reactions in the body. Their study was recently published in the International Journal of Molecular Sciences.

Implants, such as pacemakers or insulin pumps, are a regular part of modern medicine. However, it is not uncommon for complications to arise after implantation. The immune system identifies the implant as a foreign body and attempts to remove it. “This is actually a completely natural and useful reaction by the immune system,” says Professor Thomas Groth, a biophysicist at MLU. It helps to heal wounds and kills harmful pathogens. If this reaction does not subside on its own after a few weeks, it can lead to chronic inflammation and more serious complications. “The immune system attracts various cells that try to isolate or remove the foreign entity. These include macrophages, a type of phagocyte, and other types of white blood cells and connective tissue cells,” explains Groth. Implants can become encapsulated by connective tissue, which can be very painful for those affected. In addition, the implant is no longer able to function properly. Drugs that suppress the immune response in a systemic manner are often used to treat chronic inflammation, but may have undesired side effects.

Thomas Groth’s team was looking for a simple way to modify the immune system’s response to an implant in advance. “This is kind of tricky, because we obviously do not want to completely turn off the immune system as its processes are vital for healing wounds and killing pathogens. So, in fact we only wanted to modulate it,” says the researcher. To do this, his team developed a new coating for implants that contains anti-inflammatory substances. For their new study, the team used two substances that are already known to have an anti-inflammatory effect: heparin and hyaluronic acid.

In the laboratory, the scientists treated a surface with the two substances, applying a layer only a few nanometres thick. “The layer is so thin that it does not affect how the implant functions. However, it must contain enough active substance to control the reaction of the immune system until the inflammatory reaction has subsided,” adds Groth. In cell experiments, the researchers observed how the two substances were absorbed by macrophages and thereby reduced inflammation in the cell cultures; untreated cells, by contrast, showed clear signs of a pronounced inflammatory reaction. The effect arises because the active substances interfere with a specific signalling pathway inside the macrophages that is crucial for the immune response and cell death. “Both heparin and hyaluronic acid prevent the release of certain pro-inflammatory messenger substances. Heparin is even more effective because it can be absorbed by macrophage cells,” Groth concludes.

So far, the researchers have only tested the method on model surfaces and in cell cultures. Further studies on real implants and in model organisms are to follow.

Story Source:

Materials provided by Martin-Luther-Universität Halle-Wittenberg. Note: Content may be edited for style and length.


Categories
ProgrammableWeb

Google Simplifies ML Kit SDK, Adds APIs

Google has made a standalone version of its Machine Learning Kit SDK (ML Kit) available to developers, allowing them to build apps that run machine learning directly on devices. The big change decouples ML Kit from Firebase and introduces two new APIs.

Google first debuted ML Kit at I/O two years ago. Since then, some 25,000 applications on both Android and iOS have come to depend on ML Kit’s features. Google believes the changes revealed this week will simplify the process of coding ML Kit apps.

The first release of ML Kit depended heavily on Firebase. Google says many developers asked for more flexibility. This is the primary reason Google is decoupling ML Kit from Firebase. On-device APIs in the new ML Kit SDK no longer necessitate a Firebase project, though both can still be used together should you wish. 

ML Kit’s APIs are meant to assist developers in the Vision and Natural Language domains. This means ML Kit helps scan barcodes, recognize text, track and classify objects in real time, translate text, and more. The kit is now fully focused on on-device machine learning. Google says it’s fast, since there is no network latency, and it can perform inference on a stream of images or video multiple times per second. It also works offline: all the APIs maintain functionality regardless of the network connection. Privacy remains top of mind, too; thanks to local processing, no user data needs to be sent to a remote server.

First step? Google suggests developers migrate from the Firebase on-device APIs to the standalone ML Kit SDK. Instructions are available here. Once migrated, developers will find several new functionalities. 

For example, devs can shrink their app footprint via Google Play Services: the face detection/contour models can be downloaded through Play Services at runtime rather than being compiled into the APK.

Google added Android Jetpack Lifecycle support to all the APIs. This means developers can put addObserver to use for automatically managing ML Kit API teardowns as apps go through actions such as screen rotations. This simplifies CameraX integration, which Google says developers should also consider adopting throughout their ML apps. 

Last, two new APIs are part of an early access program. The first is Entity Extraction, which detects entities in text and makes them actionable. Think addresses, phone numbers, and the like. The second is Pose Detection, which offers low-latency detection of 33 skeletal points, including hands and feet. Details are available here.

Google says all the ML Kit resources are available on a refreshed website where samples, support documentation, and community channels are easily accessed.

Author: EricZeman