Categories
ProgrammableWeb

Google Announces Business Application Platform for No-Code Application Development

This week at its Cloud Next virtual conference, Google Cloud is outlining a new solution aimed at enabling citizen developers to automate processes and create applications in a no-code environment. The new Business Application Platform intends to leverage APIs to help enterprise customers modernize legacy apps and create new business channels.

Google Business Application Platform will lean on existing Google products in order to support hybrid and multi-cloud implementations, AI/ML lifecycle management, and collaboration functionality. 

Amit Zavery, VP/GM and Head of Platform for Google Cloud, summarized the goal of the new product:

“Our mission is to develop a unified solution that empowers both technical developers as well as business developers with the ability to create and extend applications, build and automate business workflows, and connect and modernize legacy applications.”

This new platform will have security features that include customer-managed encryption keys and support for VPC Service Controls (VPC-SC). The Business Application Platform is an ongoing project with a roadmap that hints at future functionality, including the newly announced API Gateway.

Go to Source
Author: KevinSundstrom

Categories
ScienceDaily

Zooming in on dark matter

Cosmologists have zoomed in on the smallest clumps of dark matter in a virtual universe — which could help us to find the real thing in space.

An international team of researchers, including Durham University, UK, used supercomputers in Europe and China to focus on a typical region of a computer-generated universe.

The zoom they were able to achieve is the equivalent of being able to see a flea on the surface of the Moon.

This allowed them to make detailed pictures and analyses of hundreds of virtual dark matter clumps (or haloes) from the very largest to the tiniest.

Dark matter particles can collide with dark matter anti-particles near the centre of haloes where, according to some theories, they are converted into a burst of energetic gamma-ray radiation.

Their findings, published in the journal Nature, could mean that these very small haloes could be identified in future observations by the radiation they are thought to give out.

Co-author Professor Carlos Frenk, Ogden Professor of Fundamental Physics at the Institute for Computational Cosmology, at Durham University, UK, said: “By zooming in on these relatively tiny dark matter haloes we can calculate the amount of radiation expected to come from different sized haloes.

“Most of this radiation would be emitted by dark matter haloes too small to contain stars and future gamma-ray observatories might be able to detect these emissions, making these small objects individually or collectively ‘visible’.

“This would confirm the hypothesised nature of the dark matter, which may not be entirely dark after all.”

Most of the matter in the universe is dark (apart from the gamma radiation dark matter particles may emit in exceptional circumstances) and completely different in nature from the matter that makes up stars, planets and people.

The universe is made of approximately 27 per cent dark matter with the rest largely consisting of the equally mysterious dark energy. Normal matter, such as planets and stars, makes up a relatively small five per cent of the universe.

Galaxies formed and grew when gas cooled and condensed at the centre of enormous clumps of this dark matter — so-called dark matter haloes.

Astronomers can infer the structure of large dark matter haloes from the properties of the galaxies and gas within them.

The biggest haloes contain huge collections of hundreds of bright galaxies, called galaxy clusters, weighing a thousand trillion times more than our Sun.

However, scientists have no direct information about smaller dark matter haloes that are too tiny to contain a galaxy. These can only be studied by simulating the evolution of the Universe in a large supercomputer.

The smallest are thought to have the same mass as the Earth according to current popular scientific theories about dark matter that underlie the new research.

The simulations were carried out using the Cosmology Machine supercomputer, part of the DiRAC High-Performance Computing facility in Durham, funded by the Science and Technology Facilities Council (STFC), and computers at the Chinese Academy of Sciences.

By zooming in on the virtual universe in such microscopic detail, the researchers were able to study the structure of dark matter haloes ranging in mass from that of the Earth to a big galaxy cluster.

Surprisingly, they found that haloes of all sizes have a very similar internal structure and are extremely dense at the centre, becoming increasingly spread out, with smaller clumps orbiting in their outer regions.

The researchers said that without a scale bar it was almost impossible to tell an image of the dark matter halo of a massive galaxy from that of a halo with a mass a fraction of the Sun’s.

Co-author Professor Simon White, of the Max Planck Institute of Astrophysics, Germany, said: “We expect that small dark matter haloes would be extremely numerous, containing a substantial fraction of all the dark matter in the universe, but they would remain mostly dark throughout cosmic history because stars and galaxies grow only in haloes more than a million times as massive as the Sun.

“Our research sheds light on these small haloes as we seek to learn more about what dark matter is and the role it plays in the evolution of the universe.”

The research team, led by the National Astronomical Observatories of the Chinese Academy of Sciences, and including Durham University, UK, the Max Planck Institute for Astrophysics, Germany, and the Center for Astrophysics in Harvard, USA, took five years to develop, test and carry out their cosmic zoom.

The research was funded by the STFC, the European Research Council, the Chinese Academy of Sciences, the Max Planck Society and Harvard University.

Go to Source
Author:

Categories
ScienceDaily

Virtual imaging trials optimize CT, radiography for COVID-19

An open-access article in ARRS’ American Journal of Roentgenology (AJR) established a foundation for the use of virtual imaging trials in effective assessment and optimization of CT and radiography acquisitions and analysis tools to help manage the coronavirus disease (COVID-19) pandemic.

Virtual imaging trials have two main components: representative models of targeted subjects and realistic models of imaging scanners. The authors of this AJR article developed the first computational models of patients with COVID-19 and showed, as proof of principle, how those models can be combined with imaging simulators for COVID-19 imaging studies.

“For the body habitus of the models,” lead author Ehsan Abadi explained, “we used the 4D extended cardiac-torso (XCAT) model that was developed at Duke University.”

Abadi and his Duke colleagues then segmented the morphologic features of COVID-19 abnormalities from 20 CT images of patients with multidiagnostic confirmation of SARS-CoV-2 infection and incorporated them into XCAT models.

“Within a given disease area, the texture and material of the lung parenchyma in the XCAT were modified to match the properties observed in the clinical images,” Abadi et al. continued.

Using a specific CT scanner (Definition Flash, Siemens Healthineers) and a validated radiography simulator (DukeSim) to help illustrate utility, the team virtually imaged the three COVID-19 computational phantoms they had developed.

“Subjectively,” the authors concluded, “the simulated abnormalities were realistic in terms of shape and texture,” adding their preliminary results showed that the contrast-to-noise ratios in the abnormal regions were 1.6, 3.0, and 3.6 for 5-, 25-, and 50-mAs images, respectively.
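
The contrast-to-noise ratio (CNR) the authors report is, in one common formulation, the difference between the mean intensities of the abnormal region and a background region divided by the background noise. A minimal sketch of that calculation; the formulation and the simulated intensity values are illustrative assumptions, not data from the study:

```python
import numpy as np

def contrast_to_noise_ratio(roi_abnormal, roi_background):
    """CNR = |mean(abnormal) - mean(background)| / std(background)."""
    signal = abs(roi_abnormal.mean() - roi_background.mean())
    return signal / roi_background.std()

# Illustrative pixel intensities (in Hounsfield units) for a lesion ROI
# and a nearby aerated-lung ROI; values are synthetic, not study data.
rng = np.random.default_rng(0)
lesion = rng.normal(-300, 40, size=1000)  # denser, ground-glass-like region
lung = rng.normal(-800, 40, size=1000)    # aerated lung background
print(round(contrast_to_noise_ratio(lesion, lung), 1))
```

Higher tube current (mAs) lowers image noise, which is why the reported CNR rises from 1.6 at 5 mAs to 3.6 at 50 mAs.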

Story Source:

Materials provided by American Roentgen Ray Society. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

A Raspberry Pi-based virtual reality system for small animals

The Raspberry Pi Virtual Reality system (PiVR) is a versatile tool for presenting virtual reality environments to small, freely moving animals (such as flies and fish larvae), according to a study published July 14, 2020 in the open-access journal PLOS Biology by David Tadres and Matthieu Louis of the University of California, Santa Barbara. The use of PiVR, together with techniques like optogenetics, will facilitate the mapping and characterization of neural circuits involved in behavior.

PiVR consists of a behavioral arena, a camera, a Raspberry Pi microcomputer, an LED controller, and a touchscreen. This system can implement a feedback loop between real-time behavioral tracking and delivery of a stimulus. PiVR is a versatile, customizable system that costs less than $500, takes less than six hours to build (using a 3D printer), and was designed to be accessible to a wide range of neuroscience researchers.
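
The feedback loop this describes, track the animal, map its position to a stimulus, and drive the light source, can be sketched in a few lines. This is not PiVR's actual implementation; the tracker is a stub, `set_led` is a hypothetical stand-in for the LED-controller call, and the virtual gradient shape is an illustrative assumption:

```python
FRAME_RATE_HZ = 30          # closed-loop update rate, illustrative
ARENA_CENTER = (320, 240)   # pixel coordinates of the virtual source

def detect_animal(frame):
    """Stub tracker: return the animal's (x, y) centroid.
    A real system would segment the camera frame here."""
    return frame["pos"]

def stimulus_intensity(pos):
    """Map position to light intensity: brighter farther from the
    virtual source at the arena center (a virtual gradient)."""
    dx = pos[0] - ARENA_CENTER[0]
    dy = pos[1] - ARENA_CENTER[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return min(1.0, distance / 400.0)  # normalized 0..1

def set_led(intensity):
    """Stand-in for the PWM call that would drive the LED controller."""
    return intensity

def closed_loop(frames):
    """Core feedback loop: track, map position to stimulus, deliver."""
    log = []
    for frame in frames:
        pos = detect_animal(frame)
        led = set_led(stimulus_intensity(pos))
        log.append((pos, led))
    return log

# Simulated frames: the animal walks away from the virtual source,
# so the delivered intensity ramps up frame by frame.
frames = [{"pos": (320 + 40 * i, 240)} for i in range(5)]
for pos, led in closed_loop(frames):
    print(pos, round(led, 2))
```

Coupling the stimulus to the animal's own movement in this way is what makes the environment a virtual reality rather than a fixed playback.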

In the new study, Tadres and Louis used their PiVR system to present virtual realities to small, freely moving animals during optogenetic experiments. Optogenetics is a technique that enables researchers to use light to control the activity of neurons in living animals, allowing them to examine causal relationships between the activity of genetically-labeled neurons and specific behaviors.

As a proof-of-concept, Tadres and Louis used PiVR to study sensory navigation in response to gradients of chemicals and light in a range of animals. They showed how fruit fly larvae change their movements in response to real and virtual odor gradients. They then demonstrated how adult flies adapt their speed of movement to avoid locations associated with bitter tastes evoked by optogenetic activation of their bitter-sensing neurons. In addition, they showed that zebrafish larvae modify their turning maneuvers in response to changes in the intensity of light mimicking spatial gradients. According to the authors, PiVR represents a low-barrier technology that should empower many labs to characterize animal behavior and study the functions of neural circuits.

“More than ever,” the authors note, “neuroscience is technology-driven. In recent years, we have witnessed a boom in the use of closed-loop tracking and optogenetics to create virtual sensory realities. Integrating new interdisciplinary methodology in the lab can be daunting. With PiVR, our goal has been to make virtual reality paradigms accessible to everyone, from professional scientists to high-school students. PiVR should help democratize cutting-edge technology to study behavior and brain functions.”

Story Source:

Materials provided by PLOS. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ScienceDaily

Getting real with immersive sword fights

Sword fights are often the weak link in virtual reality (VR) fighting games, with digital avatars engaging in battle using imprecise, pre-recorded movements that barely reflect the player’s actions or intentions. Now a team at the University of Bath, in collaboration with the game development studio Ninja Theory, has found a solution to the challenges of creating realistic VR sword fights: Touche — a data-driven computer model based on machine learning.

Dr Christof Lutteroth, who created Touche with colleague Dr Julian Padget and EngD student Javier Dehesa, said: “Touche increases the realism of a sword fight by generating responsive animations against attacks and eliminating non-reactive behaviour from characters.

“Using our model, a game character can anticipate all possible fight situations and react to them, resulting in a more enjoyable and immersive game experience.”

The unpredictability of user actions presents a major conundrum for designers of VR games, explained Dr Lutteroth, who is a senior lecturer in Computer Science, director of Real and Virtual Environments Augmentation Labs (REVEAL) and co-investigator at the Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA). “VR games offer new freedom for players to interact naturally using motion, but this makes it harder to design games that react to player motions convincingly,” he said.

He added: “There are different expectations for screen-based video games. With these, a player presses ‘attack’ and their character displays a sequence of animations. But in a VR game, the player input is much harder to process.”

The Touche framework for VR sword fighting simplifies the necessary technical work to achieve a convincing simulation. It eliminates the need for game designers to add layer upon layer of detail when programming how a character should move in a particular situation (for instance, to block a particular sword attack). Instead, actors wearing motion capture equipment are asked to perform a range of sword fighting movements, and Touche builds a model from these movements. The virtual version of the actor is able to react to different situations in a similar fashion to a flesh-and-blood fighter. Game designers can then fine-tune this model to meet their needs by adjusting high-level parameters, such as how skilled and aggressive the game character should be. All this saves game developers a lot of time and leads to more realistic results.
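
The data-driven idea can be illustrated with a deliberately tiny toy: pick the captured reaction whose recorded attack best matches the incoming attack's features. This is not Touche's machine-learning model, which generates responsive animations rather than selecting whole clips; every name and number below is an illustrative assumption:

```python
# Toy data-driven reaction selection: nearest neighbor over captured
# examples, standing in for a learned model of fight situations.
CAPTURED = [
    # (attack feature vector [height, angle], reaction clip name)
    ([0.9, 0.0], "high_block"),
    ([0.5, 1.2], "side_parry"),
    ([0.2, 0.1], "low_sweep_dodge"),
]

def pick_reaction(attack_features):
    """Return the reaction recorded against the most similar attack."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CAPTURED, key=lambda entry: dist2(entry[0], attack_features))[1]

print(pick_reaction([0.85, 0.1]))  # a high attack
```

Replacing hand-authored rules with behavior learned from motion capture is what removes the layer-upon-layer scripting the paragraph above describes.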

For the Bath study, 12 volunteers were asked to take part in two three-minute sword fights: for the first fight, they used technology that is currently available and for the second, they used Touche. Touche had a strong positive effect on realism and the perceived sword fighting skills of game characters. Feedback from participants pointed to a convincing preference for Touche, with current sword fights being described as ‘unresponsive’ and ‘clumsy’ by comparison.

“Based on this, we are convinced that Touche can deliver more enjoyable, realistic and immersive sword fighting experiences, presenting a more skilled and less repetitive opponent behaviour,” said Dr Lutteroth. “I’m convinced this framework is the future for games — not only for sword fighting but also for other types of interaction between game characters. It will save developers a lot of time.”

Javier Dehesa, who is based at the Centre for Digital Entertainment, interviewed game developers who had tested this new technology. He said: “Developers see the Touche framework as an important practical step in the industry towards data-driven interaction techniques. We could see this technology appear in commercial games very soon.”

Video to accompany press release: https://vimeo.com/430682565

Story Source:

Materials provided by University of Bath. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ProgrammableWeb

Bluestream Health Launches Bluestream+ for Virtual Healthcare Visits

Bluestream Health is launching Bluestream+, a “virtual visit” API platform for all healthcare organizations and technology companies. With zero upfront or implementation cost and an average deployment time of around two weeks, Bluestream+ API subscribers will be able to rapidly transform their existing applications and websites into world-class telehealth platforms without significant retooling of their existing solutions.

The Bluestream Health RESTful API is secure, comprehensive and well-documented.

Previous telehealth API solutions have suffered from security issues, high deployment costs, and limited customization for workflows, leaving technology adopters behind their competition and at risk of exposing sensitive patient information. There is much more to implementing healthcare-specific delivery of secure video than establishing a video connection.

“Bluestream has best-in-class workflows, security, transactional reporting, and other key elements built-in, so you can focus on your core competencies instead of reinventing the wheel,” said Brian Yarnell, President and co-founder of Bluestream Health. “We also have pre-built user interfaces you can white label and directly integrate into your technology stack.”

API subscribers can also schedule a consultation with Bluestream Health’s deployment and technology experts to whiteboard a fully integrated solution, with the goal of delivering virtual care inside their own platforms in weeks instead of months.

Go to Source
Author: ProgrammableWeb PR

Categories
3D Printing Industry

Lung operation on young girl in Israel aided by 3D printing

An anaesthesia team in Israel recently used 3D printing and virtual reality to produce an exact model of the airway of a 7-year-old girl, as part of an operation to remove a section of her lung. Suffering from a bone and soft tissue cancer that had spread to her lung, the young girl required part […]

Go to Source
Author: Anas Essop

Categories
Hackster.io

Virtual Events: Attend & Present!

With many events canceled for this year, and some moving to virtual spaces, what’s on? Here are a few options to stay connected with the hardware community:

// https://forwardjs.com/sanfrancisco/
// https://toorcamp.toorcon.net/
// https://virtualtoor.toorcon.net/Static:Self-organized_Sessions
// https://hope.net/
// https://wiki.hope.net/index.php?title=Helping#Hackers_Helping_to_Fight_COVID-19
// https://www.youtube.com/watch?v=NwAiYooL7H0
// https://www.hackster.io/workshops
// https://www.particle.io/asset-tracking-webinar/

Categories
ProgrammableWeb

GitHub Announces Browser-Based IDE, Discussion Forum, and New Security Features

GitHub is hosting the company’s first-ever major virtual event for developers across the globe. The event is highlighted by the launch of four new products aimed at broadly improving the overall developer experience on the platform. The new products include GitHub Codespaces, GitHub Discussions, Code scanning and secret scanning, and GitHub Private Instances.

GitHub Codespaces is designed to reduce the time-to-code for developers working with repositories that operate in disparate environments and have varying requirements. Codespaces aims to streamline community contribution by creating a bridge between various repositories. GitHub noted in the announcement that:

“Codespaces can be configured to load your code and dependencies, developer tools, extensions, and dotfiles. Switching between environments is simple—you can navigate away at any time, and when you switch back, your codespace is automatically reopened.”

Codespaces includes a browser-based version of the full VS Code editor in addition to a desktop IDE. The company has not settled on pricing just yet and the product is currently available in a limited public beta only. However, GitHub did note that “code-editing functionality in the codespaces IDE will always be free.”
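
The configuration capability quoted above uses the dev container format; a minimal sketch of a `.devcontainer/devcontainer.json` might look like the following, where the image name, extension ID, and install command are illustrative assumptions, not taken from the announcement:

```json
{
  "name": "my-project",
  "image": "mcr.microsoft.com/vscode/devcontainers/python:3.8",
  "extensions": ["ms-python.python"],
  "postCreateCommand": "pip install -r requirements.txt"
}
```

Checking a file like this into a repository is what lets a codespace come up with the project's dependencies and tools already in place.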

As for GitHub Discussions, the company is hoping to encourage conversation on the platform in a way that has not been possible before today. Rather than relying on issues and pull requests as a venue for collaboration, developers will now have access to an all-new section of the platform dedicated to collaborative conversation. GitHub Discussions is also in beta and is being tested with a small group of open-source communities.

The addition of new code scanning and secret scanning features is a direct result of GitHub’s recent acquisition of Semmle. Semmle is a code analysis company that provides continuous security analysis services. GitHub today announced that these features would be brought more broadly to the platform as a native experience, although these services are still in beta. Additionally, the announcement mentioned that all private repositories will also now be able to take advantage of secret scanning, a feature previously limited to public repositories. 

Lastly, GitHub announced new options for teams that require strict security and compliance:

“Today we introduced our plans for GitHub Private Instances, a new, fully-managed option for our enterprise customers. Private Instances provides enhanced security, compliance, and policy features including bring-your-own-key encryption, backup archiving, and compliance with regional data sovereignty requirements.”

Go to Source
Author: KevinSundstrom

Categories
Hackster.io

🕹 Arduboy Lives!

Alex is working on a Tamagotchi-style virtual pet for self-care, using the Arduboy. This Arduino Leonardo analogue can be programmed with a ton of cool games and interfaces – let’s check out a few of them!

// https://arduboy.com/
// https://www.youtube.com/watch?v=wtkSycuTgvI
// https://community.arduboy.com/t/sirene-tenth-team-a-r-g-game/2206
// https://github.com/Arduboy/Arduboy
// https://www.hackster.io/news/an-arduboy-built-into-a-playable-game-boy-advance-fpga-cartridge-5fa63e90617f
// https://www.youtube.com/watch?v=V9X1m-nHDwY
// http://kevinneubauer.com/circuitpython-virtual-pet/
// https://www.hackster.io/news/this-circuitpython-badge-brings-tamagotchi-back-67921a93361e
// https://www.iamhoneydill.com/new-stuff/2018/3/2/the-mental-health-all-stars
// https://3dverkstan.se/protective-visor/