Categories
ScienceDaily

Can life survive a star’s death? Webb telescope can reveal the answer

When stars like our sun die, all that remains is an exposed core — a white dwarf. A planet orbiting a white dwarf presents a promising opportunity to determine if life can survive the death of its star, according to Cornell University researchers.

In a study published in the Astrophysical Journal Letters, they show how NASA’s upcoming James Webb Space Telescope could find signatures of life on Earth-like planets orbiting white dwarfs.

A planet orbiting a small star produces strong atmospheric signals when it passes in front of, or “transits,” its host star. White dwarfs push this to the extreme: They are 100 times smaller than our sun, almost as small as Earth, affording astronomers a rare opportunity to characterize rocky planets.
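
To see why small stars help, consider the transit depth: the fraction of starlight blocked scales as the square of the planet-to-star radius ratio. The sketch below runs that arithmetic with rough textbook radii (illustrative values, not figures from the study):

```python
# Transit depth scales as (R_planet / R_star)^2, which is why small stars
# give large signals. Radii below are approximate textbook values.
R_EARTH = 6.371e6   # meters
R_SUN = 6.957e8     # meters

def transit_depth(r_planet_m, r_star_m):
    """Fraction of starlight blocked when the planet crosses the star."""
    return (r_planet_m / r_star_m) ** 2

# Earth transiting a Sun-like star: a ~0.008% dip
print(f"{transit_depth(R_EARTH, R_SUN):.2e}")           # ~8.4e-05

# Earth transiting an Earth-sized white dwarf: the dip approaches 100%
print(f"{transit_depth(R_EARTH, 1.0 * R_EARTH):.2f}")   # ~1.00
```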

“If rocky planets exist around white dwarfs, we could spot signs of life on them in the next few years,” said corresponding author Lisa Kaltenegger, associate professor of astronomy in the College of Arts and Sciences and director of the Carl Sagan Institute.

Co-lead author Ryan MacDonald, a research associate at the institute, said the James Webb Space Telescope, scheduled to launch in October 2021, is uniquely placed to find signatures of life on rocky exoplanets.

“When observing Earth-like planets orbiting white dwarfs, the James Webb Space Telescope can detect water and carbon dioxide within a matter of hours,” MacDonald said. “Two days of observing time with this powerful telescope would allow the discovery of biosignature gases, such as ozone and methane.”

The discovery of the first transiting giant planet orbiting a white dwarf (WD 1856+534b), announced in a separate paper — led by co-author Andrew Vanderburg, assistant professor at the University of Wisconsin-Madison — proves the existence of planets around white dwarfs. Kaltenegger is a co-author on this paper, as well.

This planet is a gas giant and therefore not able to sustain life. But its existence suggests that smaller rocky planets, which could sustain life, could also exist in the habitable zones of white dwarfs.

“We know now that giant planets can exist around white dwarfs, and evidence stretches back over 100 years showing rocky material polluting light from white dwarfs. There are certainly small rocks in white dwarf systems,” MacDonald said. “It’s a logical leap to imagine a rocky planet like the Earth orbiting a white dwarf.”

The researchers combined state-of-the-art analysis techniques, routinely used with the Hubble Space Telescope to detect gases in giant exoplanet atmospheres, with model atmospheres of white dwarf planets from previous Cornell research.

NASA’s Transiting Exoplanet Survey Satellite is now looking for such rocky planets around white dwarfs. If and when one of these worlds is found, Kaltenegger and her team have developed the models and tools to identify signs of life in the planet’s atmosphere. The Webb telescope could soon begin this search.

The implications of finding signatures of life on a planet orbiting a white dwarf are profound, Kaltenegger said. Most stars, including our sun, will one day end up as white dwarfs.

“What if the death of the star is not the end for life?” she said. “Could life go on, even once our sun has died? Signs of life on planets orbiting white dwarfs would not only show the incredible tenacity of life, but perhaps also a glimpse into our future.”

Story Source:

Materials provided by Cornell University. Original written by Kate Blackwood. Note: Content may be edited for style and length.


Categories
ScienceDaily

First exposed planetary core discovered allows glimpse inside other worlds

The surviving core of a gas giant has been discovered orbiting a distant star by University of Warwick astronomers, offering an unprecedented glimpse into the interior of a planet.

The core, which is the same size as Neptune in our own solar system, is believed to be a gas giant that was either stripped of its gaseous atmosphere or that failed to form one in its early life.

The team from the University of Warwick’s Department of Physics reports the discovery today (1 July) in the journal Nature; it is thought to be the first time the exposed core of a planet has been observed.

It offers the unique opportunity to peer inside the interior of a planet and learn about its composition.

Located around a star much like our own approximately 730 light years away, the core, named TOI 849 b, orbits so close to its host star that a year lasts a mere 18 hours and its surface temperature is around 1,800 K.

TOI 849 b was found in a survey of stars by NASA’s Transiting Exoplanet Survey Satellite (TESS), using the transit method: observing stars for the tell-tale dip in brightness that indicates that a planet has passed in front of them. It was located in the ‘Neptunian desert’ — a term used by astronomers for a region close to stars where we rarely see planets of Neptune’s mass or larger.

The object was then analysed using the HARPS instrument, on a program led by the University of Warwick, at the European Southern Observatory’s La Silla Observatory in Chile. This utilises the Doppler effect to determine the mass of exoplanets by measuring their ‘wobble’ — small movements towards and away from us that register as tiny shifts in the star’s spectrum of light.
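
For a sense of the scale of that ‘wobble’, the sketch below evaluates the standard radial-velocity semi-amplitude formula for a circular orbit with planet mass much smaller than stellar mass, using rough TOI 849 b-like inputs (an 18-hour period, about 40 Earth masses, a Sun-like star) rather than the paper’s fitted values:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg

def rv_semi_amplitude(period_s, m_planet_kg, m_star_kg, sin_i=1.0):
    """Stellar 'wobble' speed K for a circular orbit, assuming M_planet << M_star."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet_kg * sin_i / m_star_kg ** (2 / 3))

# Rough TOI 849 b-like inputs: ~18-hour period, ~40 Earth masses, Sun-like star
K = rv_semi_amplitude(18 * 3600, 40 * M_EARTH, M_SUN)
print(f"K ~ {K:.0f} m/s")   # a few tens of m/s -- well within HARPS's precision
```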

The team determined that the object’s mass is two to three times that of Neptune, but it is also incredibly dense, with all the material that makes up that mass squashed into an object of the same size.
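
A back-of-the-envelope check of that claim, using rough catalog values for Neptune (illustrative only, not the paper’s measurements):

```python
import math

M_NEPTUNE = 1.024e26   # kg
R_NEPTUNE = 2.462e7    # m, mean radius

def density(mass_kg, radius_m):
    """Mean density of a uniform sphere, kg/m^3."""
    return mass_kg / ((4 / 3) * math.pi * radius_m ** 3)

# ~2.5x Neptune's mass packed into a Neptune-sized body (illustrative midpoint)
rho = density(2.5 * M_NEPTUNE, R_NEPTUNE)
print(f"~{rho / 1000:.1f} g/cm^3")   # ~4 g/cm^3, versus ~1.6 for Neptune itself
```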

Lead author Dr David Armstrong from the University of Warwick Department of Physics said: “While this is an unusually massive planet, it’s a long way from the most massive we know. But it is the most massive we know for its size, and extremely dense for something the size of Neptune, which tells us this planet has a very unusual history. The fact that it’s in a strange location for its mass also helps — we don’t see planets with this mass at these short orbital periods.

“TOI 849 b is the most massive terrestrial planet — one with an Earth-like density — yet discovered. We would expect a planet this massive to have accreted large quantities of hydrogen and helium when it formed, growing into something similar to Jupiter. The fact that we don’t see those gases lets us know this is an exposed planetary core.

“This is the first time that we’ve discovered an intact exposed core of a gas giant around a star.”

There are two theories as to why we are seeing the planet’s core, rather than a typical gas giant. The first is that it was once similar to Jupiter but lost nearly all of its outer gas through a variety of methods. These could include tidal disruption, where the planet is ripped apart from orbiting too close to its star, or even a collision with another planet. Large-scale photoevaporation of the atmosphere could also play a role, but can’t account for all the gas that has been lost.

Alternatively, it could be a ‘failed’ gas giant. The scientists believe that once the core of the gas giant formed, something could have gone wrong and it never formed an atmosphere. This could have occurred if there was a gap in the disc of dust that the planet formed from, or if it formed late and the disc ran out of material.

Dr Armstrong adds: “One way or another, TOI 849 b either used to be a gas giant or is a ‘failed’ gas giant.

“It’s a first, telling us that planets like this exist and can be found. We have the opportunity to look at the core of a planet in a way that we can’t do in our own solar system. There are still big open questions about the nature of Jupiter’s core, for example, so strange and unusual exoplanets like this give us a window into planet formation that we have no other way to explore.

“Although we don’t have any information on its chemical composition yet, we can follow it up with other telescopes. Because TOI 849 b is so close to the star, any remaining atmosphere around the planet has to be constantly replenished from the core. So if we can measure that atmosphere then we can get an insight into the composition of the core itself.”


Categories
ProgrammableWeb

Motion Layout API Leads Android Studio 4 Features

Google this week released Android Studio 4, the newest version of its core Android developer environment. Android Studio 4 is meant to give developers more tools so they can design and build apps faster and smarter. The platform arrives mere days before the Android 11 beta launch, which is expected on June 11. Google pushed back the launch of Android 11 in response to COVID-19. Android Studio 3.6 arrived in February.

The company added several core functions to Android Studio 4, including a new Motion Editor, Build Analyzer, and Java 8 APIs. Google also says that it overhauled the CPU Profiler user interface to make it more intuitive.

“Whether you’re working from your kitchen table on a laptop or from a home office, you need tools that keep up with you,” said Adarsh Fernando, Product Manager, in a blog post. “Android Studio 4.0 is the result of our drive to bring you new and improved tools for coding smarter, building faster, and designing the apps your users depend on.”

Google explained that the MotionLayout API should help developers manage motion and widget animations in their apps. The new Motion Editor delivers an interface meant specifically for creating, editing, and previewing MotionLayout animations, and it can generate the underlying code automatically, with support for constraints, transitions, and other attributes; the resulting code is merely a click away. The new Layout Inspector allows developers to debug their UIs by providing access to updated data and insights on how resources are being used, including a dynamic layout hierarchy, detailed view attributes, and a live 3D model of the UI. Lastly, the Layout Validation window lets developers preview their layouts on different screen sizes and resolutions simultaneously, to ensure they work on a range of hardware.

Java 8 language APIs are now available regardless of the associated app’s minimum API level. The Android Gradle plugin also now supports Android Studio’s Build Analyzer, and developers can create feature-on-feature dependencies between dynamic feature modules. A list is available here. Other adjustments include the ability to enable or disable discrete build features, as well as support for Kotlin DSL script files.

On the developing and profiling front, developers will find improvements including a smarter editor for R8 code-shrinker rules, an update to the IntelliJ IDEA platform, Android-specific live-template performance improvements for Kotlin, and Clangd support.

The quickest and easiest way to get Android Studio 4 is via the update tools within the app itself. Alternatively, you can snag the download here.

Author: EricZeman

Categories
ScienceDaily

What’s Mars made of?

Earth-based experiments on iron-sulfur alloys thought to comprise the core of Mars reveal details about the planet’s seismic properties for the first time. This information will be compared to observations made by Martian space probes in the near future. Whether the results between experiment and observation coincide or not will either confirm existing theories about Mars’ composition or call into question the story of its origin.

Mars is one of our closest terrestrial neighbors, yet it’s still very far away — between about 55 million and 400 million kilometers depending on where Earth and Mars are relative to the sun. At the time of writing, Mars is around 200 million kilometers away, and in any case, it is extremely difficult, expensive and dangerous to get to. For these reasons, it is sometimes more sensible to investigate the red planet through simulations here on Earth than it is to send an expensive space probe or, perhaps one day, people.

Keisuke Nishida, an Assistant Professor from the University of Tokyo’s Department of Earth and Planetary Science at the time of the study, and his team are keen to investigate the inner workings of Mars. They look at seismic data and composition, which tell researchers not just about the present state of the planet, but also about its past, including its origins.

“The exploration of the deep interiors of Earth, Mars and other planets is one of the great frontiers of science,” said Nishida. “It’s fascinating partly because of the daunting scales involved, but also because of how we investigate them safely from the surface of the Earth.”

For a long time it has been theorized that the core of Mars probably consists of an iron-sulfur alloy. But given how inaccessible the Earth’s core is to us, direct observations of Mars’ core will likely have to wait some time. This is why seismic details are so important, as seismic waves, akin to enormously powerful sound waves, can travel through a planet and offer a glimpse inside, albeit with some caveats.

“NASA’s Insight probe is already on Mars collecting seismic readings,” said Nishida. “However, even with the seismic data there was an important missing piece of information without which the data could not be interpreted. We needed to know the seismic properties of the iron-sulfur alloy thought to make up the core of Mars.”

Nishida and his team have now measured the velocity of what are known as P-waves (one of two types of seismic wave, the other being S-waves) in molten iron-sulfur alloys.

“Due to technical hurdles, it took more than three years before we could collect the ultrasonic data we needed, so I am very pleased we now have it,” said Nishida. “The sample is extremely small, which might surprise some people given the huge scale of the planet we are effectively simulating. But microscale high-pressure experiments help exploration of macroscale structures and long time-scale evolutionary histories of planets.”

A molten iron-sulfur alloy just above its melting point of 1,500 degrees Celsius and subject to 13 gigapascals of pressure has a P-wave velocity of 4,680 meters per second; this is over 13 times faster than the speed of sound in air, which is 343 meters per second. The researchers used a device called a Kawai-type multianvil press to compress the sample to such pressures. They used X-ray beams from two synchrotron facilities, KEK-PF and SPring-8, to help them image the samples in order to then calculate the P-wave values.
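
For a sense of scale, the snippet below combines the measured speed with an assumed, illustrative Mars core radius of roughly 1,800 kilometers (the radius is not a value from this study):

```python
v_p = 4680.0          # m/s, measured P-wave speed in molten Fe-S at 13 GPa
core_radius = 1.8e6   # m, an assumed illustrative Mars core radius (~1,800 km)

crossing_time = 2 * core_radius / v_p          # straight shot through the core
print(f"~{crossing_time / 60:.0f} minutes to cross the core")   # ~13 minutes
print(f"{v_p / 343.0:.1f}x the speed of sound in air")          # ~13.6x
```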

“Taking our results, researchers reading Martian seismic data will now be able to tell whether the core is primarily iron-sulfur alloy or not,” said Nishida. “If it isn’t, that will tell us something of Mars’ origins. For example, if Mars’ core includes silicon and oxygen, it suggests that, like the Earth, Mars suffered a huge impact event as it formed. So, what is Mars made of and how was it formed? I think we are about to find out.”

Story Source:

Materials provided by University of Tokyo. Note: Content may be edited for style and length.


Categories
ProgrammableWeb

Why You Should Re-evaluate the Strength of Your API Security

APIs are the core building blocks of digital businesses—assembling data, events and services from within the organization, throughout ecosystems, and across devices. Now with organizations moving more of their business online in the wake of COVID-19, those APIs are increasingly being exposed to external groups, whether to other departments, customers or enterprises in their partner network. This exposure not only raises the stakes for protecting users and data but also makes APIs more vulnerable to security attacks.
 
In light of these changes, it is important for software architects and developers to review the API security mechanisms they have in place with their API management implementations and evaluate whether it’s time to put more robust protections in place. Let’s look at the four main authentication approaches to securing APIs as part of an API management implementation, as well as options to augment these approaches either during the design or runtime of an API.
 

Basic Authentication

Basic authentication is the easiest and most straightforward method. The sender places a username and password into the request header, and the username and password are encoded with Base64. This authentication method does not require cookies, session IDs, login pages, or other such special solutions, because it uses the HTTP or HTTPS header itself. It also doesn’t need any handshakes or complex response systems.
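
A minimal sketch of what this looks like in practice, assuming a hypothetical endpoint and the widely used Python requests library:

```python
import base64
import requests  # third-party: pip install requests

# https://api.example.com is a placeholder endpoint, not a real service.
username, password = "alice", "s3cret"
token = base64.b64encode(f"{username}:{password}".encode()).decode()

# The credentials travel in the Authorization header -- encoded, not encrypted,
# which is why basic authentication should only ever be used over HTTPS.
resp = requests.get(
    "https://api.example.com/orders",
    headers={"Authorization": f"Basic {token}"},
)
print(resp.status_code)
```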
 
The simplicity of basic authentication means it can be an appropriate method for securing APIs where you have a system-to-system communication that happens in an internal network, for example with an Internet of Things (IoT) solution.
 
However, basic authentication comes with some caveats. First, credentials are encoded and not encrypted. As a result, it is easy to retrieve the username and password. For this reason, developers should only use basic authentication on HTTPS, not plain HTTP. Additionally, with this method, there is no policy on refreshing user credentials. So, if the user credentials are changed, the client applications need to be changed as well.
 

OAuth 2.0

Open Authorization (OAuth) 2.0 is an open standard for access delegation that is used for token-based authentication and authorization. With this approach, the user logs into a system, and that system requests authentication, usually in the form of a token. The user will then forward this request to an authentication server, which will either reject or allow the authentication.
 
From here, the token is provided to the user, and then to the requestor. Thereafter, the token can be used and validated without the user’s involvement until it expires. The token has a scope defining where it can be used, so a token bound to a particular resource cannot be used for all the resources of an API. Once the lifetime of the access token expires, the requestor has to refresh the token to obtain a new one; this refresh mechanism is something basic authentication lacks.
 
Fundamentally, OAuth 2.0 is a much more secure and powerful system because of the scope and validity period. This standard is used by many technology providers, such as Google, Facebook, and Twitter.
 
There are several grant types a client application can use to acquire an access token, including client credentials, authorization code, password grant, New Technology LAN Manager (NTLM) grant, Security Assertion Markup Language (SAML) grant, and refresh grant. Password grant is similar to basic authentication in that a user needs to use their credentials to get an access token. Meanwhile, authorization code is the strongest grant type.
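
As an illustration, here is a minimal sketch of the client credentials grant. The token endpoint, client ID, secret, and scope are placeholders rather than any particular vendor’s API:

```python
import requests  # third-party: pip install requests

# Placeholder token endpoint and client credentials.
resp = requests.post(
    "https://auth.example.com/oauth2/token",
    data={"grant_type": "client_credentials", "scope": "orders:read"},
    auth=("my-client-id", "my-client-secret"),  # client authenticates itself
)
access_token = resp.json()["access_token"]

# The bearer token, not the raw credentials, now accompanies each API call.
api_resp = requests.get(
    "https://api.example.com/orders",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(api_resp.status_code)
```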
 
An OAuth 2.0 access token can take one of two forms: an opaque token or a JSON Web Token (JWT). A JWT is a self-contained access token that carries all the information required to validate it. When a client presents an opaque access token, the gateway has to call the key manager to validate it.
 
By contrast, with a JWT access token, the gateway itself can validate the token without going to the key manager. This is very important in a locked-down environment where the gateway is not connected to other components. The drawback of OAuth 2.0 is that client applications have to implement access-token retrieval, refreshing, and so on, making them somewhat more complex.
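
The following sketch illustrates local JWT validation. To stay self-contained it signs and verifies with a shared secret (HS256); real gateways more typically verify an RS256 signature using the issuer’s public key. It assumes the third-party PyJWT library, and all names are placeholders:

```python
import time
import jwt  # third-party: pip install PyJWT

# Demo only: mint a token with a shared secret so the example runs end to end.
secret = "demo-signing-key"
token = jwt.encode(
    {"sub": "alice", "aud": "https://api.example.com",
     "exp": int(time.time()) + 3600},
    secret,
    algorithm="HS256",
)

# The gateway validates the token locally -- no call to the key manager needed.
claims = jwt.decode(
    token,
    secret,
    algorithms=["HS256"],              # pin the algorithm; never trust the header
    audience="https://api.example.com",
)
print("token valid for subject:", claims["sub"])
```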
 

API Keys

API keys are a popular option, as they require less effort than developing an OAuth 2.0 flow. In this method, a unique generated value is assigned to each first-time user, signifying that the user is known. Each time the user re-enters the system, their unique key is used to verify that they’re the same user as before. Depending on the API key implementation of different API management providers, the key can vary: it could be a JWT access token, an opaque token, or the consumer key of an OAuth 2.0 application.
 
An API key can be sent as a header or as a query parameter in the URL. However, sending it as a header value is more secure, since an API key in a query parameter is easier for attackers to discover.
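
A short sketch of both options, assuming a placeholder endpoint and the common (but not universal) X-API-Key header convention:

```python
import requests  # third-party: pip install requests

API_KEY = "d9f2..."  # placeholder key

# Preferred: the key travels in a header, staying out of URLs and access logs.
requests.get("https://api.example.com/v1/reports",
             headers={"X-API-Key": API_KEY})

# Works, but discouraged: query parameters end up in logs, proxies, and history.
requests.get(f"https://api.example.com/v1/reports?api_key={API_KEY}")
```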
 
The API key approach is widely used in the industry and has become somewhat of a standard due to its simplicity. And, while API keys are recommended for use with system-to-system communications, they present too many security risks when used for authenticating users.
 

Mutual SSL

In mutual Secure Sockets Layer (SSL) authentication, a client verifies the identity of the server, and the server validates the identity of the client so that both parties trust each other. Using this approach, both the API gateway and the clients that connect to it are well known. Therefore, it is recommended for use in scenarios requiring tight security and/or server-to-server communications.
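
With the Python requests library, mutual SSL looks roughly like this; the certificate paths and URL are placeholders:

```python
import requests  # third-party: pip install requests

# A minimal mutual-SSL sketch; all file paths and the URL are placeholders.
resp = requests.get(
    "https://partner-api.example.com/v1/settlements",
    cert=("client.crt", "client.key"),  # client proves its identity to the server
    verify="server_ca.pem",             # client verifies the server's certificate
)
print(resp.status_code)
```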
 
Determining the best authentication option for securing APIs will depend on the specific use case. Key questions to consider are:
1. Is tight security for the system needed?
2. Are either system-to-system or server-to-server communications involved?
3. Is access delegation required?
4. Does the communication between the API gateway and the clients happen in an internal network?

Beyond Authentication

Authentication is at the heart of securing APIs. But, depending on how extensively an API is used, it is worth considering further security measures that assess how secure an API is or flag a potential attack.
 
One approach is to audit the Swagger definition of an API before publishing it. With technology like API Security Audit from 42Crunch, an API developer can get a report on how secure the API is based on OpenAPI format requirements, security, and data validation. Using the report, the developer can eliminate any existing security loopholes in an API by tracking exactly where an issue is and taking corrective actions.
 
Another approach is to use artificial intelligence (AI) based security analysis at runtime to detect any attempted attacks. Here, the API gateway will intercept API requests and apply any policies as usual, but it will also send API request metadata to an API security analysis tool, such as PingIntelligence for APIs. The AI-driven tool will check the validity of the request, search for any abnormal access patterns, and confirm if the request is OK or if it is suspect and therefore needs to be blocked by the API gateway.
 
Examples of API attacks that can be reported and blocked using this approach include credential-stuffing attacks on login systems; botnets scraping data; layer 7 distributed denial-of-service (DDoS) attacks; the use of stolen cookies, tokens, or API keys; and rogue insiders exfiltrating data in small amounts over time.

Conclusion

Early business surveys and our own discussions with customers across a range of industries suggest that their expansion of remote collaboration and communications, automation, and digital product and service offerings will become the new normal. With APIs driving many of these uses, now is a good time for architects and developers to re-evaluate and update their API security strategies.

Author: Pubudu-Gunatilaka

Categories
UnrealEngine

Two-person studio creates robust character creation tool to fuel CHIKARA: Action Arcade Wrestling

Developed by a core team of just two people, CHIKARA: Action Arcade Wrestling represents a blend of old-school wrestling games infused with new tech. With this being VICO Game Studio’s first Unreal Engine-powered game, we spoke with Executive Producer David Horn and Lead Programmer and Producer Eugene Tchoukhrov to see how they handled the transition. 

With CHIKARA: Action Arcade Wrestling being the studio’s third wrestling game, what have you learned from the first two titles that you’ve built upon for CHIKARA?

Executive Producer David Horn: Well, the first game, Action Arcade Wrestling, was done completely by one person. The team who worked on this game only came together during the development of the second. So, there was quite a bit of education needed to describe the vision of the first game. 

By the time we were ready for the third, CHIKARA: Action Arcade Wrestling, we had already experienced many of the pitfalls in developing a wrestling game, especially one that includes a creation suite. Once wrestlers can be created by users, an entirely different approach needs to be taken. Every decision must be widened to allow for almost any possibility of content. There’s a good amount that needs to be abstracted; almost like writing a play without knowing any of the main characters. 

With varied wrestling games throughout the years, were there any titles that you particularly drew inspiration from? 

Horn: WWF WrestleFest was our main influence, but our game really draws from elements of a lot of wrestling games that came out in the 90s. The darker-lit crowd was inspired by WWF WrestleMania for SNES, and the over-the-top moves were inspired by Saturday Night Slam Masters. We were also inspired to marry the elements of those games with grappling systems from the 64-bit era. 

What’s interesting, at least to us, is that saying our game is an arcade pro wrestling game seems to be a very specific niche. However, within that genre, there has already been a wide spectrum of game styles. The aforementioned WrestleFest was very “arcadey” but it was still grounded in real-world physics, as opposed to WWF WrestleMania Arcade, which literally had a wrestler’s arms morphing into razor blades. The more recent WWE All-Stars, on the other hand, sort of struck a middle ground between realism and being over-the-top. 

Separating itself from most other wrestling games, CHIKARA features power-ups that enable temporary stat boosts and allows players to perform superhuman abilities. What drew the studio to include those elements? 

Horn: Well, even the original WWF game for the Nintendo saw characters getting power-ups, albeit very bizarrely. There have been other instances, mostly in the Japanese games, where players would earn certain items to get certain boosts. I remember as a kid playing M.U.S.C.L.E. for the NES and random flashing orbs would roll into the ring to make your character more powerful for a time.

Once we started to play around with the idea, it snowballed into “how could this not only work in the game, but differently for each match type?” For instance, during a battle royal, if you see a person trying to eliminate an opponent, looking at what power-up you might want to steal can determine which wrestler you help.

What drew the studio to the game’s cel-shaded look?

Horn: Once we started experimenting with the look, we decided to go with the cel-shaded aesthetic. For one thing, it made it easier for users to generate content since they didn’t have to worry about spending so much time making realistic versions of wrestlers. Secondly, it really fit the genre we were going for; for instance, when you look at the game and its cel-shaded appearance, you know instantly that it’s probably not a slow simulation game. And finally, it completely fits the theme of the CHIKARA wrestling organization, as they’re known as being a “comic book [that’s] come to life.”
With the game featuring hundreds of hand-made animations, can you touch upon how you created them?

Horn: We met an amazingly talented animator named Erik Novak during the development of the second game. We contracted him to do a majority of the animations for CHIKARA: Action Arcade Wrestling. We were very blessed to work with him.

One of the crazy parts about making a Pro Wrestling game is that once the animations have been… well… animated, you’re about 10 percent done. The animation then has to be imported, categorized, and converted into an Anim-Montage, and assigned another animation for the “receive” counterpart. Then that move has a reversal which then, in turn, has a “reversal receive” move to go along with it. Then each move has branch points to define reversal points, sound effects, rope bounces, special effects, points in the animation where tables, wrestlers, or other surrounding entities will be impacted, and more. Suffice it to say, it’s quite an undertaking.

With the free Wrestle Factory suite that you’ve created for the game, CHIKARA features some of the most robust character customization options out of any wrestling title yet. Can you talk about the importance of this inclusion and explain how you implemented it?

Horn: We didn’t want to release a wrestling game without a create-a-wrestler mode. There are some games that decide not to, and we totally get that, but we felt that we could create something that could really branch us out into some creative areas that indie wrestling games might not always go. When you look on YouTube and see a match between superheroes and/or old-time wrestling favorites, you might think “that’s pretty cool for being a small indie wrestling game” because not many of them have in-depth creation tools. 

But to take things a step further… seeing a match between a soda machine and a pencil? Now we’re getting the “Wait… What?!” reactions, and that always makes us smile. We’re showing folks things they’ve never seen before in a wrestling game or with user-generated content in general.

Lead Programmer and Producer Eugene Tchoukhrov: [Regarding the character customization feature,] we implemented it as a separate, special build of the game to help us streamline development and iteration, and to make sure that the codebases are in sync, which is very important. 

We wanted players on all platforms to be able to download and enjoy creations made by anyone else, which meant that we’d need a unified pipeline for these creations across all platforms. This was probably the most important piece to test and verify before we committed to developing the creation tool. We had to develop three parts of the game simultaneously: the creation tool (the bit that allowed the actual creation, which included texture processing, morphing, accessory creation, move list preview/setup, serialization, metadata generation, and uploading), the server side (the bit that stored uploaded creations and handled queries and retrieval), and the game-side parts (the bit that downloaded and loaded creations into a match). It was an interesting and challenging few months putting together the initial versions of these three creation system parts.
Considering this is the studio’s first UE4 game, how was the transition to the engine?

Tchoukhrov: Having a AAA level engine at our disposal was fantastic. We no longer had to spend time reinventing the wheel putting in basic pipelines or rendering back-end stuff. We pretty much hit the ground running in that regard. Having source [code] access is probably the best thing about UE since it allowed us to fix bugs in the engine immediately, tweak core parts of the engine to our needs and, most importantly, add critical features to our game—like the morphing of NvCloth assets.

What made UE4 a good fit for the game?

Tchoukhrov: As mentioned earlier, having a AAA level engine that was easy to work with was tremendous. But what really made it a good fit was the ability to start making the game right away and having complete source access to tweak the engine to our needs. It has saved us countless times when we ran into a pesky bug or needed to add a unique feature to the engine. 

Do you have any favorite UE4 tools or features?

Tchoukhrov: DataTables. They are awesome and we use them everywhere to make as much of our game as dynamic as we can. It makes adding new modes a breeze and makes tweaking gameplay quick and easy. Oh, and Blueprints are cool as well! 
Eugene, as someone with cerebral palsy, can you talk about what it’s been like to have a healthy and robust career as a video game programmer?

Tchoukhrov: It has been a pretty interesting ride so far and I’m very much enjoying it. I don’t think it has been too different from anyone else’s career in this field, to be honest. Sure, I work around physical hurdles to do what I do best, but at the end of the day, I’m doing it all because I love it; because seeing a bunch of lines of code turn into a suplex on-screen makes me smile, and because seeing other people play and enjoy the game you put countless hours into is an incredible experience. It makes the effort to work around the physical hurdles all worth it.

The game features a streamlined two-button control scheme. Can you speak to why you went with this accessible approach and elaborate on how you designed the combat system in general?

Horn: The two-button control scheme comes directly out of the arcades. Folks didn’t have the time to learn a complicated system of buttons after they put in their quarters. So, we wanted to capture that and further separate our game from what’s out there currently. We looked at games like Super Smash Bros to see how we can get the most out of it. We were able to lean on the fact that our game is played on a 2D plane. Adding modifiers for the directional buttons/arrows allowed us to expand what we could do.

What was the biggest challenge developing the game and how did you overcome it? 

Tchoukhrov: The fighting/move system. Wrestling games are up there as some of the more difficult games to put together due to the myriad of scenarios that must be accounted for and handled correctly during a match. 
The studio did a lot of work to create realistic rope and cloth physics for the game, even going as far as creating a plugin on the Unreal Marketplace. Can you shed some light on what this work entailed? 

Tchoukhrov: To create the realistic rope and cloth physics, countless late nights were spent reading whitepapers with very long physics equations and figuring out the best way to convert them into working code. This was followed by more late nights converting the working simulation code into much faster SIMD code which was “fun…” and then adding multi-threading to the simulation, which was even more “fun.” Early on in the game’s development, [we] rigged up a simulated rope with collisions to test out as the ring ropes and it looked fantastic. We were using static animations until then. This little weekend prototype evolved into what is now a multi-platform soft-body simulation system that can simulate rope, cloth and volume-preserving soft-bodies (this is a recent development that is releasing as a free update soon). This plugin has been used in a number of games, most recently in Deliver Us The Moon. It has been super exciting and humbling to see how other games utilize the plugin.
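
For readers curious about the underlying idea, the sketch below implements a bare-bones rope using Verlet integration with iterative distance constraints, a textbook technique behind many game rope simulations. It is a plain-Python illustration of the general approach, not VICO’s SIMD-optimized plugin code:

```python
# A bare-bones Verlet rope: each particle remembers its previous position
# (velocity is implicit), and distance constraints are relaxed iteratively.
GRAVITY = -9.8
SEGMENT = 0.25        # rest length between particles, meters
ITERATIONS = 10       # more constraint iterations => stiffer rope

class Rope:
    def __init__(self, n):
        self.pos = [(i * SEGMENT, 0.0) for i in range(n)]
        self.prev = [p for p in self.pos]

    def step(self, dt):
        # Verlet integration: new_pos = 2*pos - prev + accel*dt^2.
        for i in range(1, len(self.pos)):    # particle 0 stays pinned
            x, y = self.pos[i]
            px, py = self.prev[i]
            self.prev[i] = (x, y)
            self.pos[i] = (2 * x - px, 2 * y - py + GRAVITY * dt * dt)
        # Constraint relaxation: nudge neighbors back toward the rest length.
        for _ in range(ITERATIONS):
            for i in range(len(self.pos) - 1):
                (x1, y1), (x2, y2) = self.pos[i], self.pos[i + 1]
                dx, dy = x2 - x1, y2 - y1
                dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
                push = 0.5 * (dist - SEGMENT) / dist
                if i > 0:                    # split the correction between both
                    self.pos[i] = (x1 + dx * push, y1 + dy * push)
                    self.pos[i + 1] = (x2 - dx * push, y2 - dy * push)
                else:                        # particle 0 is pinned: move only i+1
                    self.pos[i + 1] = (x2 - 2 * dx * push, y2 - 2 * dy * push)

rope = Rope(12)
for _ in range(120):      # simulate two seconds at 60 fps
    rope.step(1 / 60)
print(rope.pos[-1])       # the free end has swung down under gravity
```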

In addition to contributing to the Unreal Marketplace, did the studio leverage anything from it? 

Horn: The initial cel-shading came from the Marketplace as did several of the particle effects. The Marketplace was great for a small team because it really allowed us to be educated quickly. What we mean is that instead of spending tons of time understanding everything about UE’s particle systems, for instance, we could purchase something that was close to our vision, dissect it, and then use that information to create our own original Material.

Thanks for your time. Where can people learn more about CHIKARA: Action Arcade Wrestling?

https://www.actionarcadewrestling.com/
https://www.facebook.com/ActionArcadeWrestling
https://twitter.com/ChikaraAAW
 

Author: Jimmy Thang

Categories
3D Printing Industry

B9Creations releases medical 3D printer, establishes new Healthcare Division & Service Bureau

B9Creations, the South Dakota-based manufacturer of the B9 Core Series of DLP 3D Printers, has launched a new additive manufacturing Healthcare Division & Service Bureau as well as a 3D printer targeted at the medical industry. The B9 Core Med 500, released with the first two of the company’s suite of biocompatible materials, will support […]

Author: Tia Vialva

Categories
ScienceDaily

Tool for rapid breakdown of cellular proteins

Cellular functions depend on the functionality of proteins, and these functions are disturbed in diseases. A core aim of cell biological research is to determine the functions of individual proteins and how their disturbances result in disease.

One way to study protein functions is to examine the effects of rapidly removing them from cells. During the past years, researchers have developed several techniques to achieve this. One of these techniques is known as AID, or auxin-inducible degron. This method utilises the signalling of a class of plant hormones known as auxins to rapidly deplete individual proteins from cells.

The research group headed by Academy Professor Elina Ikonen at the University of Helsinki increased the speed and improved the hormone-dependency of the AID technique in human cells. The researchers were able to degrade the targeted cellular proteins within minutes.

In addition, the researchers expanded the potential uses of the technique to encompass several types of proteins. The method can also be employed in the acute degradation of proteins whose long-term absence cannot be tolerated by cells.

The study was published in the distinguished Nature Methods journal.

“The technique we have developed is useful primarily in research, but thanks to advances in gene technology it also has potential for novel diagnostic and therapeutic methods,” Elina Ikonen states.

Story Source:

Materials provided by University of Helsinki. Note: Content may be edited for style and length.


Categories
ScienceDaily

Temperatures of 800 billion degrees in the cosmic kitchen

When two neutron stars collide, the matter at their core enters extreme states. An international research team has now studied the properties of matter compressed in such collisions. The HADES long-term experiment, involving more than 110 scientists, has been investigating forms of cosmic matter since 1994. With the investigation of electromagnetic radiation arising when stars collide, the team has now focused attention on the hot, dense interaction zone between two merging neutron stars.

Simulation of electromagnetic radiation

Collisions between stars cannot be directly observed — not least of all because of their extreme rarity. According to estimates, none has ever been observed in our galaxy, the Milky Way. The densities and temperatures in merging processes of neutron stars are similar to those occurring in heavy ion collisions, however. This enabled the HADES team to simulate the conditions in merging stars at the microscopic level in the heavy ion accelerator at the Helmholtzzentrum für Schwerionenforschung (GSI) in Darmstadt.

As in a neutron star collision, when two heavy ions are slammed together at close to the speed of light, electromagnetic radiation is produced. It takes the form of virtual photons that turn back into real particles after a very short time. However, the virtual photons occur very rarely in experiments using heavy ions. “We had to record and analyze about 3 billion collisions to finally reconstruct 20,000 measurable virtual photons,” says Dr. Jürgen Friese, the former spokesman of the HADES collaboration and researcher at Laura Fabbietti’s Professorship on Dense and Strange Hadronic Matter at TUM.
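
That works out to a strikingly small yield per collision:

```python
collisions = 3e9          # recorded heavy-ion collisions
virtual_photons = 2e4     # reconstructed measurable virtual photons

print(f"{virtual_photons / collisions:.1e} per collision")        # ~6.7e-06
print(f"one in ~{collisions / virtual_photons:,.0f} collisions")  # one in ~150,000
```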

Photon camera shows collision zone

To detect the rare and transient virtual photons, researchers at TUM developed a special 1.5 square meter digital camera. This instrument records the Cherenkov effect: the name given to certain light patterns generated by decay products of the virtual photons. “Unfortunately the light emitted by the virtual photons is extremely weak. So the trick in our experiment was to find the light patterns,” says Friese. “They could never be seen with the naked eye. We therefore developed a pattern recognition technique in which a 30,000-pixel photo is scanned in a few microseconds using electronic masks. That method is complemented with neural networks and artificial intelligence.”

Observing the material properties in the laboratory

The reconstruction of thermal radiation from compressed matter is a milestone in the understanding of cosmic forms of matter. It enabled the scientists to place the temperature of the new system resulting from the merger of stars at 800 billion degrees Celsius. As a result, the HADES team was able to show that the merging processes under consideration are in fact the cosmic kitchens for the fusion of heavy nuclei.

Story Source:

Materials provided by Technical University of Munich (TUM). Note: Content may be edited for style and length.
