Categories
ScienceDaily

Future autonomous machines may build trust through emotion

Army research has extended the state-of-the-art in autonomy by providing a more complete picture of how actions and nonverbal signals contribute to promoting cooperation. Researchers suggested guidelines for designing autonomous machines such as robots, self-driving cars, drones and personal assistants that will effectively collaborate with Soldiers.

Dr. Celso de Melo, computer scientist with the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory at CCDC ARL West in Playa Vista, California, in collaboration with Dr. Kazunori Terada from Gifu University in Japan, recently published a paper in Scientific Reports in which they show that emotion expressions can shape cooperation.

Autonomous machines that act on people’s behalf are poised to become pervasive in society, de Melo said; however, for these machines to succeed and be adopted, it is essential that people are able to trust and cooperate with them.

“Human cooperation is paradoxical,” de Melo said. “An individual is better off being a free rider, while everyone else cooperates; however, if everyone thought like that, cooperation would never happen. Yet, humans often cooperate. This research aims to understand the mechanisms that promote cooperation with a particular focus on the influence of strategy and signaling.”

Strategy defines how individuals act in one-shot or repeated interaction. For instance, tit-for-tat is a simple strategy that specifies that the individual should act as his/her counterpart acted in the previous interaction.

Signaling refers to communication that may occur between individuals, which could be verbal (e.g., natural language conversation) and nonverbal (e.g., emotion expressions).

This research effort, which supports the Next Generation Combat Vehicle Army Modernization Priority and the Army Priority Research Area for Autonomy, aims to apply this insight in the development of intelligent autonomous systems that promote cooperation with Soldiers and successfully operate in hybrid teams to accomplish a mission.

“We show that emotion expressions can shape cooperation,” de Melo said. “For instance, smiling after mutual cooperation encourages more cooperation; however, smiling after exploiting others — which is the most profitable outcome for the self — hinders cooperation.”

The effect of emotion expressions is moderated by strategy, he said. People will only process and be influenced by emotion expressions if the counterpart’s actions are insufficient to reveal the counterpart’s intentions.

For example, when the counterpart acts very competitively, people simply ignore, and even mistrust, the counterpart’s emotion displays.

“Our research provides novel insight into the combined effects of strategy and emotion expressions on cooperation,” de Melo said. “It has important practical application for the design of autonomous systems, suggesting that a proper combination of action and emotion displays can maximize cooperation from Soldiers. Emotion expression in these systems could be implemented in a variety of ways, including via text, voice, and nonverbally through (virtual or robotic) bodies.”

According to de Melo, the team is very optimistic that future Soldiers will benefit from research such as this as it sheds light on the mechanisms of cooperation.

“This insight will be critical for the development of socially intelligent autonomous machines, capable of acting and communicating nonverbally with the Soldier,” he said. “As an Army researcher, I am excited to contribute to this research as I believe it has the potential to greatly enhance human-agent teaming in the Army of the future.”

The next steps for this research include pursuing further understanding of the role of nonverbal signaling and strategy in promoting cooperation and identifying creative ways to apply this insight to a variety of autonomous systems that have different affordances for acting and communicating with the Soldier.

Go to Source
Author:

Categories
ProgrammableWeb

Slash DGraph Brings Managed Backend and No Code Services to the GraphQL Ecosystem

DGraph, a San Francisco-based startup focused on providing a full-powered native GraphQL backend service that operates at global scale, announced on September 10, 2020, the release of its premier product, Slash GraphQL. According to Manish Jain, CEO of the company, “Slash GraphQL takes away the work of building a fast and scalable GraphQL backend.”

The Slash DGraph platform provides a GraphQL data storage service that sits on top of DGraph’s graph database along with a set of graphical tools that allow developers to do GraphQL programming activities that are typically done in code.

For example, the Slash DGraph Schema Builder allows developers to create GraphQL types by entering a type’s name and description in text boxes and then using an array of slider controls to configure the type under construction. (See Figure 1, below.)

Figure 1: The Slash DGraph Schema Builder allows users to create GraphQL types graphically.

Once GraphQL types are defined using the Slash DGraph Schema Builder, they are saved to the DGraph backend with nothing more than a click of the tool’s Deploy button. The required data storage capabilities are implemented automatically behind the scenes within the Slash DGraph service.

Developers can then add data to the DGraph backend using a feature called the API Explorer. Developers execute GraphQL queries and mutations that have been automatically generated by the Slash DGraph platform without having to write a single line of code. (See Figure 2, below.)


Figure 2: The Slash DGraph API Explorer allows developers to execute automatically generated GraphQL queries and mutations by selecting fields and adding data to text boxes without having to code.

Slash DGraph is intended to be a fully managed GraphQL service. It supports security based on OAuth, ACLs, and TLS. Also, the Slash GraphQL backend works with technologies such as Apollo GraphQL, Postman, React, and Angular.

Slash DGraph is presently used by companies such as VMware, Intuit, Siemens, and Overstock.com.

DGraph’s stated mission is to provide Google production-level scale and throughput to every developer working with GraphQL. As CEO Jain states, “With Slash GraphQL, developers click a button and are presented with a /graphql endpoint. They set their GraphQL schemas and immediately get a production-ready backend. Right away they can start querying and mutating data, without any coding whatsoever.”
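
To make that workflow concrete, here is a minimal sketch of what querying such a backend from Node.js might look like; the endpoint URL and the auto-generated queryProduct operation are assumptions for illustration, not details from the announcement.

// Hypothetical query against a Slash GraphQL /graphql endpoint.
// The URL and the queryProduct operation are assumptions for illustration.
const https = require('https');

const body = JSON.stringify({ query: '{ queryProduct { id name } }' });
const req = https.request('https://your-backend.dgraph.io/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' }
}, (res) => {
    let data = '';
    res.on('data', (chunk) => (data += chunk));
    res.on('end', () => console.log(JSON.parse(data)));
});
req.write(body);
req.end();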

Go to Source
Author: reselbob


Categories
ProgrammableWeb

Echosec Systems Releases API for Dark Web Data Access

Echosec Systems has launched an API, providing direct access to previously unavailable data feeds from fringe social, deep, and dark web sources. Some of the most imminent digital and physical threats now originate on obscure social networks and paste sites that are notoriously difficult to access and integrate into threat intelligence processes.

The Platform API was developed to meet intelligence community requirements for streamlined access to raw data from these less-regulated sources. These include sites increasingly relevant for counter-terrorism (4chan, Telegram), data breaches (Pastebin, DeepPaste), and other global and national security objectives; these sources are not available through existing commercial APIs.

Data crawled by the Platform API is available through existing Echosec Systems tools, Echosec and Beacon, and as an independent data solution for users with existing tooling and interfaces. Michael Raypold, Echosec Systems’ Chief Technology Officer, says, “The API delivers raw, real-time risk data in the form of a REST API to intelligence professionals, helping users get more value out of their other existing data sets.”

The API allows users to locate pertinent intelligence more efficiently through multilingual machine learning models. The models automatically detect and classify content within eight threat categories including hate speech and data disclosure. Users are able to combine Echosec Systems data with other feeds and query techniques as a bespoke solution. This could include, for example, automatically cross-referencing data points across disparate feeds, or developing their own machine learning models.

The Platform API knowledge base, documentation, and sample query builder are also available to current Echosec Systems customers with the launch.

Go to Source
Author: ProgrammableWeb PR

Categories
3D Printing Industry

3D Sierra Leone provides prostheses to amputees with support from SHINING 3D

3D Sierra Leone, a Dutch nonprofit organization, is providing customized 3D printed prostheses to patients in the West African country, with the support of SHINING 3D, a Hangzhou-headquartered 3D scanner and printer manufacturer. 3D Sierra Leone is dedicated to improving the lives of people in Sierra Leone who undergo amputations but are without access to […]

Go to Source
Author: Anas Essop

Categories
ScienceDaily

Asteroid 1998 OR2 to safely fly past Earth this week

A large near-Earth asteroid will safely pass by our planet on Wednesday morning, providing astronomers with an exceptional opportunity to study the 1.5-mile-wide (2-kilometer-wide) object in great detail.

The asteroid, called 1998 OR2, will make its closest approach at 5:55 a.m. EDT (2:55 a.m. PDT). While this is known as a “close approach” by astronomers, it’s still very far away: The asteroid will get no closer than about 3.9 million miles (6.3 million kilometers), passing more than 16 times farther away than the Moon.

Asteroid 1998 OR2 was discovered by the Near-Earth Asteroid Tracking program at NASA’s Jet Propulsion Laboratory in July 1998, and for the past two decades astronomers have tracked it. As a result, we understand its orbital trajectory very precisely, and we can say with confidence that this asteroid poses no possibility of impact for at least the next 200 years. Its next close approach to Earth will occur in 2079, when it will pass by closer — only about four times the lunar distance.

Despite this, 1998 OR2 is still categorized as a large “potentially hazardous asteroid” because, over the course of millennia, very slight changes in the asteroid’s orbit may cause it to present more of a hazard to Earth than it does now. This is one of the reasons why tracking this asteroid during its close approach — using telescopes and especially ground-based radar — is important, as observations such as these will enable an even better long-term assessment of the hazard presented by this asteroid.

Close approaches by large asteroids like 1998 OR2 are quite rare. The previous close approach by a large asteroid was made by asteroid Florence in September 2017. That 3-mile-wide (5-kilometer-wide) object zoomed past Earth at 18 lunar distances. On average, we expect asteroids of this size to fly by our planet this close roughly once every five years.

Since they are bigger, asteroids of this size reflect much more light than smaller asteroids and are therefore easier to detect with telescopes. Almost all near-Earth asteroids (about 98%) of the size of 1998 OR2 or larger have already been discovered, tracked and cataloged. It is extremely unlikely there could be an impact over the next century by one of these large asteroids, but efforts to discover all asteroids that could pose an impact hazard to Earth continue.

JPL hosts the Center for Near-Earth Object Studies (CNEOS) for NASA’s Near-Earth Object Observations Program in NASA’s Planetary Defense Coordination Office.

More information about CNEOS, asteroids and near-Earth objects can be found at:

https://cneos.jpl.nasa.gov

For more information about NASA’s Planetary Defense Coordination Office, visit:

https://www.nasa.gov/planetarydefense

For asteroid and comet news and updates, follow @AsteroidWatch on Twitter:

https://twitter.com/AsteroidWatch

Story Source:

Materials provided by NASA/Jet Propulsion Laboratory. Note: Content may be edited for style and length.

Go to Source
Author:

Categories
ProgrammableWeb

Google Announces AdMob API Open Beta

Google has announced an open beta release of an all-new AdMob API. Google is providing the API specifically for application providers using AdMob, the company’s platform for promoting and monetizing mobile applications. The new API provides data that more accurately mirrors the information that users would find in the AdMob user interface.

The primary goal of offering the new API is to provide data that is more accurate and consistent with the information that you would find in the AdMob UI. However, Google also mentions plans to provide quicker access to future technology, including JSON REST, via the new API. The company also mentions plans to provide programmatic access to mediation reporting via the API. 

The API is currently available to all AdMob users, and interested developers should check out the getting started guide. Google is also encouraging early adopters to contribute feedback. The AdMob API is intended to fully replace the AdSense API where application providers are concerned.

Go to Source
Author: KevinSundstrom

Categories
3D Printing Industry

CRP continues to support Energica as MotoE World Cup rescheduled

Italian electric motorbike manufacturer Energica will once again be providing its Ego Corsa motorcycles and technical support for the FIM Enel MotoE World Cup competition in 2020. Originally set to begin on 26 March 2020, the event organizers have been forced to reschedule the race calendar in response to the coronavirus pandemic. Replacement dates are […]

Go to Source
Author: Anas Essop

Categories
ProgrammableWeb

How to Build a Streaming API Using gRPC

gRPC is an alternative architectural pattern to REST, GraphQL, and other legacy patterns for providing and consuming APIs. It is becoming a popular way for companies to create APIs intended to run at web scale, in contrast to architectures that rely on text-based data formats such as JSON or XML. gRPC uses the Protocol Buffers binary format to exchange data between API providers and API consumers (i.e., applications). Even Microsoft recently started to experiment with support for the technology in .NET and Azure. Using Protocol Buffers makes APIs based on gRPC fast. Also, gRPC takes advantage of the bi-directional communication feature of HTTP/2 to implement continuous message exchange, which is in effect two-way streaming.

gRPC brings a whole new dimension to API data streaming and is a viable alternative to other streaming API approaches such as Webhooks, Websockets and GraphQL subscriptions. For many, it’s a game-changing technology worthy of investigation.

In this article, we’re going to cover the basic concepts that drive gRPC. Then, we’re going to cover how to actually implement a gRPC API that was custom made to illustrate various concepts and techniques about gRPC. We’ll be doing the coding in Node.js.

Please be advised that gRPC is a specification that has been implemented in a variety of programming frameworks. Node.js is just one of many. If you want to implement gRPC in a language you’re familiar with, there are implementations in Go, Java, and C#/.NET, to name a few.

Understanding gRPC

The origins of gRPC start at Google as a specification the company devised to implement inter-service communication in a more efficient manner. gRPC has its ancestral roots in the concept of a Remote Procedure Call (RPC), a term coined by computer scientist Bruce Jay Nelson in 1981 while he was working at Xerox PARC and pursuing a Ph.D. at Carnegie Mellon University.

Essentially, RPC is when code that’s executing in a function in one process invokes a function running in another process. That second process can be on the same machine or it can be on a separate machine half a continent away.

There are a number of legacy technologies in play that support RPC. Java has Remote Method Invocation (RMI). .NET has XML-RPC.NET. There’s even a version of RPC that runs under COBOL, COBOL-RPC. And, there are others.

So, while gRPC is a seemingly new technology, the reality is that its foundation has been around for a while. What makes gRPC different is that, in addition to making RPC a mainstream technology, gRPC is intended to run over the internet on a standard protocol, HTTP/2, using the well-known serialization format Protocol Buffers.

Yet as novel as gRPC might seem, the essentials of RPC (without the “g”) are still there. (See Figure 1, below.)

Figure 1: gRPC is based on RPC, a network programming model for exchanging data with a function that is running in another process on a remote machine.

When you boil it all down, it’s about one function calling another function somewhere on a network using a message format that is well-known to both caller and receiver.

Now that you have a bit of background on the predecessors to gRPC, let’s take a look at the specifics of using it to implement a streaming API.

Implementing a Web-Scale gRPC Streaming API

As mentioned previously, gRPC is the next-generation version of RPC. gRPC allows a client to call a function running on another machine across the internet, addressed by an IP address or DNS name. The actual exchange of information takes place over HTTP/2 which, for all intents and purposes, is version 2 of the web (most of the web as we know it today runs on HTTP/1.1). The message used in a given exchange is compiled into the Protocol Buffer binary format. The exchange can take place synchronously as a standard request/response or in a continuous, ordered exchange using the streaming feature of HTTP/2.

Working with the IDL, Types, and Functions

gRPC uses a formal type definition to describe the information (e.g., string, int32, bool) that will be exchanged between the calling client and receiving server. Types are defined as messages. gRPC messages are defined using the Interface Description Language (IDL). When it comes time to use these messages in a real-time information exchange, they’ll be compiled from the original text of the message, based on a format described in the IDL, into the binary Protocol Buffer format. In addition to defining messages in IDL, you define function signatures too. These signatures make it possible for the client and server to access the IDL definitions in order to facilitate data exchange.

Figure 2, below, shows an excerpt from the IDL that describes the Seat and VenueRequest messages used in our sample Seat-Saver-gRPC demonstration application that accompanies this article. The basic idea behind this sample application is to check a venue for available seats and to block or “save” the seats while the end user is completing his or her transaction. It’s a perfect use case for streaming: the status of other seats is changing all the time, and as other seats are blocked or released, currently engaged users should be updated in real time.

The IDL describes a gRPC service, SeatSaverService. The service, SeatSaverService, in turn, defines a function, GetSeats(). The function GetSeats() takes a parameter which is a VenueRequest message. The VenueRequest message has a property called venueId that describes the unique identifier of a Venue that contains the seats of interest. (A Venue is a custom-defined organizational unit that contains Seats; for example, seats in a theater, where the theater is an instance of a Venue.) The VenueRequest message also has a property called authenticationId, a string that identifies the calling client.

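Because Figure 2 is reproduced as an image, the excerpt it shows can be reconstructed, at least approximately, from the definitions discussed here and repeated later in the article:

service SeatSaverService {
    /* Returns the seats in the requested venue as a continuous stream of Seat messages */
    rpc GetSeats (VenueRequest) returns (stream Seat) {}
}

/* Identifies the venue of interest and the calling client */
message VenueRequest {
    string venueId = 1;
    string authenticationId = 2;
}
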
Figure 2: gRPC uses IDL to define the structure of the messages and function signatures that are implemented for information exchange between client and server.

Notice the IDL definition of the function, GetSeats:

rpc GetSeats (VenueRequest) returns (stream Seat) {}

The function definition above describes a remote procedure call function, GetSeats(), that expects a parameter of message type VenueRequest and, in the course of a successful operation, will return a continuous stream of Seat objects.

The interesting thing to understand about working with gRPC is the use of IDL in conjunction with Protocol Buffers as the lingua franca of data exchange. When it’s time to call a function on a gRPC server, the client compiles the function’s parameters into a message (a process known in the API world as “serialization”) and sends its request in the binary Protocol Buffer format. (See Figure 3, below.)

Figure 3: gRPC exchanges data between functions in the Protocol Buffers binary format.

The remote function will decompile the incoming request (a process more commonly known in the API world as “deserialization”) and process it. Then, the function will serialize the result data into Protocol Buffers and send it back to the calling client. The client then deserializes the incoming binary data into useful information for further processing.

The benefit of using binary messages is that they tend to be smaller than their text-based equivalents and hence allow more compact transmission of data over a network. Also, working with data serialized in a binary format puts less burden on the computational resources of servers and clients. While it’s true that CPU processing overhead is incurred when deserializing binary data to text or numbers, many companies pick up computational efficiency by avoiding deserialization of incoming data altogether. They’ll just create computational algorithms that work directly with the bits from a message. This may seem like a lot of work, but when you’re looking to reap efficiencies on the order of nanoseconds (which add up to seconds, minutes or hours at scale), the programming work that goes with processing bits instead of text and numbers is a minor expense compared to the time savings to be gained.

Finally, using a binary format lends itself well to streaming data between client and server and vice versa. Using gRPC to facilitate bi-directional streaming adds a new dimension to working with APIs.

Working with Streams

When it comes to returning an array of objects from a gRPC endpoint, an array can be returned as a static collection of messages or as a stream of messages that get delivered continuously one after the other. Unlike REST, which often requires multiple trips to the network to get all the data from a large collection (REST doesn’t inherently include streaming as a part of its architecture the way gRPC does), gRPC requires only a single network connection to deliver streamed data continuously over time. And, unlike GraphQL Subscriptions, which supports continuous messaging from the server to the client over a single network connection, gRPC specifies support for bi-directional streaming. This means that clients can stream data to the server and the server can stream data to the clients. Bi-directional streaming opens up a whole new set of possible use cases.

Defining streams is a straightforward undertaking in terms of IDL. Let’s again take a look at the code for the function GetSeats(). Notice that the function declares its return type to be a stream of Seat objects, according to the Seat message described above in Figure 2.

rpc GetSeats (VenueRequest) returns (stream Seat) {}

Take a look at Figure 4, below, which is an illustration of the open-source gRPC query tool BloomRPC. The tool executes the function GetSeats(). As you can see, the function is returning a stream of Seat objects.

Figure 4: BloomRPC is an open-source desktop client that allows developers to query gRPC APIs.
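
Outside of a desktop tool like BloomRPC, the same server-streaming call can be consumed programmatically. A minimal Node.js sketch might look like the following; the client object is constructed as shown later in the article, and the field values are illustrative only:

// Consume the GetSeats() server stream; the venueId and authenticationId values are illustrative
const call = client.GetSeats({ venueId: 'venue-001', authenticationId: 'abc-123' });
// Fires once for each Seat message as it arrives on the stream
call.on('data', (seat) => console.log(seat.section, seat.number, seat.status));
// Fires when the server closes the stream
call.on('end', () => console.log('All seats received'));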

As mentioned above, you can also stream from the client to a gRPC server. The IDL code below shows a fictitious function, BuyStock(), which submits a stream of stock tickers to be purchased and returns confirmation codes in a stream from the server.

rpc BuyStock (stream TickerSymbol) returns (stream Confirmation) {}
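
Although BuyStock() is fictitious, a Node.js sketch of how a client might use such a bi-directional call looks like this; the symbol field name is an assumption for illustration:

// Open a bi-directional stream for the fictitious BuyStock() function
const stockCall = client.BuyStock();
// Receive each Confirmation message as the server streams it back
stockCall.on('data', (confirmation) => console.log('Confirmed:', confirmation));
// Stream TickerSymbol messages to the server
stockCall.write({ symbol: 'ACME' });
stockCall.write({ symbol: 'XYZ' });
// Tell the server the client is done sending
stockCall.end();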

The thing to remember about gRPC is that it is a specification that requires types and functions to be defined in IDL. However, IDL declaration is only the first part of the process of implementing a gRPC API. Once the API is declared in IDL, it must be implemented in a specific programming language. Each implementation will have its own way of provisioning a gRPC API according to the IDL definition. (The code that accompanies this article demonstrates a Node.js-based implementation.)

Streaming Seats in the Seat Saver Demonstration API

Listing 1, below, shows the Node.js code implementing the gRPC function GetSeats() from the demonstration Seat Saver API using the npm library, grpc. The purpose of the gRPC function GetSeats() is to return information about the seats in a particular venue. GetSeats() returns each seat as a data packet in a stream.

The internal mechanisms of the grpc library will route calls made to the GetSeats() function defined in the IDL onward to the Node.js function, getSeats(), defined at line 44 in Listing 1 below.

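Because Listing 1 appears as an image, the following is only an approximate sketch of the described handler; the names follow the surrounding prose, but the line numbers cited in the text refer to the original listing, not to this sketch.

// Approximate sketch of the getSeats() handler described in the text
async function getSeats(call) {
    // Retrieve the Venue object identified in the incoming VenueRequest message
    const venue = await dataManager.getVenue(call.request.venueId);
    // Emit each seat into the server-side stream, one message at a time
    venue.seats.forEach((seat) => {
        call.write(mapSeatSync(seat._doc));
    });
    // Close the stream once every seat has been sent
    call.end();
}
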
Listing 1: Using the built-in streaming mechanism of the Node.js grpc package to implement the gRPC function, GetSeats().

The way that the Node.js function getSeats() works is to call dataManager.getVenue(call.request.venueId) to get a Venue data object as shown at line 46 of Listing 1. The dataManager object is custom to the Seat Saver API.

The call object at line 44 in Listing 1 above is created at runtime by the GRPC framework code and injected as a parameter into the getSeats() Node.js function. The call object has properties and methods that describe various aspects of a gRPC interaction between the caller and the remote function. For example, call.request.venueId represents the venueId property of the VenueRequest object defined in the IDL and is shown below.

message VenueRequest {
    string venueId = 1;
    string authenticationId = 2;
}

The IDL definition of GetSeats(), which we showed previously, is displayed again below to provide a recap of the use of the VenueRequest object as a function parameter.

rpc GetSeats (VenueRequest) returns (stream Seat) {}

Once we have a Venue object in hand, we run a forEach loop over the Venue.seats collection as shown at line 49 in Listing 1 above. Venue.seats is a server-side collection of all the seats in the Venue that we’re going to return to the calling client. But we’re not going to return all the seats as one big data dump in a single response. Instead, we’re going to return the seats in a data stream, as the IDL of the function shown above specifies.

We’re going to use the call object’s write() method to add each Seat object in the Venue.seats collection to the stream emitted from the gRPC server. This is done at line 53 in Listing 1 above. The call.write() function is the mechanism provided by the grpc library to facilitate sending data into a stream.

Then, after every Seat in Venue.seats has been added to the stream, the stream will be closed down using the function, call.end() at line 56 in Listing 1 above.

The Node.js function getSeats() calls a custom “housekeeping” function, mapSeatSync(seat._doc), at line 52. The purpose of mapSeatSync(seat._doc) is to transform data stored in the Seat Saver API’s MongoDB database into a format compatible with gRPC streaming. Also, within mapSeatSync(seat._doc) you’ll see a call to the function mapCustomerSync(seatData.customer) at line 29. The function mapCustomerSync(), defined at line 7 of Listing 1, transforms customer data coming from MongoDB into a format compatible with streaming out of gRPC.
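
Since the bodies of these helper functions are visible only in the Listing 1 image, here is a hedged sketch of what they plausibly do; the field names follow the Seat and Customer messages defined in Listing 2, below, and the actual implementations may differ:

// Transforms a MongoDB customer sub-document into the gRPC Customer message shape
function mapCustomerSync(customerData) {
    // An OPEN seat has no customer assigned to it
    if (!customerData) return null;
    return {
        firstName: customerData.firstName,
        lastName: customerData.lastName,
        email: customerData.email,
        created: customerData.created
    };
}

// Transforms a MongoDB seat document into the gRPC Seat message shape
function mapSeatSync(seatData) {
    return {
        id: seatData._id.toString(),
        number: seatData.number,
        section: seatData.section,
        status: seatData.status,
        changed: seatData.changed,
        created: seatData.created,
        customer: mapCustomerSync(seatData.customer)
    };
}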

Listing 2 below shows the IDL specification for the Venue, Seat and Customer messages that correspond to the data objects used in the Seat Saver API. Also, Listing 2 shows the definition for the Status enum which indicates if a seat is open, or in the process of being reserved or sold.

/* Describes a venue that has seats */
message Venue {
    string id = 1;
    string name = 2;
    string address = 3;
    string city = 4;
    string state_province = 5;
    string postal_code = 6;
    string country = 7;
    string changed = 8;
    string created = 9;
    repeated Seat seats = 10;
    string message = 11;
}


/* Describes a seat */
message Seat {
    string id = 1;
    string number = 2;
    string section = 3;
    Status status = 4;
    string changed = 5;
    string created = 6;
    Customer customer = 7;
    string message = 8;
}

/* Describes a possible status of a seat */
enum Status {
    RELEASING = 0;
    OPEN = 1;
    RESERVING = 2;
    RESERVED = 3;
    SELLING = 4;
    SOLD = 5;
}

/* Describes a customer associated with a seat. */
message Customer {
    string firstName = 1;
    string lastName = 2;
    string email = 3;
    string created = 4;
    string message = 5;
}

Listing 2: The IDL-defined messages that correspond to the data objects used in the Seat Saver API.

The important thing to understand about the code samples shown in Listings 1 and 2 is that they describe the function and objects that make up the call to the GetSeats() gRPC function. Essentially, GetSeats() retrieves a Venue object from the API’s database according to the unique identifier of a particular venue.

If a seat has a status of RESERVING, RESERVED, SELLING or SOLD, it will have a customer object assigned to the property Seat.customer. Otherwise, there is no customer assigned to the seat because, as logic dictates, an OPEN seat can’t have a customer assigned to it.

Once the Venue.seats collection is available, GetSeats() traverses the seats associated with the venue, sending each seat into a data stream that runs between the gRPC server and calling client. The entire transmission takes place using the bi-directional streaming mechanisms specified by HTTP/2.

Other implementations, such as those written in Java or .NET will have operational objects and a set of built-in streaming mechanisms that are special to those particular implementation frameworks. While the specification and IDL language are standard, actual implementations of gRPC will differ according to the framework.

As with any framework, there is a learning curve that goes with attaining operational mastery. Yet, when it comes to doing actual programming under gRPC, it’s a lot easier to take the time required to learn to use a well-known implementation than to start from scratch. Making a high-quality gRPC client and server is a difficult undertaking requiring advanced programming skills at the enterprise level. It’s better to use one that has a proven history of working.

Putting it All Together

gRPC brings a whole new dimension to API-centric applications. The binary nature of data exchange greatly reduces machine time that might otherwise be spent on serialization and deserialization, thereby shortening task time and reducing compute time (which, at the end of the day, boils down to money and even sustainability). And the bidirectional streaming capabilities under HTTP/2 are hard to ignore. A growing number of big-league companies, such as Dropbox, Netflix and Square, are using gRPC.

Yet doing actual work with gRPC can be quite complex. First, there’s the binary nature of the data exchange. Although the existing frameworks will take care of it for you, every message coming and going needs to be compiled into Protocol Buffers (a nebulous and intimidating domain, even for experienced developers). Most implementations, on both the client and server side, do the compilation behind the scenes. This means that programmers don’t have to concern themselves with the minutiae of binary serialization. But developers do have to concern themselves with the details of the way a particular implementation supports the types and functions that are defined in the corresponding IDL. This means that both ends of the client-server conversation need to know a lot before anything can happen.

And, then there is the fact that gRPC goes beyond standard HTTP to HTTP/2. It’s not the same as making a call to an endpoint using a simple URL under REST. Under gRPC you’re making calls to specific functions that happen to be accessible within a certain domain at a particular port.

For example, it’s the difference between a REST call that looks like this:

http://www.myapi.com/venues

And this analogous call in gRPC:

// Assumes the grpc and @grpc/proto-loader npm packages; the .proto file name is illustrative
const grpc = require('grpc');
const protoLoader = require('@grpc/proto-loader');
const packageDefinition = protoLoader.loadSync('seatsaver.proto');

const SERVER_URL = 'localhost:50051';
const seatsaver = grpc.loadPackageDefinition(packageDefinition).seatsaver;
const client = new seatsaver.SeatSaverService(SERVER_URL,
    grpc.credentials.createInsecure());
const call = client.GetVenues({});
call.on('data', function (result) { /* do something with each result */ });

It’s a different way of doing business. But, given the benefits of gRPC, particularly around streaming, it’s a compelling way to do business.

When it comes to streaming data, gRPC has a lot to offer. There is a learning curve to be overcome. But the same can be said of any streaming technology, whether it’s GraphQL or a high-capacity message broker such as Kafka. To do a lot, you’ve got to know a lot. In a world in which data streaming is becoming a typical part of the increasingly event-based digital landscape, gRPC is positioned to be a key technology. It’s going to be around for the foreseeable future. Taking the time to get hands-on experience with gRPC is a wise investment for any aspiring API developer.

Go to Source
Author: reselbob

Categories
ScienceDaily

New Horizons team uncovers a critical piece of the planetary formation puzzle

Data from NASA’s New Horizons mission are providing new insights into how planets and planetesimals — the building blocks of the planets — were formed.

The New Horizons spacecraft flew past the ancient Kuiper Belt object Arrokoth (2014 MU69) on Jan. 1, 2019, providing humankind’s first close-up look at one of the icy remnants of solar system formation in the vast region beyond the orbit of Neptune. Using detailed data on the object’s shape, geology, color and composition — gathered during a record-setting flyby that occurred more than four billion miles from Earth — researchers have apparently answered a longstanding question about planetesimal origins, and therefore made a major advance in understanding how the planets themselves formed.

The team reports those findings in a set of three papers in the journal Science, and at a media briefing Feb. 13 at the annual American Association for the Advancement of Science meeting in Seattle.

“Arrokoth is the most distant, most primitive and most pristine object ever explored by spacecraft, so we knew it would have a unique story to tell,” said New Horizons Principal Investigator Alan Stern, of the Southwest Research Institute in Boulder, Colorado. “It’s teaching us how planetesimals formed, and we believe the result marks a significant advance in understanding overall planetesimal and planet formation.”

The first post-flyby images transmitted from New Horizons last year showed that Arrokoth had two connected lobes, a smooth surface and a uniform composition, indicating it was likely pristine and would provide decisive information on how bodies like it formed. These first results were published in Science last May.

“This is truly an exciting find for what is already a very successful and history-making mission,” said Lori Glaze, director of NASA’s Planetary Science Division. “The continued discoveries of NASA’s New Horizons spacecraft astound as it reshapes our knowledge and understanding of how planetary bodies form in solar systems across the universe.”

Over the following months, working with more and higher-resolution data as well as sophisticated computer simulations, the mission team assembled a picture of how Arrokoth must have formed. Their analysis indicates that the lobes of this “contact binary” object were once separate bodies that formed close together and at low velocity, orbited each other, and then gently merged to create the 22-mile-long object New Horizons observed.

This indicates Arrokoth formed during the gravity-driven collapse of a cloud of solid particles in the primordial solar nebula, rather than by the competing theory of planetesimal formation called hierarchical accretion. Unlike the high-speed collisions between planetesimals in hierarchical accretion, in particle-cloud collapse, particles merge gently, slowly growing larger.

“Just as fossils tell us how species evolved on Earth, planetesimals tell us how planets formed in space,” said William McKinnon, a New Horizons co-investigator from Washington University in St. Louis, and lead author of an Arrokoth formation paper in Science this week. “Arrokoth looks the way it does not because it formed through violent collisions, but in more of an intricate dance, in which its component objects slowly orbited each other before coming together.”

Two other important pieces of evidence support this conclusion. The uniform color and composition of Arrokoth’s surface shows the KBO formed from nearby material, as local cloud collapse models predict, rather than a mishmash of matter from more separated parts of the nebula, as hierarchical models might predict.

The flattened shapes of each of Arrokoth’s lobes, as well as the remarkably close alignment of their poles and equators, also point to a more orderly merger from a collapse cloud. Further still, Arrokoth’s smooth, lightly cratered surface indicates its face has remained well preserved since the end of the planet formation era.

“Arrokoth has the physical features of a body that came together slowly, with ‘local’ materials in the solar nebula,” said Will Grundy, New Horizons composition theme team lead from Lowell Observatory in Flagstaff, Arizona, and the lead author of a second Science paper. “An object like Arrokoth wouldn’t have formed, or look the way it does, in a more chaotic accretion environment.”

The latest Arrokoth reports significantly expand on the May 2019 Science paper, led by Stern. The three new papers are based on 10 times as much data as the first report, and together provide a far more complete picture of Arrokoth’s origin.

“All of the evidence we’ve found points to particle-cloud collapse models, and all but rule out hierarchical accretion for the formation mode of Arrokoth, and by inference, other planetesimals,” Stern said.

New Horizons continues to carry out new observations of additional Kuiper Belt objects it passes in the distance. New Horizons also continues to map the charged-particle radiation and dust environment in the Kuiper Belt. The new KBOs being observed now are too far away to reveal discoveries like those on Arrokoth, but the team can measure aspects such as each object’s surface properties and shape. This summer the mission team will begin using large ground-based telescopes to search for new KBOs to study in this way, and even for another flyby target if fuel allows.

The New Horizons spacecraft is now 4.4 billion miles (7.1 billion kilometers) from Earth, operating normally and speeding deeper into the Kuiper Belt at nearly 31,300 miles (50,400 kilometers) per hour.

The Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland, designed, built and operates the New Horizons spacecraft, and manages the mission for NASA’s Science Mission Directorate. The Marshall Space Flight Center Planetary Management Office provides NASA oversight for the New Horizons mission. Southwest Research Institute, based in San Antonio, directs the mission via Principal Investigator Stern and leads the science team, payload operations and encounter science planning. New Horizons is part of the New Frontiers Program managed by NASA’s Marshall Space Flight Center in Huntsville, Alabama.

Go to Source
Author: