Categories
ScienceDaily

New data processing module makes deep neural networks smarter

Artificial intelligence researchers at North Carolina State University have improved the performance of deep neural networks by combining feature normalization and feature attention modules into a single module that they call attentive normalization (AN). The hybrid module improves the accuracy of the system significantly, while using negligible extra computational power.

“Feature normalization is a crucial element of training deep neural networks, and feature attention is equally important for helping networks highlight which features learned from raw data are most important for accomplishing a given task,” says Tianfu Wu, corresponding author of a paper on the work and an assistant professor of electrical and computer engineering at NC State. “But they have mostly been treated separately. We found that combining them made them more efficient and effective.”
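The general recipe is easy to see in code: keep the standardization step of batch normalization, but replace its single learned scale-and-shift with an attention-weighted mixture of several candidate scale-and-shift pairs, computed per input. Below is a minimal PyTorch sketch of that idea; the sigmoid attention head, the number of mixture components K, and the layer sizes are illustrative assumptions, not the exact configuration from the NC State paper.

```python
import torch
import torch.nn as nn

class AttentiveNorm2d(nn.Module):
    """Sketch of attentive normalization: standardize as in BatchNorm,
    then apply an attention-weighted mixture of K affine transforms."""
    def __init__(self, num_channels, k=5):
        super().__init__()
        # Standardization only; the affine part is handled below.
        self.bn = nn.BatchNorm2d(num_channels, affine=False)
        # K candidate scale/shift vectors (mixture components).
        self.gamma = nn.Parameter(torch.ones(k, num_channels))
        self.beta = nn.Parameter(torch.zeros(k, num_channels))
        # Lightweight attention head: global context -> K weights.
        self.attend = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(num_channels, k),
            nn.Sigmoid(),
        )

    def forward(self, x):
        normed = self.bn(x)               # (N, C, H, W)
        weights = self.attend(x)          # (N, K), one weight set per input
        gamma = weights @ self.gamma      # (N, C) instance-specific scale
        beta = weights @ self.beta        # (N, C) instance-specific shift
        return gamma[..., None, None] * normed + beta[..., None, None]
```

Because the added attention head is just a pooled feature vector and a small linear layer, swapping a layer like this in for a network's existing BatchNorm layers adds very little computation, which is consistent with the negligible overhead the researchers report.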

To test their AN module, the researchers plugged it into four of the most widely used neural network architectures: ResNets, DenseNets, MobileNetsV2 and AOGNets. They then tested the networks against two industry standard benchmarks: the ImageNet-1000 classification benchmark and the MS-COCO 2017 object detection and instance segmentation benchmark.

“We found that AN improved performance for all four architectures on both benchmarks,” Wu says. “For example, top-1 accuracy in the ImageNet-1000 improved by between 0.5% and 2.7%. And Average Precision (AP) accuracy increased by up to 1.8% for bounding box and 2.2% for semantic mask in MS-COCO.

“Another advantage of AN is that it facilitates better transfer learning between different domains,” Wu says. “For example, from image classification in ImageNet to object detection and semantic segmentation in MS-COCO. This is illustrated by the performance improvement in the MS-COCO benchmark, which was obtained by fine-tuning ImageNet-pretrained deep neural networks in MS-COCO, a common workflow in state-of-the-art computer vision.

“We have released the source code and hope our AN will lead to better integrative design of deep neural networks.”

Story Source:

Materials provided by North Carolina State University. Note: Content may be edited for style and length.

Go to Source

Categories
ScienceDaily

Fast calculation dials in better batteries

A simpler and more efficient way to predict performance will lead to better batteries, according to Rice University engineers.

That their method is 100,000 times faster than current modeling techniques is a nice bonus.

The analytical model developed by materials scientist Ming Tang and graduate student Fan Wang of Rice University’s Brown School of Engineering doesn’t require complex numerical simulation to guide the selection and design of battery components and how they interact.

The simplified model developed at Rice — freely accessible online — does the heavy lifting with an accuracy within 10% of more computationally intensive algorithms. Tang said it will allow researchers to quickly evaluate the rate capability of batteries that power the planet.

The results appear in the open-access journal Cell Reports Physical Science.

There was a clear need for the updated model, Tang said.

“Almost everyone who designs and optimizes battery cells uses a well-established approach called P2D (for pseudo-two dimensional) simulations, which are expensive to run,” Tang said. “This especially becomes a problem if you want to optimize battery cells, because they have many variables and parameters that need to be carefully tuned to maximize the performance.

“What motivated this work is our realization that we need a faster, more transparent tool to accelerate the design process, and offer simple, clear insights that are not always easy to obtain from numerical simulations,” he said.

Battery optimization generally involves what the paper calls a “perpetual trade-off” between energy density (the amount of energy a battery can store) and power density (the rate at which that energy can be delivered), both of which depend on the materials, their configurations and such internal structures as porosity.

“There are quite a few adjustable parameters associated with the structure that you need to optimize,” Tang said. “Typically, you need to make tens of thousands of calculations and sometimes more to search the parameter space and find the best combination. It’s not impossible, but it takes a really long time.”
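When each evaluation is a closed-form expression rather than a pseudo-2D simulation, that kind of exhaustive search becomes trivial. The Python sketch below illustrates the workflow with a made-up stand-in objective; the real analytical expressions are the ones published in the Cell Reports Physical Science paper, not this toy formula.

```python
import numpy as np

def objective(porosity, thickness_um):
    """Toy stand-in for the analytical model: more active material
    stores more energy, but thick, dense electrodes move ions slowly."""
    energy = (1.0 - porosity) * thickness_um             # ~ energy stored
    rate = 1.0 / (1.0 + (thickness_um / 100.0) ** 2 / porosity)
    return energy * rate                                 # blended figure of merit

# Sweep the whole design space; each point costs microseconds,
# versus minutes or hours for a full P2D simulation.
porosities = np.linspace(0.20, 0.50, 31)
thicknesses = np.linspace(20.0, 200.0, 91)               # micrometers
best = max(
    ((p, t, objective(p, t)) for p in porosities for t in thicknesses),
    key=lambda row: row[2],
)
print("best design: porosity %.2f, thickness %.0f um" % best[:2])
```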

He said the Rice model could be easily implemented in such common software as MATLAB and Excel, and even on calculators.

To test the model, the researchers let it search for the optimal porosity and thickness of an electrode in common full- and half-cell batteries. In the process, they discovered that electrodes with “uniform reaction” behavior such as nickel-manganese-cobalt and nickel-cobalt-aluminum oxide are best for applications that require thick electrodes to increase the energy density.

They also found that battery half-cells (with only one electrode) have inherently better rate capability, meaning their performance is not a reliable indicator of how electrodes will perform in the full cells used in commercial batteries.

The study is related to the Tang lab’s attempts at understanding and optimizing the relationship between microstructure and performance of battery electrodes, the topic of several recent papers that showed how defects in cathodes can speed lithium absorption and how lithium cells can be pushed too far in the quest for speed.

Story Source:

Materials provided by Rice University. Note: Content may be edited for style and length.

Go to Source

Categories
3D Printing Industry

Porsche partners with MAHLE and TRUMPF to 3D print pistons for its 911 supercar 

Performance sports car manufacturer Porsche has worked with machine tool producer TRUMPF and automotive parts company MAHLE to 3D print pistons for the engine of its flagship 911 supercar. Utilizing Laser Metal Fusion (LMF) 3D printing, the carmaker’s engineers have been able to optimize the structure of the part for the loads it experiences in operation. […]

Go to Source
Author: Paul Hanaphy

Categories
3D Printing Industry

U.S. Army investigates predictive maintenance for 3D printed steel parts

The U.S. Army CCDC Army Research Laboratory (ARL) has developed a method of predicting the performance of 3D printed parts and understanding any imperfections that can affect it. As detailed in a new study, the ARL will detect and monitor the wear and tear of 3D printed maraging steel through sensor measurements. Such measurements can […]

Go to Source
Author: Anas Essop

Categories
ScienceDaily

A clear semiconductor based on tin could improve solar power generation

Mobility is a key parameter for semiconductor performance and relates to how quickly and easily electrons can move inside a substance. Researchers have achieved the highest mobility among thin films of tin dioxide ever reported. This high mobility could allow engineers to create thin and even transparent tin dioxide semiconductors for use in next-generation LED lights, photovoltaic solar panels or touch-sensitive display technologies.

Tin and oxygen are very familiar elements, and when combined in a certain way to become tin dioxide, the material can be made into a semiconductor. Semiconductors are fundamental to most of our technology and are the basis of computer chips, solar panels and more. Since the 1960s, tin dioxide specifically has found use in industrial applications like gas sensors and transparent electrodes for solar devices. The material is effective for these applications because of its high mobility, and for most applications, higher is better. Until now, however, such high mobility had been achieved only in large bulk crystals, not in thin films.

“We demonstrated the highest mobility in a thin film of tin oxide ever achieved. Improved mobility not only enhances the conductivity but also the transparency of the material,” said Shoichiro Nakao, a researcher from the Department of Chemistry at the University of Tokyo. “Generally, transparency and conductivity cannot coexist in a material. Typical transparent materials such as glass or plastic are insulating, whereas conducting materials like metals are opaque. Few materials exhibit transparent conductivity — it’s very interesting!”

The more transparent a semiconductor can be, the more light it can let through. Nakao and his team have made a tin oxide thin film that allows visible light and near-infrared light to pass. This is a great benefit to the power conversion efficiency of photovoltaic solar panels, but other uses could include enhanced touch-screen displays with even better accuracy and responsiveness, or more efficient LED lights.

“Our method of production was key to creating a substance with these properties. We used a highly focused laser to evaporate pellets of pure tin dioxide and deposit or grow material exactly how we wanted it,” said Nakao. “Such a process allows us to explore different growth conditions as well as how to incorporate additional substances. This means we can endow tin dioxide semiconductors with high mobility and useful functionality.”

Story Source:

Materials provided by University of Tokyo. Note: Content may be edited for style and length.

Go to Source

Categories
ProgrammableWeb

Dynatrace Enables Automated Monitoring with All-New Environment API

Dynatrace, an application performance management systems provider, has announced the release of the all-new Dynatrace Environment API. The new API is a part of the company’s Cluster API and appears to be in beta. Dynatrace is providing this resource to enable developers to automatically create, update, or delete monitoring environments.

The company believes that “automatic monitoring orchestration” is crucial to succeeding in dynamic environments. The Environments API is designed to streamline automation and includes features like automatic creation of trial environments for new users, overview capabilities for existing environments, and environment tagging functionality.
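As a purely hypothetical illustration of what such automation could look like, the snippet below creates a tagged trial environment with one HTTP call; the endpoint path, payload fields, and token scheme are assumptions based on the announcement, not the official Dynatrace API reference.

```python
import requests

CLUSTER = "https://cluster.example.com"   # hypothetical cluster address
TOKEN = "..."                             # a cluster API token

# Create a trial environment for a new user (fields are assumed).
resp = requests.post(
    f"{CLUSTER}/api/cluster/v2/environments",
    headers={"Authorization": f"Api-Token {TOKEN}"},
    json={
        "name": "trial-env-new-user",
        "state": "ENABLED",
        "tags": ["trial", "auto-created"],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```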

The Environments API is a part of the company’s “Early Adopter” program and is expected to be updated with new functionality frequently. The announcement of the resource provides additional information for developers looking to get started. 

Go to Source
Author: KevinSundstrom

Categories
ProgrammableWeb

AppNeta Expands its API Performance Monitoring Functionality

AppNeta, a provider of actionable, 4-Dimensional network performance monitoring, today announces enhanced API monitoring functionality that delivers active insights into the performance of the third-party SaaS products, mobile apps and APIs that are critical across the enterprise. AppNeta’s active monitoring capabilities now range from simple HTTP requests for high-scale, low-touch requirements to multi-step synthetic scripting, delivering visibility into business-critical apps and the APIs that support them.

A consequence of digital transformation is that the apps that drive enterprises are getting more complicated. Many are consumed from third parties and combine multiple data sources, so fully understanding their performance requires additional insight into plugins and backend APIs. For efficiency, IT needs ways to monitor both the front-end user experience and the behind-the-scenes APIs to reduce the mean time to resolution for performance issues.

With this latest enhancement, AppNeta offers multiple ways to monitor web apps and the data sources that drive them. HTTP Web Paths can be used to test simple server responses at scale, while Selenium scripting techniques handle complex, multi-step user workflows in the most critical user-facing apps. Both technologies actively monitor performance so that IT can get ahead of end-user complaints and regain control of web apps, APIs and mobile back-ends.
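AppNeta’s scripting interface isn’t shown in the announcement, but a plain Selenium script conveys the flavor of a multi-step synthetic check; the URL, element selectors, and assertions below are invented for the example.

```python
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    start = time.monotonic()
    # Step 1: load the login page and sign in as a synthetic user.
    driver.get("https://app.example.com/login")
    driver.find_element(By.NAME, "user").send_keys("synthetic-monitor")
    driver.find_element(By.NAME, "password").send_keys("not-a-real-secret")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    # Step 2: verify the post-login dashboard actually rendered.
    assert "Dashboard" in driver.title
    print(f"workflow completed in {time.monotonic() - start:.2f}s")
finally:
    driver.quit()
```

Timing each step of a scripted workflow like this is what lets a monitor flag a slow backend API before end users start complaining.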

API monitoring will be integrated into AppNeta’s existing Experience monitoring to increase enterprise IT’s ability to monitor web and mobile apps that depend on plugins or APIs behind the scenes.

“Software may be eating the world, but APIs are the silverware,” says AppNeta CEO Matt Stevens. “Confidence in an API’s ability to respond correctly, as well as consistent visibility to ensure the application ultimately performs well, is a fundamental element of any comprehensive end-user experience strategy.”

Go to Source
Author: ProgrammableWeb PR

Categories
ProgrammableWeb

APImetrics Launches API.Expert To Compare Performance of Common APIs (video)

Over the years, ProgrammableWeb has followed the arc of …


Transcript of: How API.Expert Ranks Performance of Common APIs Used By Developers

David Berlind: Hi, I’m David Berlind and this is another edition of ProgrammableWeb’s Developers Rock Podcast. Of course, developers totally rock. We love them and that’s why we do these shows to bring more information about what’s going on around the industry to developers as well as API providers. And today with me is David O’Neill. He is not only another David, I love all other Davids, but he’s also the CEO and co-founder of APImetrics. David, thanks very much for joining us on the show today.

David O’Neill: David, thank you so much for having me. It’s a pleasure to be here.

Berlind: It’s great to have you. So let’s first start out, what does APImetrics do?

O’Neill: So APImetrics is an API monitoring platform that monitors any API from the outside in, as if you were an actual user or partner built right into it.

Berlind: And when you say monitors the API, what is it that you monitor and who would care about this?

O’Neill: We monitor the functionality of the actual APIs provided, ideally the production APIs. Typically, our customers are customer success teams, CIO offices, people on the sharp end of the gap between developer operations and customer support, where it’s entirely possible for everything to look like it’s working but actually you’ve got a significant outage that may only affect one customer or one particular region of the world. We’ve got a solution that spots, across different clouds, where the performance problems are and warns you if things are not as good as you expect them to be.

Berlind: Wait a minute, wait a minute, wait, wait. ProgrammableWeb is all about how wonderful and great APIs are. Nothing ever goes wrong. Are you trying to tell me that sometimes things go wrong with APIs?

O’Neill: I couldn’t possibly comment, but yes. And usually in pretty entertaining ways. We’ve got hours of war stories on APIs that looked like they were working just great but actually had failed in very entertaining ways. My favorite is a major UK bank that had an API that suddenly got a lot faster. Crack the champagne. Everybody was excited. They had a tenfold speed increase. It was a week before anybody noticed: it had stopped returning any data.

Berlind: Oh, no! So that’s the sort of failure that APImetrics would spot and maybe send an alert out: “hey, you’ve got a problem!”

O’Neill: Yeah, that’s exactly what we look for. So we actually check the … You are getting back what you expected, and I know this used to annoy one of our competitors, but 200 is not always okay.

Berlind: 200, and we’re referring to… 200 is the HTTP status code that comes back when a request to either a web server or an API endpoint, a web-based one, succeeds. So that’s what you’re talking about. 200, okay.

O’Neill: HTTP codes lie as I’ve begun to —

Berlind: No, they don’t. Come on. They don’t lie! (Laughter)

O’Neill: They’re set up incorrectly. We’ve got a lot of customers who use 200 for everything. So even if it fails, it returns a code saying, “okay. Yeah, that didn’t work”.

Berlind: That’s not recommended. Don’t set up 200 as the return code for everything your API does. There have got to be a few other codes in there. And by the way, we often say this in the documentation: if you have a bunch of “error” codes or “okay” codes, make sure that you have a list of them and what they mean, right?

O’Neill: Absolutely. But the interesting thing for us is that when we actually start returning data, people sometimes haven’t even seen the error codes their own systems put out. We had one conversation with a client where the DevOps engineer said, well, that code can’t possibly be there. It’s impossible. That means there’s been a complete system failure and it’s rebooting. And it did that about five or six times a day. The system would be down most of the day. And they went away and had a look at a different logging system and went, “oh yeah, the system’s pretty much down all day. You didn’t notice.”

Berlind: So let’s put a couple of things together. First, we’ve got APIs out there that sometimes don’t behave exactly the way we want them to behave. And then two, we’ve got APImetrics out there, and you monitor those APIs to make sure that they’re healthy and running and providing the data that they should be providing. Why do we need another company like APImetrics to do this? It seems like this would be built into the API provisioning systems.

O’Neill: So this is a discussion we have a lot with potential clients. Usually after they’ve had a bad day where people have complained about the service and they said they’ve been down, and yet AWS CloudWatch or the gateway had said, “hey, everything’s fine.” What’s happening? What we realized was, unless you’re monitoring your APIs from where your customers are or where they’re building their application stacks, you don’t actually have a clue what the API is doing. So one of the things that we were surprised to find is there’s differences between clouds. So, an API that works great on Azure in one location may not work so well from another. There is a dependence in the industry on a small number of clouds, but the cloud that the person building an application talks to your API in may be different to yours. And there may be some fundamental networking incompatibilities between what they are building on and your infrastructure.

Berlind: I see.

O’Neill: You don’t know about that. You could have a customer who complains all the time, but actually there’s no easy fix or solution and that’s the sort of thing that we help people understand.

Berlind: I see. Now when you monitor the API, do you also make requests of the different resources on the API to see if it’s returning data, you’d go to that level of depth? Or is it more sort of just checking to make sure that it’s responding when it’s being pinged or however it is you do it? I don’t know how you do it.

O’Neill: No. So, we create fully functional API calls, with the correct layers of security, that actually exercise the APIs, ideally in production.

Berlind: All right. You need to know something about the API and how to call it and what the expected return is to make sure that it’s working properly then. Is that right?

O’Neill: Yes, absolutely. So we can integrate with common tools like Postman. So you could take an entire Postman collection, for example, and run it through our infrastructure so you can see how the API is responding and passing all the checks you’ve set up. Or you can set them up inside our product, and then we look for networking performance issues. We look for whether you returned all the JSON you expect, whether you returned what the API was meant to return. That sort of level is where we excel. And then we do a bunch of machine-learning-based analysis on, essentially, the stability of performance. One of the things we noticed early on is you can get deluged with data, and it’s very easy to look at the average response times and go, “well, that’s a pretty good average,” but miss whether there are actually trailing edges that could cause you problems. So we often see, and I can show you some examples later, APIs where the median response time might be half a second, which is okay, but 5% of calls take over 29 seconds, actual seconds. So if you’re doing hundreds of thousands of transactions and 5% of them take 30 seconds or more, that’s a problem.
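That median-versus-tail effect is easy to reproduce numerically. In the toy Python illustration below (all numbers invented for the example), the median looks healthy while a 6% tail of 30-second calls drags the mean and the 95th percentile far above it:

```python
import numpy as np

rng = np.random.default_rng(0)
# 94% of calls cluster around 500 ms; 6% hang for about 30 s.
latencies = np.concatenate([rng.normal(0.5, 0.1, 940), np.full(60, 30.0)])

print("median: %5.2f s" % np.median(latencies))          # ~0.50
print("p95:    %5.2f s" % np.percentile(latencies, 95))  # ~30.00
print("mean:   %5.2f s" % latencies.mean())              # ~2.27
```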

Berlind: That is a problem. And, I suppose, when you’re able to spot those problems, I mean that sort of response time, not to mention if it’s just down altogether, this would also be important to API providers who have SLA agreements with their customers, right? Because sometimes customers will encode specific requirements, performance requirements, uptime requirements, right into their SLAs. Is that an issue for some of your customers?

O’Neill: It’s becoming an issue. I’ll be honest, SLAs are still poorly enforced and poorly specified. But I believe over the next few years, and this is something we’re passionate about, over the next few years, it will become essential. You can’t build systems reliant on APIs and not specify the quality of service you’ll get.

Berlind: I understand.

O’Neill: The other problem we do see with APIs is they are… how shall I put this… you have to prove the SLA was missed. You’re very rarely told that the SLA was missed. So in the case of some very, very famous global cloud service providers, they’ll only give you money back if you can prove that they didn’t deliver to you. That’s a lot of work and actually very hard to do, because they will bring receipts that prove they were working just fine. And this is something we also see in the regulatory space around open banking APIs, where it’s entirely possible for both parties to bring a regulator proof that they were working fine. And then the regulator is not technically able to determine whether bank “A” was up and TPP service “B” was wrong. That’s a huge gap in the market that we’re trying to fill with APImetrics.

Berlind: I see. So, let’s see. You’ve got some news, I think before we arranged to do this interview you said, “Hey, we’ve got some news coming up.” So why don’t we dive into that? What’s new?

O’Neill: Absolutely. So one of the things we realized is there’s a lack of actual actionable data in the API space. There are great services like ProgrammableWeb and others that provide you with the APIs, but it’s actually very hard to understand which APIs work better than others and how they rank in comparison to each other. So we’re launching a service called API Expert, at api.expert, that will provide rankings of common APIs people use. The ranking method we have created is called CASC: C-A-S-C, for Cloud API Service Consistency. It’s a blended metric where we look at all the data points that come back, and it’s like a credit score. We score it out of a thousand. The closer to a thousand, the closer to perfect the API is. As the score goes down, we expect to see more problems and more issues. So anything above 800 is pretty good. From 600 to 800, you’ll see some issues in the performance. Below 600, the API would really not be acceptable for use, in our opinion.

Berlind: Okay. So, you said anything around 100 is good. You meant everything around a thousand is good.

O’Neill: I’m sorry. Everything around a thousand is good.

Berlind: I just want to double check that. Okay. So you’ve got a scoring system and when you’re doing a comparison, do you typically compare APIs that target similar applications like storage APIs or a telephony APIs? Is that what you would typically compare one to another or do you just compare anything that the end-user wants to compare?

O’Neill: So, the beauty of the scoring system is we can compare apples to bananas. We don’t have to compare like for like. Obviously, if you’re interested in storage systems, comparing Box and Dropbox is useful. At the moment we have a group of categories: we look at corporate infrastructure APIs, social networking APIs, cryptocurrency exchanges, banking and tech APIs. And we’re currently expanding our API coverage. Honestly, anybody out there who has an API they would like us to add to the list just needs to set the API up in our system and tell us they want to share it; we’ve got a community system for sharing those into our public reporting. So we are currently adding in things like telephony and so on.

Berlind: So, is there something to look at? Can you give us a bit of a demonstration of this?

O’Neill: Absolutely. So let me share my screen. Let’s do that. So here we have, this is the last month, and this is for corporate infrastructure. So we’re looking at 13 different common corporate APIs here, from GitHub through Microsoft, Pivotal Tracker, Mailchimp, and a few storage systems. And what we’re doing is looking at the CASC score for the month. So what we expect with these big APIs is we’ll see they’re all pretty good. They all have solid high scores, with the exception of Cisco Spark; we’re not sure quite what’s going on there.

O’Neill: And then we look at the pass rate. “Outliers” is essentially calls that fall outside what we expect to be the statistically normal performance. And then we look at the median latency. So we can say for December, GitHub had the best uptime, 100% uptime, and the fewest outliers based on the calls we did. Cisco Spark APIs, we counted four hours of downtime; that’s actual periods where we just could not make successful API calls into our account with them. And then speed-wise, Slack was the fastest: 400 milliseconds for a median, and at the 99th percentile, 900 milliseconds. So everything on Slack took under a second. Same stats for Cisco Spark.

Berlind: So those guys that Slack are not slacking at all. That’s what you’re saying?

O’Neill: They are not. No. They really have got that working. Cisco, we’re not sure what’s going on there. Bumping up…

Berlind: They’re lacking a little spark in their API.

O’Neill: Haha! Yes, they are!

Berlind: Sorry, sorry I couldn’t help myself on that one.

O’Neill: No, I’m sure we can find some more puns in a second as well. So, if we look at the latency data, and obviously our product, if you use it, goes into much more detail than this. This is just rolled-up information, but we want to keep the flavor of what they need to know to really understand how performance is working. So we have the best cloud for GitHub: if you’re building an application on AWS in North America, that’s probably suggesting North America is where they have the data center they’re using. Whereas from South America on Azure, there’s nearly a two to three X difference between North America and South America, and that’s pretty common. We see huge global variations in performance and speed, and then you can pick different metrics. So I’m going to pick an example I know is weird looking. I’m going to go to DocuSign. So, significant difference again between North America and, in this case, South Asia on Google: medians of 700 milliseconds and 500 milliseconds, through to two and a half seconds. But if I look at something like DNS lookup time, and DNS, everybody feels that’s just… you let the cloud do that. The cloud will sort it out for you. But actually, if we look at DocuSign, we see extremely long name lookup times. So DNS resolution time, that’s the time it takes the internet to figure out where the resource is that you’re calling. It’s doing a name lookup to Google or whomever, and then it’s trying to figure out, well, where is that server? Where do I send the query to? We see huge variation there. So you’re seeing lookup times of 250 milliseconds, a quarter of a second, half a second. In the extreme case here, Google, South Asia, 1.5 seconds. These are overheads on your API traffic. If we pick somebody else, let’s say GitHub, same queries, you’ll see it’s markedly different. A lot of the calls only take four milliseconds. So that’s indicating they spend a lot of time optimizing the infrastructure, the internet, to make sure no one has trouble finding GitHub.

Berlind: Right.

O’Neill: These are things that API developers don’t often think about because it’s nothing to do with your API per se…

Berlind: It may not be something to do with your API per se, whatever the problem is, and DNS in that case, yes, I understand, right? But yeah, overall, any performance hit, there’s a whole chain of events that happens every time we make a call, including the DNS, and somewhere in there something’s going wrong. So you have to go and deconstruct the whole conversation and figure out where the breakdown is. But this is helping you do it. It helps you spot whether it might be in the DNS lookup or not.

O’Neill: Yes. And also just to spot clouds people should avoid. With some of our commercial clients, we have extreme differences. So a call that works great on Azure from the UK doesn’t work at all from an AWS location in Europe.

Berlind: Hmmm. Interesting.

O’Neill: I had one API we work with where no call from Finland is successful. Finland’s just blocked by…

Berlind: Somebody, yeah, gremlins.

O’Neill: No one was really sure what the root cause is, but it’s there. So these are things to know when you’re setting up APIs: “don’t do that, use this.” Being able to tell your developers what they should do, rather than just somebody going, well, I use AWS, I’ll click deploy, default AWS East.

Berlind: You create a lot of data that your customers can use to make some decisions about their APIs and the infrastructure behind them. But at the same time you can make some broad assumptions across the globe that apply to everybody. For example, the ones you were just saying about Finland. Do you create a report for all of your customers to warn them off from doing something like that, since you’re making the observation in the first place, you don’t want to make them go out and figure it out for themselves, right?

O’Neill: No, so we’re actually working on a 2019 roll up and when we’re ready to do it, we’ll put all of 2019’s data up on the API Expert for all the APIs we monitor.

Berlind: Will you do a report? Some sort of qualitative report that somebody can review and say, “Hey, wait a minute. There’s 10 things in here that David O’Neill is saying don’t do, so we better not do those, because it just keeps bubbling up in APImetrics as being problematic.” Do you give them a set of things to avoid, or best practices?

O’Neill: We do that, although a lot of it is case by case. But we will be issuing a report this year on some of the do’s and don’ts, to avoid things where we have noticed persistent problems and that we think are just good practice. One example I’ll give everyone for free: by all means use a CDN service like Akamai or Cloudflare. They’re very good. They will give you vast improvements in resolution times. But this is something where everyone should trust but verify. Don’t just take it as a given that what they have done works. Actually go in and look and see whether you’ve got the improvement you wanted. We see customers who have paid money for CDN services that are not getting any noticeable improvement. Sometimes that’s because they haven’t told people where the resources are, or they haven’t verified that they’re…

Berlind: …CDN’s configured properly…

O’Neill: Yeah, and that’s something… I think there’s a reliance in the industry, and this is the broader cloud, that the cloud infrastructure vendors do it all for you and it’s semi-magical: I click deploy and somebody in Seattle at Amazon or Microsoft has magically made all my problems go away. And the reality is that’s not true. There is no magic elf who is going to make your system work perfectly so you don’t have to check anything.

Berlind: What?! There’s no magic elf?

O’Neill: There are no magic elves. I’m sorry. I should’ve said this before Christmas.

Berlind: We should probably change that. We should probably suggest, and by the way, that wouldn’t be a bad job to have, to be a magic elf. Probably good benefits if you could wing some magic on some APIs there. api.expert is where you find the service. Is this service available separately from the core APImetrics offering? Or is it bundled in for customers of APImetrics? How does that work?

O’Neill: No, API Expert is entirely free. We are putting it out there as… The headline data are the weekly and monthly reports. They will be completely free, just as a resource people can look at. If you scroll down, we also have live data for the previous 24 hours, so if you want to see whether something is up or not, we’ll provide that data for free. If you want to get into the weeds or deeper details, then yes, you’ll need an APImetrics account, and you’ll need to contact us about that. We are also planning to white label it, so if you have data sets you want to put out there and want monitored, we can do that as a managed service. We’ve actually got some interest from government and banking clients on that already.

Berlind: For API providers for example, that want to be transparent about the operational effectiveness of their business. Sure. Okay.

O’Neill: Yes. And if you want to sign up and actually do it for yourself: for API Expert, we roll all the data up, so you just get domain-level data. In the core product you’ll get API-specific data, the ability to test deeper things like your own infrastructure, verification down to per-location granularity rather than a region, and a whole bunch of other tools for alerting and reporting.

Berlind: Oh, very cool. David O’Neill, the CEO and co-founder of APImetrics. Thanks for showing off your new service API Expert today.

O’Neill: No problem. Thank you for having us David. I really appreciate that and had a good time.

Berlind: It was great to have you. And thank you to everybody who watched this video. You know if you want more videos just like this one, you can go to programmableweb.com, and we’ve got not only the videos loaded up there into articles, but we have a full text transcript of everything that was said in the interview so you can see that, too. Or if you just want to watch the videos, you can go to our YouTube channel at www.youtube.com/programmableweb. For now I’m going to sign off. I’ll see you at the next video. Thanks for joining us.

Go to Source
Author: david_berlind

Categories
IEEE Spectrum

Network Monitoring With InfluxData

In this paper, we discuss how you can use InfluxDB to gain the necessary visibility into the status, performance and responsiveness of your environments.
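As a minimal sketch of that pattern, the snippet below writes one network measurement into InfluxDB and queries it back using the open-source influxdb 1.x Python client; the measurement, tag, and field names are illustrative, not taken from the paper.

```python
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="netmon")
client.create_database("netmon")  # no-op if it already exists

# Record one round-trip-time sample for a monitored host.
client.write_points([{
    "measurement": "ping",
    "tags": {"host": "gw-01", "region": "eu-west"},
    "fields": {"rtt_ms": 12.7, "packet_loss": 0.0},
}])

# Ask for the average round-trip time per host over the last hour.
result = client.query(
    "SELECT MEAN(rtt_ms) FROM ping WHERE time > now() - 1h GROUP BY host"
)
print(list(result.get_points()))
```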

Categories
3D Printing Industry

Fillamentum Industrial releases a selection of new materials, including Flexfill TPE

Fillamentum Industrial, the performance application brand of Czech 3D printing filament producer Fillamentum, has introduced a selection of new materials, including flexible Flexfill TPE, carbon-fibre-reinforced CPE, carbon- and aramid-fibre-reinforced nylons, and PP. According to Josef Doleček, CEO of Fillamentum, “Elastomers represent an important group in 3D printing and therefore new materials […]

Go to Source
Author: Tia Vialva