
How APIs Will Democratize Access to Low-Cost Artificial Intelligence and Machine Learning




Hello everyone. My name is Matthew Reinbold. Tonight, I’m going to be talking about why AI success is built on APIs. We’re briefly going to go over why AI is such a big deal right now, just to make sure we’re level set and everybody’s aware of both the hype and the realness going on. We’re going to talk about the challenges with creating AI models, and then segue into why those significant challenges ultimately mean it will be APIs that deliver AI functionality, and how to evaluate those types of claims.

If you’re wondering who I am, as I mentioned, my name is Matthew Reinbold. I am the Director for the Capital One API and Event Streaming Platform Services. That’s a bit of a mouthful, but essentially it means I have a unique position: part enterprise architect, part business analyst, and part internal developer relations guru. My team and I mix those things together and apply our expertise to our roughly 9,000 developers. We try to make sure that the systems we’re building on APIs and event streams are consistent, exhibit the decoupling best practices that David mentioned, and have a guaranteed return on investment over the long term.

I also do a fair amount in the community. I write a newsletter called Net API Notes that goes out about three out of every four weeks. I curate a site called Web API Events; if you’re at all curious about upcoming API meetups and the community worldwide, that’s a good place to go. I also tweet a fair amount. I’m not here to sell you anything, other than to mention that I’m doing some sticker giveaways in my newsletter. If you sign up and support the newsletter, you could get some legally dubious sticker parodies that happen to have something to do with APIs. That’s the only pitch you’re going to hear from me tonight.

But what you’re here for is really about AI. I think there’s a lot of excitement and interest in what this is among people who are technology savvy but maybe not directly in the milieu when it comes to AI, and you would be forgiven for thinking that we’re all on the precipice of something nasty. There’s been a lot of negative coverage over the past couple of years about how we’re entering this dangerous phase. Even the Pope is talking to Microsoft about what’s going on with their AI. That’s significant; there’s something going on there. And while this is very interesting and probably the source of a lot of clickbait, I am here to tell you that we are not one compile away from Skynet. Okay?

AI is a vast bucket that can describe a multitude of things. Usually, when you see those scary stories of terminators and questions about whether we’re losing control, they’re referring to something called general-purpose AI, AI that can think like people, and we’re a significant way from that happening. What I’m talking about tonight is much more narrowly focused. In fact, it’s called narrow AI, and I’m specifically going to focus on machine learning, which is a segment of AI. It has real benefits for companies and businesses, it has interesting applications for technologists, and it’s probably not going to end life as we know it. So that’s a good thing.

There’s considerable growth in the space. If you’re trying to follow it right now, we have one of those up-and-to-the-right graphs: if you tried to keep up with the papers released on machine learning, you’d be reading roughly a hundred new papers every single day. A lot of advancement, a lot of interesting things happening in the space. The reasons are manifold. You have things like Moore’s Law and GPUs getting faster and faster, enabling compute on a scale you never had before. And even if you don’t have access to the physical hardware, you now have greater access to scalable cloud compute resources. Maybe you don’t have those servers in your closet to do machine learning, but Amazon does, Google does, and you can access theirs through APIs very easily.

There’s an increased ability to collect, store, and study data. For better or for worse, we’re getting much more sophisticated about how we collect data. And the final reason we’re seeing so much interest and growth in machine learning applications is that we have lowered the barriers to creating, training, and deploying models.

So when I say model, what does that mean? Taking everybody in the room and making you a machine learning practitioner is probably beyond the scope of this conversation. However, as we get to talking about why these things take so much compute, why they’re so processor-heavy, we need some basic grounding in how they’re put together. Say, for example, we had a task before us: take pictures and decide whether each picture is of a Chihuahua or something else. A very simple application. Yes or no; Chihuahua or not. In that case, you have a number of nodes in your input layer, and each of these nodes has some value for the input and some weight: how important is the value at this node at a given time?

And just to illustrate how many of these there can be: my first digital camera was a Sony Mavica, and the max resolution on those pictures was 640 by 480. Your modern smartphones have many millions of pixels, but even that 640 by 480 image has 307,200 pixels. So if you were creating a machine learning algorithm to determine whether or not a picture had a Chihuahua in it, you would have over 300,000 input nodes, one for every single pixel. You take the input, apply the weight, and send the result to every single one of the nodes in the next layer, and the next layer, and the next layer, and so on. And then you train the model, which means taking all of the math happening in there and tweaking it.
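To make that concrete, here is a minimal sketch of a single layer of that Chihuahua detector: 307,200 input values, each multiplied by a weight and summed into the nodes of the next layer. This is emphatically not the speaker’s actual model; the layer size of 16 and the random values are illustrative assumptions.

```python
# A toy single layer of the Chihuahua classifier described above.
# All sizes and values are illustrative; this is not a real model.
import numpy as np

rng = np.random.default_rng(seed=0)

pixels = rng.random(640 * 480)                    # 307,200 inputs, one per pixel
weights = rng.standard_normal((640 * 480, 16))    # how important each input is to each of 16 next-layer nodes
biases = np.zeros(16)

# Each next-layer node sums every weighted input, then applies a nonlinearity.
hidden = np.maximum(0.0, pixels @ weights + biases)  # ReLU activation
print(hidden.shape)  # (16,) -- these values feed the next layer, and so on
```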

You take a subset of data: this tweak to the weights got me closer, so tweak, tweak, tweak, and eventually you have a model that can reliably tell you whether you have a Chihuahua — or a muffin. It’s important to keep in mind that these machine learning models don’t understand what they’re looking at. You don’t spend a tremendous amount of compute and money and power and end up with something that knows what a dog is. It has just figured out, through all that math, that there’s a pattern in the data of two blobs over top of a smaller blob. And subsequently, when I put in a muffin picture that has that same pattern, it’s going to think I have a Chihuahua.
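“Training” is just that tweaking loop run at enormous scale. A toy version, assuming random stand-in data and a single vector of weights, might look like this:

```python
# Toy training loop: repeatedly nudge the weights so the model's guesses
# move toward the labels (1 = Chihuahua, 0 = not). The data is random
# stand-in, purely illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(seed=1)
X = rng.random((100, 20))          # 100 tiny "images", 20 pixels each
y = rng.integers(0, 2, size=100)   # their labels

w = np.zeros(20)
for step in range(1000):
    guess = sigmoid(X @ w)                 # the model's current answers
    grad = X.T @ (guess - y) / len(y)      # which way to tweak each weight
    w -= 0.1 * grad                        # tweak, tweak, tweak
```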

If there are any machine learning folks in the audience, I apologize; that’s a gross oversimplification, but we all need to be on the same page as we go forward. Machine learning, despite the fact that you might get a muffin when you wanted a Chihuahua, is a big strategic initiative for a lot of companies. Annual revenue for artificial intelligence products and services is forecast to grow from $643 million in 2016 to over $36 billion by 2025, a 57-fold increase over that period. It’s the fastest-growing segment in all of IT. A lot of people need their muffins and Chihuahuas.

Where it’s being applied is really important, because it’s not a silver bullet. There’s a Princeton professor named Arvind Narayanan (a dangerous name to pronounce, so I apologize in advance) who gave a great presentation on how to recognize AI snake oil. He said there’s some genuine innovation happening in the perception space: things like recognizing images, sentiment analysis, or natural language processing. Those are some very powerful applications. When you get to automating judgment, like deciding whether an email is spam, it’s okay and it’s getting better. But then there’s a whole area that gets really creepy and/or icky: predicting social outcomes.

You might have a machine learning algorithm that you can point at somebody’s social media feed to count how many times a beer bottle appears in their shared photos. But what you shouldn’t be doing is using that algorithm to determine whether that person is a good candidate to hire. Sometimes in the news we see those conflated. Just because you can go through somebody’s feed and find these correlations doesn’t mean you can predict the future, and we shouldn’t be putting machine learning in the role of trying to divine the future, because that’s very, very sensitive territory.

And of course, you need to use common sense. There’s a rule of thumb that says if your system has more than a hundred if-else statements, it’s time to switch to machine learning. Now, of course, for you it might be 90, it might be 110, it might be 1,000. But again, machine learning is not a silver bullet. If you have a simple, straightforward use case, program the simple, straightforward use case. It’ll be easier to maintain, easier to understand, and easier to pass on to the next developer.

Say, for example, you had an application that wanted to suggest the next app to use after you close the first one. Well, you could use machine learning, building in thousands of signals based on time of day, user activity, social media threads, how often they’ve called their mom in the last week, and so on. There are a lot of signals you could feed into a very complicated machine learning model. Or you could say: 70% of the time they just want to use what’s most popular on their phone, so that’s what we’ll suggest next. Vastly simpler, as the sketch below shows. You need to be able to make a significant justification for the investment in machine learning; it needs to be some number of times better than the simple alternative.
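For comparison, the “vastly simpler” alternative fits in a few lines. A sketch, with a hypothetical launch history as the input:

```python
# Suggest the next app by popularity alone: no signals, no model, no $1.4M.
from collections import Counter

def suggest_next_app(launch_history):
    """Return the most frequently launched app in the history."""
    return Counter(launch_history).most_common(1)[0][0]

print(suggest_next_app(["mail", "maps", "mail", "camera", "mail"]))  # mail
```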

So let’s talk about challenges to machine learning. You need data. If you’re going to find patterns in data, you need a sizable amount of data to search. Google published a paper recently about their latest chatbot, named Meena, and to train it they used 341 gigabytes of source text. There are not a whole lot of places where you just find 341 gigabytes of text lying around. That’s significant; that’s something only possible at Google scale. They used it to train their chatbot, and in the course of training it, again doing all that math, refining the weights, doing the math again, they burned approximately $1.4 million in compute. Imagine coming into the office the next month and having to explain where $1.4 million went.

And you see the results there on the right. I don’t know how many folks have played with ELIZA, the system originally built back in the 1960s; I struggle to figure out how this is better. And again, the machine learning hasn’t understood what you’re saying. It’s just gotten really good at picking up keywords and parroting them back to you, which may be sufficient if you’re talking about something like a support system, where somebody asks to pay a bill and the system recognizes the word “bill” and knows the usual response should be, “Great, what account?” or, “Which bill?”
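That keyword-spotting behavior is easy to picture in code. A minimal sketch, with hypothetical canned responses; a real support bot would lean on a hosted NLP API rather than bare substring matching:

```python
# Keyword matching dressed up as "understanding": the system never knows
# what a bill is, it just recognizes the word. Responses are hypothetical.
CANNED_RESPONSES = {
    "bill": "Great, which bill would you like to pay?",
    "balance": "Which account's balance would you like to see?",
}

def reply(message):
    for keyword, response in CANNED_RESPONSES.items():
        if keyword in message.lower():
            return response
    return "Sorry, could you rephrase that?"

print(reply("I need to pay a bill"))  # Great, which bill would you like to pay?
```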

There are also regulatory, privacy, and ethics concerns with using machine learning. We’re going to talk more about probably the least-considered ethical issue, power consumption, in just a moment, because frankly there are some pretty significant ethical ramifications for these types of systems. Again, when you start getting into the prediction space: what are the ethics of using a machine learning system in healthcare if the dataset it was trained on was only ever built from white male results? Is it going to cause a disparate number of misdiagnoses for people outside that dataset population? All of those are fairly significant things, but lastly I want to hit on that operationalization piece.

You need to continue to tune your model, especially for something like natural language processing. Language is alive, language changes, language is fluid. So thinking, “Okay, we’ll spend our $1.4 million now and use this model for the next decade,” is probably false. There are brand-new memes, brand-new things my kids say that I don’t understand. Does anybody know that “tsk-tsk-tsk” hydroflask thing? Okay, well, he’s a former DJ; he’s down with the kids. I was sitting down to dinner and my eleven- and seven-year-olds suddenly started talking in memes, and I have never felt older in my entire life.

All right, but more about that power. You might have seen that the compute used to train Meena consumed the same amount of power that an average American household uses in 28 years. Other models have generated some interesting headlines too. GPT-2 is a well-known text model, produced by OpenAI, and training it emitted as much carbon as five cars do over their lifetimes; I’ll be quick to point out that that figure includes the manufacturing of those cars, not just their operation. Very, very significant. And the amount of power needed is rising at a very rapid rate.

I alluded to the fact that there are a lot of ethical ramifications. This is the only one I’m going to take a side tangent on, because I don’t think many people think about it: if you are doing machine learning and you can pick which region your workload runs in, Amazon actually has regions that are carbon offset. It’s not US East. So think about that. If it doesn’t matter where your compute runs, spend five minutes Googling which region is carbon offset. That would do us all a big favor long term, and my daughters will be able to continue going “tsk-tsk-tsk” long after I’ve gone away.
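Acting on that is a one-line change in most SDKs. A sketch using boto3; whether a particular region is actually carbon offset is an assumption to verify against AWS’s current sustainability documentation, and us-west-2 is used here purely as an example:

```python
# Pin region-scoped work to a region you've verified is carbon offset.
import boto3

GREEN_REGION = "us-west-2"  # assumption: confirm against AWS's published list

# Any regional client works the same way; SageMaker is shown because it's a
# common home for machine learning training jobs.
sagemaker = boto3.client("sagemaker", region_name=GREEN_REGION)
```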

We’ve been here before, though. Think about it: what if every company decided that, in order to build their application, they were going to send a fleet of cars to photograph every single inch of every single street? That would be ridiculous. We had one company, Google, do that, and we use their API to consume it. We would also be in big trouble if every single startup decided to stand up its own data center for scalable compute. Why don’t we just have AWS or Google or Azure do their thing, and we consume it via API? Finally, imagine if every startup that wanted to send a text message or place a phone call from their application had to negotiate its own contracts with the telcos. The barriers to innovation would be horrible. Better to let someone like Twilio do that once, and then we all benefit from their contract.

In the same way that these companies have exposed services via APIs, the future very much looks like leaving the big model building to the experts who can burn $1.4 million on a model, and then we consume that model through an API. This is nothing new. David mentioned separation of concerns; it would be the same here. You have your code, you focus on your business problem, and then, through a web or net API, you use someone else’s generated model. You leave it to them to figure out where all that data is coming from, how they’re refining it, and how they’re updating it. Their compute cost is probably divided across all their consumers, so you probably get a better rate than if you had tried to create it yourself.
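The consumption side can be as small as one HTTP call. A sketch; the endpoint, payload shape, and credential below are hypothetical stand-ins for whatever provider you pick:

```python
# Rent someone else's expensively trained model over HTTP.
import requests

API_KEY = "your-provider-issued-key"  # hypothetical credential
ENDPOINT = "https://api.example-ml-vendor.com/v1/vision/classify"  # hypothetical

def classify_image(image_url):
    """Ask the provider's hosted model what an image contains."""
    resp = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"image_url": image_url},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"label": "chihuahua", "confidence": 0.93}
```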

As for the regulatory, privacy, and ethics side of this: of course, you need an understanding of how they’re doing things. You can’t just wash your hands of it and walk away if it’s suddenly determined that they’re sacrificing virginal chickens in order to get their data. But relying on someone else’s information and models does provide some layer of separation, some layer of abstraction. And finally, operationalization becomes their problem. You can continue to innovate on your own business problem and leave the generated model, and all the concerns that come with it, to someone else through that API.

And this is already happening. This was a survey of which types of AI are consumed through an API. Language is first and foremost: think natural language processing, sentiment analysis, that kind of thing. It’s followed by speech, vision, data discovery, and conversation. With the exception of data discovery, these are all perception; recall the Princeton professor’s three-column breakdown of where AI is having genuine impact. How do we perceive the information coming in, and how do we scale our response to it? This is where the APIs are delivering benefit. And of course, if we’re going to consume these APIs, we have a responsibility to evaluate their claims.

There’s a great IEEE article about three ways to evaluate them; I’ve added a fourth here. There are certainly other folks online who have done rigorous testing along the lines of, “I’m going to take all of the major leading AI houses and compare them against each other.” But the first is: consider incentives.

If somebody’s business model makes it beneficial for them not to be transparent about how the model is generated or how the data is collected, that’s probably not a service you want to use. You also need to watch out for hype salad: if they’re advertising the latest blockchain, machine learning, IoT whizzbang gobbledygook, that’s a problem. And certainly leverage the work of others. Again, this is a fantastic article; not everything out there is as comprehensive as this gentleman’s piece, but it’s fantastically done, well written, well researched, and goes into some depth comparing all the providers on their various capabilities.

And then finally, interrogate the data. We all need to be doing this as we become an increasingly data-driven culture, but we’re not good at it. There was a survey that asked folks, “Where does your data expertise come from? How do you know whether your data is good or not? Do you have best practices in place for interrogating the data?” And 44% said no: I don’t know, it fell off a truck, and now I’m using it. Increasingly, that’s not going to be an acceptable answer. We need best practices for interrogating data, especially through APIs, because we may never have our hands directly on that data. We need to know how to talk to those vendors and how to understand where their data is coming from.

Furthermore, when asked, “How did you learn to analyze data?” nearly 59% of folks said, “Oh, I’m self-taught.” Which may be great; I’m self-taught in a number of things. But as we move forward, there need to be some common understandings, common approaches, and best practices for how we look at this data, and there probably needs to be more education about that. There’s a great white paper out there about nutritional labels for data. Rather than leaving it up to you and me as individuals to dig deep into things and figure out how this stuff was composed, it proposes a common format, much like the label on anything you buy in a grocery store.

Again: common standards, common format. So I can quickly look and see, okay, for my given data model, maybe I don’t really need that much metadata, but I really need some probabilistic modeling of how this was put together. That is still being formed, and if you have any say at your company, this is something you should start asking about. How do we interrogate our data? Do we have best practices for that? And if we don’t, let’s start referring to some of the thought swirling around the industry right now about how to put this together. Got it?
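To make the idea concrete, here is a hypothetical sketch of what such a label might contain. The fields are illustrative assumptions, not a published standard:

```python
# A "nutritional label" for a dataset, in the spirit of the white paper above.
# Every field and value here is hypothetical.
DATASET_LABEL = {
    "name": "dog-breed-images",
    "collected": "2018-2019, public photo-sharing sites",
    "size": {"examples": 120_000, "bytes": 4_200_000_000},
    "population": "hobbyist photos; skews toward North American breeds",
    "known_gaps": ["few low-light images", "muffins occasionally mislabeled"],
    "license": "CC BY 4.0",
    "last_reviewed": "2020-01-15",
}
```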

All right. In conclusion, there are tons and tons of resources online. Some of the stuff I found very valuable, as a recovering developer who wants to understand this space, has been the gentle entry points. I’m a big YouTube guy, so Crash Course AI and The Coding Train have been great. There are some GitHub resources; Fast.ai is great if you’re more of a data-science type who’s comfortable using those tools. There was also a fantastic side-by-side comparison site. In theory, because these are APIs, somebody should have a portal where I can test sentiment analysis by putting some text in a field and hitting a button for whichever cloud provider I want to run it against; then, based on that initial response, I know whom to choose. That was intense.to or intent.io or however you pronounce it.
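In the absence of such a portal, a rough fan-out like the following gets you partway there. The provider endpoints are hypothetical placeholders:

```python
# Send one sample to several providers and compare answers side by side.
import requests

PROVIDERS = {
    "vendor-a": "https://api.vendor-a.example/v1/sentiment",  # hypothetical
    "vendor-b": "https://api.vendor-b.example/v1/sentiment",  # hypothetical
}

def compare_sentiment(text):
    """Return each provider's verdict on the same text."""
    results = {}
    for name, url in PROVIDERS.items():
        resp = requests.post(url, json={"text": text}, timeout=10)
        results[name] = resp.json()
    return results

print(compare_sentiment("I love this talk!"))
```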

When I went to take a screenshot of that comparison site today, it was down, so I don’t know what happened, and I can’t find a good alternative. Really, we should have almost an “if this, then that” where you can put some samples in and see how they run across various providers without having to sign up with a credit card and make that shotgun wedding happen. Algorithmia started out as a marketplace for algorithms before AI was the big catchall; they’ve since pivoted and are now a machine learning operationalization company. They no longer have that marketplace where you could compare and contrast different algorithms, so they’re out of the picture. ProgrammableWeb has some keywords and searches that can be done on AI, so that’s an opportunity. And then there’s also finding your own data.

Google’s dataset search is fantastic. I spent a few minutes searching terms off the top of my head and found datasets I had no idea existed. If you want to start playing around with this stuff, that’s a great place to seed your initial work. getdata.io is also great. The last thing I’ll say: I’ll have my notes posted to my website in the near future, and as David mentioned, all of these talks are recorded. If you have any questions, you can certainly find me on any of those forums. Thank you for your attention.
