Categories: ProgrammableWeb

Financial APIs Continue to See Big Growth

Editor’s note: Be sure to check out our research on the overall growth of Web APIs since 2005. At the time of writing, this is the most recent data we have, but check the research page to see if a more recent article is available. We will continue to update the overall growth chart regularly; other charts will be updated less frequently.

In past years when we have looked at the fastest-growing API categories, financial-services-related categories have always ranked highly. This isn’t a surprise: Open Banking initiatives, the rise of digital forms of payment, the need for real-time market information, and ever-increasing customer demand for connected financial services make this set of related categories an attractive space for developers. In this article we take a look at how much these categories have grown in recent years.

The first thing we did was to consider what categories aligned most closely with Financial Services. From the categories vocabulary within the central taxonomy that cuts across all of the content we publish on ProgrammableWeb, we settled on the Financial, Payments, Banking and Monetization categories to capture the APIs in this space as fully as possible.

We can see from the graph that there have been two points when growth took off: 2012, and the period from 2017 to 2019. The first matches up well with the overall growth we saw in our directory around 2012. The second started around three years ago and deserves a closer look. A previous research article looked at the most popular API categories, including the Financial and Payments categories. At the time we mentioned that the “open banking trend is one that had been slow to happen but now seems to be moving at an accelerated pace. ProgrammableWeb analyst Mark Boyd has cited studies stating that ‘the majority of banks recognize that an open banking platform is the endgame for the industry.’ That recognition combined with regulatory pressure such as [the European Union-driven Revised Payment Service Directive] PSD2 (with a January 2018 deadline) has meant that more Financial APIs than ever are now being released.”

The data would seem to back up this argument. For the 2017-2019 period, we have seen an average of more than 600 Financial APIs added per year. The table below shows the number of APIs across the Financial, Payments, Banking and Monetization categories that were added to our directory each year.

Year                 APIs Added (Financial, Monetization, Payments, Banking)
2005                 3
2006                 14
2007                 12
2008                 71
2009                 83
2010                 93
2011                 137
2012                 487
2013                 366
2014                 314
2015                 231
2016                 431
2017                 745
2018                 437
2019                 646
2020 (through July)  211
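As a quick check on the “more than 600 Financial APIs added per year” figure, here is a minimal Python sketch that recomputes the 2017-2019 average from the table above (the numbers are copied from the table; the script itself is only an illustration):

```python
# Yearly additions across the Financial, Monetization, Payments, and Banking
# categories, copied from the table above (2020 is a partial year, through July).
apis_added = {
    2005: 3, 2006: 14, 2007: 12, 2008: 71, 2009: 83, 2010: 93,
    2011: 137, 2012: 487, 2013: 366, 2014: 314, 2015: 231,
    2016: 431, 2017: 745, 2018: 437, 2019: 646,
}

# Average for the 2017-2019 period cited in the article.
recent_years = (2017, 2018, 2019)
average = sum(apis_added[y] for y in recent_years) / len(recent_years)
print(f"Average APIs added per year, 2017-2019: {average:.0f}")  # roughly 609
```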

Financial-related APIs look like they will continue to grow for the time being. If you are an API provider, adding a listing in our directory is a great way to improve the discoverability of your APIs and raise awareness of your program. If you would like to include your APIs in our directory, you can use the link below.

https://www.programmableweb.com/add/api

Author: wsantos

Categories: ProgrammableWeb

Post Corona, Code for America Looks to Join Devs, Civic Techies and Gov Officials To Transform GovOps

Editor’s Note: The interview in this article took place prior to the new standard operating procedure of canceling large gatherings due to the current coronavirus pandemic. Tune into the ProgrammableWeb Radio Podcast on Apple iTunes or on SoundCloud.


Transcript of: CodeForAmerica

David Berlind: Hi, I’m David Berlind. Today is Tuesday, February 18th, 2020 and this is ProgrammableWeb’s Developers Rock podcast. With me today is Jennifer Pahlka. She is the founder of CodeforAmerica.org. They’ve got a big event coming up. Jennifer, thanks for joining us. Tell us all about this event that you’ve got coming up, what in March, I think it is?

Jennifer Pahlka: Yeah, it’s in March. It’s in DC. Hi David. Thanks so much for having me on.

David: It’s great to have you. Yeah, it’s been so long. We used to work together a long time ago, so it’s really great to see you again.

Jennifer: Yeah, I’m delighted, and it’s wonderful to be able to talk about the Code for America summit, which is happening in the middle of March in DC. It’s basically where everybody who wants to make government work better in a digital age comes together and talks about how hard it is, how much progress we’re making, and really, what the world could look like if government got really good at digital services.

David: Well, why does the government need something like CodeforAmerica.org floating around in their midst?

Jennifer: Well, I think government really wants to do well, but we built government in a pre-digital age and it’s a little bit harder to move this very large risk-averse institution into the kinds of ways that, that companies that are thriving in the digital age tend to work. So there’s speed issues. There’s issues of really working with users in the way that our best digital services are built is really tapping into what users want, and that can be really hard in government. So, we’re just trying to give them some of the principles and practices of the digital age and apply it to government, because, in government, that’s where we’re really serving everyone. These services are enormous. They matter hugely to the American people and we’ve got to give government that competency and that capability of being great at digital. And that just requires some retraining and some different perspectives. It’s really about being able to adopt the way that the internet works in government, in the service of the American people.

David: And so is CodeforAmerica.org sort of an officially sanctioned body by the government? Or did you just sort of sprout up and start helping because you felt it was sort of your civic duty to do this?

Jennifer: Yeah, that’s right. We thought it was our civic duty to do this. So I started Code for America 10 years ago. It’s a 501(c)(3), so yes, we’re sanctioned by the government as a nonprofit and we work really closely with government. So what we say is, look, we can change this. It’s very hard to change government, but we can do it if we do three things. We show what’s possible by making government services so good that they inspire change. If you try, for instance, the application for food stamps, the Supplemental Nutrition Assistance Program [SNAP], it’s very cumbersome. It’s not built well for users. It doesn’t work on a mobile phone. And we’ve shown that it can be dramatically better with a service called GetCalFresh. It serves all of California now for anybody applying for SNAP. That just sort of ups the bar and gets people thinking about what could be possible.

But then, we help government people do this better themselves. That’s what the summit is all about: we can’t do all the services for government, but if we share how this works and how you apply these principles and practices in government, then people in government can make their own services really great.

And then the third thing we do is we build a movement. There’s now this amazing movement of people who understand the digital world and either are coming into government or are already in government, who are getting together and saying, “Let’s write a new playbook. Let’s hold everybody accountable to a much better level of service, a much better experience and better outcomes.” And that’s really the movement that then drives even more good examples and even more ability to get people on board with the practices.

David: Wonderful. So I was looking at the website for the summit and I saw that you’re expecting something like a thousand people. Based on what you just said, are the majority of those people in government now, coming to get inspired? Or is it a blend of people from government and others like you who felt it’s their civic duty to help the government out, sort of a collaboration? What’s the attendee mix like?

Jennifer: Yeah, it’s a great mix. A lot of the people are in government, in fact, somewhere near a majority of the people are in government, and they’re either already in one of the groups like the United States Digital Service or the Colorado Digital Service, or San Francisco has its own digital service, which are groups in government who sort of self-define as running by the Code for America playbook. They’re doing things in a user-centered and data-driven way. Sometimes they struggle with that because government has a bunch of constraints that make it hard to do that. But they are figuring out ways to do it and they’re successfully doing it in many services in government, though we have a long way to go.

The rest of it is a mix of folks. Some of our vendors are there. We’ve got what we call civic technologists, who work like we do, sort of from the outside, to improve government from that outside perspective. We’ve got people who are just learning about it. I mean, even staffers from congressional offices who are in charge of the oversight of government digital are also coming to learn and say, “Wait a minute, there’s a better way to do this and we’re going to have to be part of that solution.”

David: And so it’s a mixture. You’re sort of talking about cities and states, so does Code for America work across all levels of government, from municipal all the way up to federal?

Jennifer: Our biggest projects right now are with state governments. States do a lot in the areas that we specialize in, which are the social safety net and the criminal justice system. We also work a lot with counties, and then we have 82 cities around the country that have a Code for America brigade, which is essentially a chapter, a volunteer chapter. So we specialize in working at the city, county, and state level and our big projects tend to be with them. However, the principles and practices that we articulate and evangelize to the rest of government are applicable at the federal level as well. And so the attendees at the Code for America summit are federal, state and local. In fact, the last couple of events it’s been about a third, a third, a third split between those three groups. If you’re working with states, you’re also working with the federal government in the sense that the federal government regulates most of these programs like SNAP and Medicaid, and so we do work with them as well, but they aren’t a client, if that makes sense.

David: I see. Now the name of the organization is Code for America. And this is the Developers Rock podcast. When I hear the word “code”, I think developer, because you’re coding for somebody. So what’s in it for developers? Do developers come to this event, and what do they do when they get there?

Jennifer: Yeah, so a lot of them. If you look at the speakers who are there speaking, a good chunk of them are developers. They have developed wonderful digital services at the Veterans’ Administration, with state governments, etc. And what they get out of this more broadly is that they get to work on the things that matter most. We have had so many developers come, whether it’s from the private sector or the social sector, into government and say, now that I’m coding a better veterans’ healthcare application, I can never go back, because I’m so aware now of the people that I’m helping. I’m helping people who need help the most, and my impact is bigger than it has ever been on the parts of our country that absolutely need the most help. And so, what developers really get out of coming to the summit is a community and the skills and the tactics they need to be successful making those digital services in government, but what they get out of it more broadly is this incredible meaning and satisfaction in their jobs.

David: Sure. Very meaningful work, and they get to rub shoulders with birds of a feather, other people who feel the same way. So it sort of escalates that feeling of altruism and contribution to the betterment of the nation. One question I have for you though, and I don’t know if you’ve given this any thought: a lot of us look at what’s going on at the federal level of our government and we see a lot of nothing getting done. It almost seems like the government doesn’t have an interest in getting things done. You have people yelling across the aisle at each other, and they complain that we’re not getting any legislation done, and they blame each other. The government is sort of stalled in this way by the people who lead it. Is there any hope of this movement coming from underneath and getting things going, sort of greasing the wheels of government, so to say, as you pointed out?

Jennifer: I think beneath the surface of that dysfunction that we all experience, there is real progress. Six years ago, if you were a veteran, almost nobody could get through the healthcare application. It required this very specific combination of an outdated browser and an outdated version of Adobe PDF. And if you didn’t have that exact combination, which just happened to be the sort of weird, outdated combination that VA staffers’ computers had, you couldn’t load the application form.

That’s just one of a dozen things that were wrong with that one specific service. Now you can, and people do, and they use a service that looks so good that they go, wait, did government build this? Because it’s simple and it’s easy and it’s clear. That’s the work of the United States Digital Service, and things like that are happening all over government. It’s not making the headlines, but it really is progress.

I mean, it’s the same thing with our SNAP application that we’re doing in different states now. For example, we have these laws passing in states around the country decriminalizing various convictions, often related to the decriminalization of marijuana, but not always. And we’ve been working with states to figure out how to clear those convictions in a way that doesn’t involve 10 months of paperwork, but just says, wait a minute, this person has a conviction, it’s in a database. Let’s just change the record in the database. And that’s the kind of progress. It’s not just that we’re making forms better, it’s that we’re helping people in government understand how the digital world works and be far more efficient. Like leap-frogging the process and just saying, this is just a matter of changing a record in a database. Let’s help you do it.

Hundreds of thousands of people have already gotten relief because we’ve been able to implement these laws a hundred times faster than they would’ve been, and now millions more across the country are going to. That’s hope, right? There is functioning government happening, it’s getting better, we’re getting better at doing government right. And that’s happening at the same time as all of that political dysfunction. I feel very lucky that I get to look at that every day and use it as a real counterpoint to some of the less beautiful parts of government.

David: Right, so while some of the people who are leading the show don’t seem to be able to get anything done, you’re busy at work, under the hood, getting a lot of stuff done. And I would have to support the fact that there are people who are looking to do that kind of meaningful work. I don’t know if you know this, but I’m the co-leader of the Federal API meetup in Washington DC, the first Tuesday of every month. I am Gray Brooks’s wingman. I don’t know if you remember Gray Brooks from your days in DC.

Jennifer: He’s the best.

David: He is the best. He’s an amazing human being. And so, I’m down there every month helping out and helping that meetup run. And of course, not only can everybody who’s watching this come to that meetup if you’re in the DC area, but I also get to meet a lot of people from the various corners of government, especially the federal government, and they’re all very interested in moving the ball forward. So let’s go back to the summit real quick. March 11th to March 13th. Can anybody come? Is there a fee? How do you sign up?

Jennifer: Register at codeforamerica.org/summit. It’s a cheaper rate if you’re in government; those budgets are a bit limited. Private sector folks pay a bit more, but it’s absolutely affordable and the content is amazing.

David: And you might get to meet Jennifer Pahlka, who is the founder of CodeforAmerica.org, there. Right?

Jennifer: I will absolutely be there. I’m giving a talk and I’ll spend the best three days of the year with the people that I admire the most.

David: Wonderful. Well, there you go. That’s the Code for America summit. It’s going to have workshops, receptions, breakout sessions, lightning talks, and keynotes. Of course, one of those will be by Jennifer here. Jennifer Pahlka. Thank you very much for joining us.

Jennifer: Thank you so much David. This has been fun and it’s great to see you.

David: It’s been terrific to talk to you. We’ve been speaking with Jennifer Pahlka. She is the founder of CodeforAmerica.org. They’re running their big annual event in Washington DC, March 11th to March 13th, 2020. Hope to see you there. For now, I’m signing off. I’m David Berlind, editor in chief of ProgrammableWeb. If you want more videos like this one, just go to our YouTube channel at www.youtube.com/programmableweb, or you can go to ProgrammableWeb.com and you’ll find an article with this video embedded and the full text transcription of what Jennifer just said to you, as well as the audio-only version if you prefer it in podcast form.

Until the next video, thanks for joining us.

Author: david_berlind

Categories: IEEE Spectrum

Back To The Elusive Future

In the January issue, Spectrum’s editors make every effort to bring the coming year’s important technologies to your attention. Some we get right, others less so. Twelve years ago, IEEE Fellow, Marconi Prize winner, and beloved Spectrum columnist Robert W. Lucky wrote about the difficulty of predicting the technological future. We’ve reprinted his wise words here.

Why are we engineers so bad at making predictions?

In countless panel discussions on the future of technology, I’m not sure I ever got anything right. As I look back on technological progress, I experience first retrospective surprise, then surprise that I’m surprised, because it all crept up on me when I wasn’t looking. How can something like Google feel so inevitable and yet be impossible to predict?

I’m filled with wonder at all that we engineers have accomplished, and I take great communal pride in how we’ve changed the world in so many ways. Decades ago I never dreamed we would have satellite navigation, computers in our pockets, the Internet, cellphones, or robots that would explore Mars. How did all this happen, and what are we doing for our next trick?

The software pioneer Alan Kay has said that the best way to predict the future is to invent it, and that’s what we’ve been busy doing. The public understands that we’re creating the future, but they think that we know what we’re doing and that there’s a master plan in there somewhere. However, the world evolves haphazardly, bumbling along in unforeseen directions. Some seemingly great inventions just don’t take hold, while overlooked innovations proliferate, and still others are used in unpredicted ways.

When I joined Bell Labs, so many years ago, there were two great development projects under way that together were to shape the future—the Picturephone and the millimeter waveguide. The waveguide was an empty pipe, about 5 centimeters in diameter, that would carry across the country the 6-megahertz analog signals from those ubiquitous Picturephones.

Needless to say, this was an alternative future that never happened. Our technological landscape is littered with such failed bets. For decades engineers would say that the future of communications was video telephony. Now that we can have it for free, not many people even want it.

The millimeter waveguide never happened either. Out of the blue, optical fiber came along, and that was that. Oh, and analog didn’t last. Gordon Moore made his observation about integrated-circuit progress in the midst of this period, but of course we had a hard time believing it.

Analog switching overstayed its tenure because engineers didn’t quite believe the irresistible economics of Moore’s Law. Most engineers used the Internet in the early years and knew it was growing at an exponential rate. But, no, it would never grow up to be a big, reliable, commercial network.

The irony at Bell Labs is that we had some of the finest engineers in the world then, working on things like the integrated circuit and the Internet—in other words, engineers who were responsible for many of the innovations that upset the very future they and their associates had been working on. This is the way the future often evolves: Looking back, you say, “We should have known” or “We knew, but we didn’t believe.” And at the same time we were ignoring the exponential trends that were all around us, we hyped glamorous technologies like artificial intelligence and neural networks.

Yogi Berra, who should probably be in the National Academy of Sciences as well as the National Baseball Hall of Fame, once said, “It’s tough making predictions, especially about the future.” We aren’t even good at making predictions about the present, let alone the future.

Journalists are sometimes better than engineers about seeing the latent future embedded in the present. I often read articles telling me that there is a trend where a lot of people are doing this or that. I raise my eyebrows in mild surprise. I didn’t realize a lot of people were doing this or that. Perhaps something is afoot, and an amorphous social network is unconsciously shaping the future of technology.

Well, we’ve made a lot of misguided predictions in the past. But we’ve learned from those mistakes. Now we know. The future lies in quantum computers. And electronics will be a thing of the past, since we’ll be using optical processing. All this is just right around the corner. 

Reprinted from IEEE Spectrum, Vol. 45, September 2008.

Categories: IEEE Spectrum

A Path Towards Reasonable Autonomous Weapons Regulation

Editor’s Note: The debate on autonomous weapons systems has been escalating over the past several years as the underlying technologies evolve to the point where their deployment in a military context seems inevitable. IEEE Spectrum has published a variety of perspectives on this issue. In summary, while there is a compelling argument to be made that autonomous weapons are inherently unethical and should be banned, there is also a compelling argument to be made that autonomous weapons could potentially make conflicts less harmful, especially to non-combatants. Despite an increasing amount of international attention (including from the United Nations), progress towards consensus, much less regulatory action, has been slow. The following workshop paper on autonomous weapons systems policy is remarkable because it was authored by a group of experts with very different (and in some cases divergent) views on the issue. Even so, they were able to reach consensus on a roadmap that all agreed was worth considering. It’s collaborations like this that could be the best way to establish a reasonable path forward on such a contentious issue, and with the permission of the authors, we’re excited to be able to share this paper (originally posted on Georgia Tech’s Mobile Robot Lab website) with you in its entirety.

Categories: IEEE Spectrum

Many Experts Say We Shouldn’t Worry About Superintelligent AI. They’re Wrong

Editor’s note: This article is based on a chapter of the author’s newly released book, Human Compatible: Artificial Intelligence and the Problem of Control, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House.

AI research is making great strides toward its long-term goal of human-level or superhuman intelligent machines. If it succeeds in its current form, however, that could well be catastrophic for the human race. The reason is that the “standard model” of AI requires machines to pursue a fixed objective specified by humans. We are unable to specify the objective completely and correctly, nor can we anticipate or prevent the harms that machines pursuing an incorrect objective will create when operating on a global scale with superhuman capabilities. Already, we see examples such as social-media algorithms that learn to optimize click-through by manipulating human preferences, with disastrous consequences for democratic systems.

Nick Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies presented a detailed case for taking the risk seriously. In what most would consider a classic example of British understatement, The Economist magazine’s review of Bostrom’s book ended with: “The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.”

Surely, with so much at stake, the great minds of today are already doing this hard thinking—engaging in serious debate, weighing up the risks and benefits, seeking solutions, ferreting out loopholes in solutions, and so on. Not yet, as far as I am aware. Instead, a great deal of effort has gone into various forms of denial.

Some well-known AI researchers have resorted to arguments that hardly merit refutation. Here are just a few of the dozens that I have read in articles or heard at conferences:

Electronic calculators are superhuman at arithmetic. Calculators didn’t take over the world; therefore, there is no reason to worry about superhuman AI.

Historically, there are zero examples of machines killing millions of humans, so, by induction, it cannot happen in the future.

No physical quantity in the universe can be infinite, and that includes intelligence, so concerns about superintelligence are overblown.

Perhaps the most common response among AI researchers is to say that “we can always just switch it off.” Alan Turing himself raised this possibility, although he did not put much faith in it:

If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled…. This new danger…is certainly something which can give us anxiety.

Switching the machine off won’t work for the simple reason that a superintelligent entity will already have thought of that possibility and taken steps to prevent it. And it will do that not because it “wants to stay alive” but because it is pursuing whatever objective we gave it and knows that it will fail if it is switched off. We can no more “just switch it off” than we can beat AlphaGo (the world-champion Go-playing program) just by putting stones on the right squares.

Other forms of denial appeal to more sophisticated ideas, such as the notion that intelligence is multifaceted. For example, one person might have more spatial intelligence than another but less social intelligence, so we cannot line up all humans in strict order of intelligence. This is even more true of machines: Comparing the “intelligence” of AlphaGo with that of the Google search engine is quite meaningless.

Kevin Kelly, founding editor of Wired magazine and a remarkably perceptive technology commentator, takes this argument one step further. In “The Myth of a Superhuman AI,” he writes, “Intelligence is not a single dimension, so ‘smarter than humans’ is a meaningless concept.” In a single stroke, all concerns about superintelligence are wiped away.

Now, one obvious response is that a machine could exceed human capabilities in all relevant dimensions of intelligence. In that case, even by Kelly’s strict standards, the machine would be smarter than a human. But this rather strong assumption is not necessary to refute Kelly’s argument.

Consider the chimpanzee. Chimpanzees probably have better short-term memory than humans, even on human-oriented tasks such as recalling sequences of digits. Short-term memory is an important dimension of intelligence. By Kelly’s argument, then, humans are not smarter than chimpanzees; indeed, he would claim that “smarter than a chimpanzee” is a meaningless concept.

This is cold comfort to the chimpanzees and other species that survive only because we deign to allow it, and to all those species that we have already wiped out. It’s also cold comfort to humans who might be worried about being wiped out by machines.

The risks of superintelligence can also be dismissed by arguing that superintelligence cannot be achieved. These claims are not new, but it is surprising now to see AI researchers themselves claiming that such AI is impossible. For example, a major report from the AI100 organization, “Artificial Intelligence and Life in 2030 [PDF],” includes the following claim: “Unlike in the movies, there is no race of superhuman robots on the horizon or probably even possible.”

To my knowledge, this is the first time that serious AI researchers have publicly espoused the view that human-level or superhuman AI is impossible—and this in the middle of a period of extremely rapid progress in AI research, when barrier after barrier is being breached. It’s as if a group of leading cancer biologists announced that they had been fooling us all along: They’ve always known that there will never be a cure for cancer.

What could have motivated such a volte-face? The report provides no arguments or evidence whatever. (Indeed, what evidence could there be that no physically possible arrangement of atoms outperforms the human brain?) I suspect that the main reason is tribalism—the instinct to circle the wagons against what are perceived to be “attacks” on AI. It seems odd, however, to perceive the claim that superintelligent AI is possible as an attack on AI, and even odder to defend AI by saying that AI will never succeed in its goals. We cannot insure against future catastrophe simply by betting against human ingenuity.

If superhuman AI is not strictly impossible, perhaps it’s too far off to worry about? This is the gist of Andrew Ng’s assertion that it’s like worrying about “overpopulation on the planet Mars.” Unfortunately, a long-term risk can still be cause for immediate concern. The right time to worry about a potentially serious problem for humanity depends not just on when the problem will occur but also on how long it will take to prepare and implement a solution.

For example, if we were to detect a large asteroid on course to collide with Earth in 2069, would we wait until 2068 to start working on a solution? Far from it! There would be a worldwide emergency project to develop the means to counter the threat, because we can’t say in advance how much time is needed.

Ng’s argument also appeals to one’s intuition that it’s extremely unlikely we’d even try to move billions of humans to Mars in the first place. The analogy is a false one, however. We are already devoting huge scientific and technical resources to creating ever more capable AI systems, with very little thought devoted to what happens if we succeed. A more apt analogy, then, would be a plan to move the human race to Mars with no consideration for what we might breathe, drink, or eat once we arrive. Some might call this plan unwise.

Another way to avoid the underlying issue is to assert that concerns about risk arise from ignorance. For example, here’s Oren Etzioni, CEO of the Allen Institute for AI, accusing Elon Musk and Stephen Hawking of Luddism because of their calls to recognize the threat AI could pose:

At the rise of every technology innovation, people have been scared. From the weavers throwing their shoes in the mechanical looms at the beginning of the industrial era to today’s fear of killer robots, our response has been driven by not knowing what impact the new technology will have on our sense of self and our livelihoods. And when we don’t know, our fearful minds fill in the details.

Even if we take this classic ad hominem argument at face value, it doesn’t hold water. Hawking was no stranger to scientific reasoning, and Musk has supervised and invested in many AI research projects. And it would be even less plausible to argue that Bill Gates, I.J. Good, Marvin Minsky, Alan Turing, and Norbert Wiener, all of whom raised concerns, are unqualified to discuss AI.

The accusation of Luddism is also completely misdirected. It is as if one were to accuse nuclear engineers of Luddism when they point out the need for control of the fission reaction. Another version of the accusation is to claim that mentioning risks means denying the potential benefits of AI. For example, here again is Oren Etzioni:

Doom-and-gloom predictions often fail to consider the potential benefits of AI in preventing medical errors, reducing car accidents, and more.

And here is Mark Zuckerberg, CEO of Facebook, in a recent media-fueled exchange with Elon Musk:

If you’re arguing against AI, then you’re arguing against safer cars that aren’t going to have accidents. And you’re arguing against being able to better diagnose people when they’re sick.

The notion that anyone mentioning risks is “against AI” seems bizarre. (Are nuclear safety engineers “against electricity”?) But more importantly, the entire argument is precisely backwards, for two reasons. First, if there were no potential benefits, there would be no impetus for AI research and no danger of ever achieving human-level AI. We simply wouldn’t be having this discussion at all. Second, if the risks are not successfully mitigated, there will be no benefits.

The potential benefits of nuclear power have been greatly reduced because of the catastrophic events at Three Mile Island in 1979, Chernobyl in 1986, and Fukushima in 2011. Those disasters severely curtailed the growth of the nuclear industry. Italy abandoned nuclear power in 1990, and Belgium, Germany, Spain, and Switzerland have announced plans to do so. The net new capacity per year added from 1991 to 2010 was about a tenth of what it was in the years immediately before Chernobyl.

Strangely, in light of these events, the renowned cognitive scientist Steven Pinker has argued [PDF] that it is inappropriate to call attention to the risks of AI because the “culture of safety in advanced societies” will ensure that all serious risks from AI will be eliminated. Even if we disregard the fact that our advanced culture of safety has produced Chernobyl, Fukushima, and runaway global warming, Pinker’s argument entirely misses the point. The culture of safety—when it works—consists precisely of people pointing to possible failure modes and finding ways to prevent them. And with AI, the standard model is the failure mode.

Pinker also argues that problematic AI behaviors arise from putting in specific kinds of objectives; if these are left out, everything will be fine:

AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world.

Yann LeCun, a pioneer of deep learning and director of AI research at Facebook, often cites the same idea when downplaying the risk from AI:

There is no reason for AIs to have self-preservation instincts, jealousy, etc…. AIs will not have these destructive “emotions” unless we build these emotions into them.

Unfortunately, it doesn’t matter whether we build in “emotions” or “desires” such as self-preservation, resource acquisition, knowledge discovery, or, in the extreme case, taking over the world. The machine is going to have those emotions anyway, as subgoals of any objective we do build in—and regardless of its gender. As we saw with the “just switch it off” argument, for a machine, death isn’t bad per se. Death is to be avoided, nonetheless, because it’s hard to achieve objectives if you’re dead.

A common variant on the “avoid putting in objectives” idea is the notion that a sufficiently intelligent system will necessarily, as a consequence of its intelligence, develop the “right” goals on its own. The 18th-century philosopher David Hume refuted this idea in A Treatise of Human Nature. Nick Bostrom, in Superintelligence, presents Hume’s position as an orthogonality thesis:

Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.

For example, a self-driving car can be given any particular address as its destination; making the car a better driver doesn’t mean that it will spontaneously start refusing to go to addresses that are divisible by 17.

By the same token, it is easy to imagine that a general-purpose intelligent system could be given more or less any objective to pursue—including maximizing the number of paper clips or the number of known digits of pi. This is just how reinforcement learning systems and other kinds of reward optimizers work: The algorithms are completely general and accept any reward signal. For engineers and computer scientists operating within the standard model, the orthogonality thesis is just a given.
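To make that point concrete, here is a minimal, illustrative sketch (my own, not code from the article) of a generic tabular Q-learning loop: the reward is just an arbitrary callable, so “maximize paper clips” and “maximize known digits of pi” differ only in the function handed to the learner.

```python
import random
from typing import Callable, Dict, Tuple

State = int
Action = int

def q_learning(
    reward: Callable[[State, Action], float],  # any reward signal at all
    step: Callable[[State, Action], State],    # environment transition
    n_states: int,
    n_actions: int,
    episodes: int = 500,
    horizon: int = 50,
    alpha: float = 0.1,
    gamma: float = 0.9,
    epsilon: float = 0.1,
) -> Dict[Tuple[State, Action], float]:
    """Generic tabular Q-learning. Nothing in this loop depends on what the
    reward 'means': the same code optimizes whatever signal it is given."""
    q = {(s, a): 0.0 for s in range(n_states) for a in range(n_actions)}
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda a_: q[(s, a_)])
            s_next = step(s, a)
            r = reward(s, a)
            best_next = max(q[(s_next, a_)] for a_ in range(n_actions))
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s_next
    return q

# A trivial one-state "environment"; the point is only that the learner is
# indifferent to what the reward represents.
stay = lambda s, a: 0
q_paperclips = q_learning(lambda s, a: float(a == 0), stay, n_states=1, n_actions=3)
q_pi_digits  = q_learning(lambda s, a: float(a == 1), stay, n_states=1, n_actions=3)
```

The objective enters only through the reward argument, which is the orthogonality thesis in miniature: the same optimization machinery can be pointed at more or less any goal.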

The most explicit critique of Bostrom’s orthogonality thesis comes from the noted roboticist Rodney Brooks, who asserts that it’s impossible for a program to be “smart enough that it would be able to invent ways to subvert human society to achieve goals set for it by humans, without understanding the ways in which it was causing problems for those same humans.”

Unfortunately, it’s not only possible for a program to behave like this; it is, in fact, inevitable, given the way Brooks defines the issue. Brooks posits that the optimal plan for a machine to “achieve goals set for it by humans” is causing problems for humans. It follows that those problems reflect things of value to humans that were omitted from the goals set for it by humans. The optimal plan being carried out by the machine may well cause problems for humans, and the machine may well be aware of this. But, by definition, the machine will not recognize those problems as problematic. They are none of its concern.

In summary, the “skeptics”—those who argue that the risk from AI is negligible—have failed to explain why superintelligent AI systems will necessarily remain under human control; and they have not even tried to explain why superintelligent AI systems will never be developed.

Rather than continue the descent into tribal name-calling and repeated exhumation of discredited arguments, the AI community must own the risks and work to mitigate them. The risks, to the extent that we understand them, are neither minimal nor insuperable. The first step is to realize that the standard model—the AI system optimizing a fixed objective—must be replaced. It is simply bad engineering. We need to do a substantial amount of work to reshape and rebuild the foundations of AI.

This article appears in the October 2019 print issue as “It’s Not Too Soon to Be Wary of AI.”

About the Author

Stuart Russell, a computer scientist, founded and directs the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley. This month, Viking Press is publishing Russell’s new book, Human Compatible: Artificial Intelligence and the Problem of Control, on which this article is based. He is also active in the movement against autonomous weapons, and he instigated the production of the highly viewed 2017 video Slaughterbots.