R. David Edelman is director of the Project on Technology, the Economy, & National Security (TENS), part of the MIT Internet Policy Research Initiative (IPRI). He holds joint appointments in the Computer Science & Artificial Intelligence Lab (CSAIL) and the Center for International Studies (CIS). Edelman earned his bachelor's degree from Yale, and master's and doctoral degrees from Oxford, where his scholarship focused on the intersection of international security and cybersecurity. He comes to MIT following a distinguished career managing domestic and foreign policy in the US federal government.
précis: You’re back in academia after working in government for about a decade, so let’s do some grading. How well do you think the United States is doing at handling the policy implications of technological change?
RDE: On Capitol Hill, let’s put it this way: the grade Congress would get for understanding and managing technology policy issues would not make any of my MIT students happy.
At this phase in our history, we cannot afford to have members of Congress who are not devoting the time or energy to understanding technology issues—and when it comes to tech literacy, we're far from 100% among our elected officials. I will say, though, that I am encouraged by a few developments. I'm encouraged that there are a handful of members—of all ages, but including some newly elected—who have decided to take these issues seriously and either start to build them into their brand of accountable representation or talk about them as key to delivering on the mission of smarter government. I just wish they were more numerous.
The executive branch has recently started to understand that technology is not optional for senior-level roles in the government. Today, I don't think you would have a cabinet member walk in and declare with some sort of fossilized pride that technology was something to be handled by low-level bureaucrats.
Looking ahead, I think there is a real opportunity to seize on the momentum, a lot of which was built in the Obama administration, and some of which has been built in the Trump administration. It is to be commended that the Trump administration wrote and executed an executive order on artificial intelligence. There's a lot of controversy over the value of that executive order. For instance, it didn't have new money, and it lacked some specificity. Those sorts of complaints may well be justified. But as someone who has written a half-dozen similar documents, I can tell you that getting the president to sign an executive order doesn't just happen overnight. The administration has put a stake in the ground and has actually sought to build international consensus on these issues. In fact, it's quite a departure from what many people regard as the Trump foreign policy or the Trump policy approach.
In the end, though, the real grade that we should be giving to the administration hasn't come out yet. We simply don't know, because this administration, like all administrations, will need to be judged on the outcomes of what it does.
précis: Compared to the US, who around the world would be the star students?
RDE: I think some look to the Estonians for having a leadership role internationally on cybersecurity, particularly relative to the size of the country. Estonia has certainly been a clear leader in both national cybersecurity policy and domestic technology governance. Estonians pay taxes online and vote online. Israel has distinguished itself in the digital economy. There are tremendous Israeli startups that are being acquired, including by US companies, for vast sums of money. They have found a formula that works for making Israeli innovation synonymous with high-quality innovation.
Then there's the student in the class who is the biggest troublemaker. That's Russia, without question. North Korea gets sent to detention as well. But Russia has been trying since long before the internet age to use technology in innovative ways to create disruption in the international system, to extend its foreign policy aims by other means, and to test the boundaries of conventional security and governance. Those of us who've been studying cybersecurity for a long time, and who have watched Russian doctrine for 15 or 20 years, know that Russia's actions are, on some level, the culmination of what the US has attempted to do in terms of integrating technology into longstanding foreign policy principles. It just so happens that the longstanding foreign policy principles of the Russian Federation are focused on de-legitimizing the Western project of democratic governance. That is unhelpful, to say the least.
précis: You are finishing a book on the international dimensions of cybersecurity. What in particular inspired the project, and what is it focused on?
RDE: In 2008, I sat in a meeting in a reasonably secure building and a reasonably secure room that was all about whether or not the US government would pursue a particular target via cyber means. Just under 10 years later, I was in that same room re-litigating the same question. Out of 30 people in the room, 29 were different, and the lessons learned from the prior decision were limited at best. It was Groundhog Day for public policy. Those episodes show that we are still not clear on how cybersecurity maps onto international security.
When I was in the government, it was my job in part to help figure out how US policy in areas like human rights, innovation, free trade, protection of intellectual property, and national defense fits together with a global, interoperable, secure, and reliable internet. Over the course of the last administration, we developed a vocabulary for fitting many of those pieces in. However, the piece that still seems to be challenging policymakers is the dynamic of restraint.
For a while we lacked an idea of what the norms of cyberspace should be and what states might make of them. Now we're at a place where there seems to be some growing consensus over how rules and laws might apply to cyberspace, but with very little understanding of what the actual practical effect of those rules will be. We need to know: what forces of international relations will restrain the otherwise rational desire of states to use offensive cyber tools, particularly in large-scale attacks against each other?
The book does two things. First, it analyzes how international laws and norms might be applied to this rational desire. Second, it asks what we know, based on recent international history, about the actual efficacy of these tools. What do we know about whether or not restraint will be effective? I think, upon further analysis, it becomes clear that there is actually a rather narrow path to limiting states' recourse to offensive cyber tools.
précis: What does that narrow path look like? Is it more like the implicit norms of nuclear deterrence, or explicit agreements like the laws of armed conflict?
RDE: If you want the answer to that, you're going to have to take the book out of the library! To give a bit of a preview, I think rumors that deterrence has worked in cybersecurity might be somewhat exaggerated and based on a limited number of data points. Those who completely write off the idea of normative or even formalized regimes of control in this space probably do so at their own peril.
précis: That's a very good bumper sticker. You said that cybersecurity creates a unique set of collective action problems. What makes this domain different from traditional or conventional security domains?
RDE: I'm not completely sure that it is different. One thesis of my policy work over the last decade has been to apply the lessons of international history and security to this seemingly novel, but often quite understandable, dynamic of technology and cybersecurity.
There are certainly some new dynamics created by new technology. The deterrence dynamic is partly destabilized by the immediacy of action in cyberspace and by the capacity for long-gestation but immediate-effect actions like sabotage. What we call "operational preparation of the environment" raises very interesting questions about state intentions and culpability.
That said, I often caution against over-indexing to these new dynamics. It was once regarded as a truism that “on the internet, no one knows you're a dog” and that “attribution is impossible.” We've seen in the context of national cybersecurity that attribution can be swift and surprising. Take the case of North Korea’s hack against Sony, where the US government came out very quickly and identified who the perpetrators were.
I think the critical difference is that technology unifies the economic and security conversations in a way that is unusual. The same pipes that carry US Cyber Command's packets are also carrying your Amazon order and your tweets. This is fundamentally a shared infrastructure that creates a new dynamic, but one that is not entirely new. We have dealt with questions of commons before. We have dealt with questions of shared ownership before. All of those are areas that we have to consider as we think about this space, and we now have to do it from a foundation of technical understanding.
précis: One of the places that non-experts can most readily see tension over technology norms is in the question of regulating social media. What are the characteristics of a regulatory framework that's going to successfully moderate interests in openness and privacy?
RDE: Silicon Valley's recent attraction to being regulated by the government is a cop-out. Many of these platforms have a long history of content moderation. People often point to Section 230 of the Communications Decency Act, which grants internet platforms immunity from liability. But Section 230 has had exceptions to it for quite some time. There have always been exceptions for content like child exploitation, and in the early 2000s, provisions were put in place to create affirmative obligations when platforms became aware of copyright infringement.
Today, if you upload a video to YouTube where copyrighted music is playing even in the same room where you film, chances are the system will reject your video before it ever sees the light of day. Technically, computationally, much better content moderation is possible. So the question that we need to confront as a society right now is who should be dictating the community standards: the government or the platform? There's a very strong First Amendment argument that says it shouldn't be the government. The reason this question has become so difficult is that a small number of social media companies dominate most of the discourse.
If we were in an environment where multiple competing social media platforms were truly challenging each other for dominance, this conversation would look very different and it would not raise these almost theological debates about the nature of First Amendment jurisprudence. Instead it would be a narrower conversation about how to apply some carefully tailored regulations given the dynamics of different platforms. At the end of the day, no one wants to use a spammy platform. No one wants to be misled by foreign agents into believing something that isn't true, and even fewer want to be manipulated by foreign intelligence services into destabilizing their own democracy.
précis: Conducting policy-relevant research is a core goal for MIT and the Center for International Studies. In your experience of government service, what separated the kind of academic work that helped you do your job from the kind of academic work that was irrelevant to you doing your job?
RDE: I would not advise that leading researchers try to chase the headlines, because the pace of innovation in the private sector, and even in government articulation of policy, is often faster than basic research. You'd think the most policy-relevant conversation would be the one that weighs in on the news of the day, but research that starts by asking the most interesting questions is, to me, the research that actually ends up being the most enduring and the most policy relevant.
We have a challenge here in Cambridge, as they do in Palo Alto and anywhere else, in translating the significance of our work to public policy. Since coming to MIT, I've seen that the problem doesn't arise because the work coming out of labs and departments is not policy relevant. Our challenge is to communicate with the general public and with policymakers who are actually in a position to make smarter policy using research. Public policy that happens in a vacuum, without any engagement with thought leadership and academia, is usually devoid of history, context, and evidence. That's the most dangerous place policy can be. What we're trying to do in the Project on Technology, the Economy, and National Security is to create that connectivity—and to get both researchers and policymakers in the same room and speaking the same language.
That's what I saw was missing 10 years ago in government. That dialogue—or its absence—will ultimately be the differentiator between tech policy that seems clueless and that which seems prescient and able to confront both today's and tomorrow's challenges.