Ron Deibert - Director, Citizen Lab

Ron Deibert is director at the University of Toronto’s Citizen Lab, which conducts research around technology, human rights and global security. Much of the organization’s work concerns the ways in which governments around the world are controlling people’s use of the internet, as well as using the internet to monitor people.

Human rights are closely tied to the digital communication technologies that connect the world today, Deibert says. Those rights, he says, are affected by the use of things like spyware, malware and internet filtering — topics central to the many books, chapters and reports that Deibert has authored.

Deibert says he’s optimistic about the increasing public awareness of these issues, but thinks that in practice the “trend lines are definitely in the opposite direction” and people’s freedom online is facing tough challenges.


CyberScoop: What is the connection between people’s lives online and human rights, for those who may not be very aware of it?

Ron Deibert: I think today it’s fair to say that the exercise of human rights is inextricably bound up with digital technologies and the communication technologies that surround us. In other words, access to information, freedom of speech and even freedom of assembly, privacy — all of the main human rights as outlined in the Universal Declaration of Human Rights — are exercised today through digital technologies. It’s largely inescapable. So protecting human rights, preserving them and extending them means making sure that the digital environment is structured in a certain way. And it also means that we need to be mindful that whatever violations of human rights are going on are likely to have some digital component to them, which is why we do the type of research that we do at the Citizen Lab.

CS: What do you think are some of the biggest threats to those rights online?

RD: Well, I mean, it’s across the board. Just about every one of those areas is the focus of our research here at the Citizen Lab. And there’s certainly no end to the problems and puzzles that we’re exploring. We just released a report on our analysis of how image filtering on WeChat, a very popular Chinese social media application, works. When the internet first emerged, there was less structure around it and the rules were largely informal and based mostly on principles having to do with the engineers and some of the early internet users. As it’s grown, as it’s become more important to all aspects of economics and society and in government, the pressures on the internet have increased.

This has been especially pronounced around concerns with cybersecurity. And so governments have stepped up all of their efforts to regulate the internet. Of course, many governments are either authoritarian, or agencies within governments — even liberal democratic governments — have authoritarian tendencies. This has meant that the internet is no longer guaranteed to be a forum for free expression and access to information and privacy. It’s really become something else.

Probably the biggest factor around this is the personal data surveillance economy that we live in, which has really transformed the nature of the communications environment. That really begins with social media and the idea of offering essentially free services in exchange for fine-grained surveillance of users in order to target them with advertisements. That’s led to really an economic model based on mass surveillance. That’s certainly one of the most important threats to human rights as it relates to privacy, but it also overlaps with what governments have been doing over the last 20 years, which is more or less the same thing: using digital technologies to monitor what they perceive to be threats.

And of course, pressures to regulate content and access to information on the internet have been growing as well. Internet censorship is now practiced worldwide. It’s a very profitable business for many companies that provide internet filtering services. And I would add to that the broader cybersecurity marketplace as a whole, which has equipped policymakers with a range of products and services and tools that allow them to do things they never before imagined in terms of either projecting power internationally or monitoring their populations domestically.

CS: What do people need to know about protecting their data and online presence and not falling victim to espionage or surveillance campaigns like the ones that Citizen Lab often writes about?

RD: Well, one of the things I’d definitely point people to is our Security Planner tool, which is at securityplanner.org. We created this tool largely out of frustration: we believed there were some very simple, small steps that many people could take to improve their digital hygiene, and these weren’t being effectively communicated to people. So we created a portal that asks users a few simple questions and then gives them a set of peer-reviewed recommendations, tailored to their own habits and the devices and technologies they use, in order to improve their digital security.

But more broadly, I think that we need to encourage people to look beneath the surface. And by that I mean not take for granted the technology that surrounds us, whether it’s in terms of signing off on terms and conditions or not really understanding how an application works or just deferring to the promises of companies or governments in terms of what they do with our data. I think there needs to be a hypervigilance around this in terms of what I call “lifting the lid” on the internet.

Obviously, not everyone can do this. Not everyone has the skills or the time to do this, which is why I think the type of work that we do at the university is critical for a free society. We need rigorous, evidence-based research that’s not afraid to pry into corners that powerful actors would rather we not pry into, in order to identify risks around privacy, surveillance, et cetera.

CS: Is there anything that you’d point to as something that has significantly improved in the past year or two — whether in awareness of these issues, or in services and solutions that protect people?

RD: I think there are some encouraging things out there. I think it’s encouraging when I see that there are other groups doing research such as ours, and it’s encouraging when the reports that we write — for example, about targeted espionage against civil society or the abuse of commercial spyware — receive very high-profile media attention. We’ve been very lucky to have many of our reports covered on the front pages of the New York Times or the Washington Post. So it’s gratifying to see that type of awareness raising going on. I think it’s encouraging that some companies have taken up concerns around the ways in which their technologies and infrastructure can be abused for state-sponsored cyberattacks.

So things like the Digital Geneva Convention that Microsoft has been putting forward, or steps that Google has taken to warn users about possible state-sponsored attacks on their email, I think are encouraging. And there is a community of practice worldwide that I believe cares about these issues. I’m not sure if it’s growing or not, but it’s still there. That said, however, I think that the trend lines are definitely in the opposite direction, and I could list dozens of things that worry me about where things are headed. Maybe just to give you one example, at the top of that list is the influence and growth of China and the impact that China will have in terms of standard-setting and norm-setting. [China’s] very ambitious information control strategy, I think, is showing that authoritarian governments can have their cake and eat it, too — in other words: reap the economic benefits of digital technologies while also perfecting ways to contain, if not eliminate, threats to the regime that arise through social media and other digital technologies.
