Twitter uses Dunedin conference to explain approach to political expression
Senior Twitter executives this week spoke at a Dunedin conference on social media and democracy to explain the platform’s approach to freedom of expression.
The high-level participation at the two-day conference by three of Twitter’s executives appears to be one of the first times that Twitter has engaged in such a public forum to explain its stance on free speech since the US Capitol was stormed on January 6.
The events in Washington D.C. that day precipitated the banning of then US President Donald Trump from both Twitter and Facebook and came against a backdrop of a wider discussion on the way technology companies are setting limits to expression on their platforms.
The Dunedin conference, ‘New ec(h)o systems: Democracy in the age of social media’, was held by the National Centre for Peace and Conflict Studies (NCPACS) at the University of Otago.
Twitter company representatives who spoke at the conference via Zoom included California-based Vijaya Gadde, who is Twitter’s General Counsel and Head of Legal, Policy and Trust, as well as the site’s Director of Public Policy for Australia and New Zealand, Kara Hinesley.
A third executive, Kathleen Reen, addressed the conference from Singapore. Reen is Twitter’s Senior Director of Public Policy & Philanthropy for the Asia-Pacific.
Gadde told the in-person audience of around 50 people – and around 100 others watching online – that freedom of speech online was only one of several values that needed to be considered.
“Everyone talks about free expression or free speech in the United States and they think of it as this sacrosanct right that has no barriers. But we know globally that’s not true. We know that around the world different people have very different ideas of what free expression means. We also know that globally people value other things. They value their safety, they value their privacy.”
Gadde, who has been a senior executive at Twitter since 2011, told the conference that the January 6 rioting in Washington had raised significant issues for the company.
“As a private corporation, we never felt that we would be in the position of affecting the public discourse by a world leader in the way that we obviously had to do that day, because of a fear of a further incitement of violence and what we were seeing, both on our platform and off.
“But it raises some really great questions about what the role of private corporations is in speech of this sort. Should we have different types of policies that apply to world leaders in terms of the latitude that we give them? Or should they be held to stricter standards because of the influence and the power that they wield?”
“How should we think about enforcement of our rules against world leaders? Should they not be enforced? Should we have some sort of interstitial [warning] which we’ve used in the past? Should they be eligible for permanent suspension?”
New Zealand’s experience had been instructive for Twitter, Gadde said.
“I think back to the attacks in Christchurch and a lot of the work that came from that, the collaborative work across all of our groups. We’ve used that in the United States, we’ve used that around the world.”
“We have to look at these problems holistically across our society. This is where your [organiser Sanjana Hattotuwa’s] research but also the government of New Zealand has been so open, looking at these problem spaces across the spectrum of players.”
In her remarks, Vijaya Gadde also focused on Twitter’s approach to moderation, saying that content moderation needed to go beyond a “binary” approach that either left material in place, or removed it altogether.
“With respect to misleading information, we have a range of remediation options. From removal for the worst, most harmful content that could really lead to severe offline harm… but we also have labels, because in some contexts, things are not quite as black and white. We want to encourage debate and dialogue. Removing that content can actually fuel more conspiracy theories or more distrust.”
Kara Hinesley, who is based in Sydney, told the conference that Twitter was experimenting with various new features on the platform. One new function, a prompt to remind users to read an article before sharing, had resulted in a 40% increase in users clicking on links before retweeting them.
Hinesley said Twitter was particularly interested in a concept called “algorithmic choice”, which would give users greater control over algorithms that determined what users see on the site.
“Some of the options would be to filter certain spam, or certain content you don’t want to see, or to promote things you really like, like video games or movies, or just conversations from people you follow.”
“The idea of algorithmic choice is something that we feel very passionately about and we’ll be trying to move towards in the coming months and years.”
But, responding to a question from the audience, Hinesley said she was sceptical of the idea of offering different versions of Twitter to different countries, depending on their views on free expression. She said such a move would risk a “splinternet… a Balkanisation of the Internet into walled gardens” and would be “antithetical” to Twitter’s approach that emphasised a “global public conversation”.
Hinesley said Twitter was already coming under pressure from authoritarian regimes, citing Russia and Myanmar as examples of countries in which access to Twitter had become more difficult.
Sanjana Hattotuwa, a PhD candidate at NCPACS and the event’s lead organiser, told the Democracy Project that Twitter had provided a small amount of financial support for the conference.
Hattotuwa’s research compared responses from Twitter users to the Christchurch mosque attacks and the Easter Sunday bombings in his home country of Sri Lanka, which took place within weeks of each other in 2019.
Twitter provided data support and access for Hattotuwa’s research after noticing the doctoral candidate’s independent research into responses by Twitter users in the days following the Christchurch attacks.
In the lead-up to the Dunedin conference, Twitter announced on its blog that it would partner with NCPACS via the platform’s ‘#DataForGood’ programme for a wider project that ‘would study the ways online conversations can be used to promote tolerance and inclusion instead of division and exclusion’.
Based on an analysis of tweets from Twitter’s @Policy account, the Dunedin conference appears to be the first academic forum at which the company’s representatives have spoken on expression issues since the January 6 storming of the US Capitol. And the NCPACS event appears to be only the second time globally that the company’s executives have spoken to an event about political freedoms this year, after a briefing to investors in late February.
The lessons of the Christchurch attacks – and the response by tech companies and governments to them – featured as a main theme throughout the conference, which began the day after the second anniversary of the attacks.
The NCPACS conference was officially opened via recorded video message from Jacinda Ardern. Organisers passed on a request by Ardern to conference attendees not to share her remarks, as she wanted the focus to be on the victims of the March 15 attacks.
On the conference’s second day, Paul Ash, the Prime Minister’s Special Representative on Cyber and Digital, spoke to the conference about his role in leading work on the Christchurch Call, which aims to eliminate terrorist and violent extremist content online.
Ash told the conference that the Christchurch Call, which now counts 48 countries and a range of companies and institutions among its signatories, contained baseline principles to prevent countries from using the Call as an excuse to clamp down generally on freedom of speech.
“We did put a floor in around international human rights law, fundamental freedoms and a free, open and secure Internet. That is intended to prevent governments doing precisely that. I’m not aware of a government having used the Call to do that to date.
“We’re probably getting to the point where the number of countries that can meet the human rights levels and which will treat the Call in the spirit in which it is intended is becoming reasonably limited.”
Ash said he wanted to grow the number of tech companies involved in the Christchurch Call, which initially focused on the biggest players. He said that the tech companies and civil society groups committed to the pledge were also important safeguards against governments abusing the Call.
“The second protection is that it is a multi-stakeholder community. The nature of bringing those different parties together means that if a government tried to do that, the companies would object furiously, and civil society – they’re there to hold us to account.”
Paul Ash was one of several government representatives to attend the conference in person.
Nicole Matejic, Principal Advisor Digital Safety at the Department of Internal Affairs, spoke on a panel with Twitter’s Kara Hinesley, along with Kim Connolly-Stone, the Policy Director for the InternetNZ industry group.
Others to speak in person at the event included NZ’s Chief Censor, David Shanks, and Thomas Beagle, the chairperson of the NZ Council for Civil Liberties.
Liz Thomas, a former MFAT staffer credited by Paul Ash for her role in drafting the Christchurch Call, also attended. Thomas now works for Microsoft as the company’s Regional Digital Safety Lead for the Asia-Pacific.
This article was originally published on the Democracy Project.