In his introduction, Kaye describes how the internet has changed over the past decade, with content once found on individual blogs giving way to social media. He argues that this shift has made the web more like television: centralized, sensationalist, and inward-looking. The transition from a horizontal web to a vertical one has also made it friendlier to manufactured amplification, censorship, disinformation and propaganda.
Kaye grounds his discussion in the U.N. Universal Declaration of Human Rights. Article 19 states: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers." Because content that incites hatred or discrimination often sits at the margins of the law, governments are not in a clear position to order its removal, and so they ask the platforms to do it instead.
The social media companies themselves review material for acceptability. Given the tremendous volume of content, much of this process is automated, with manual review reserved for possible violations of a platform's terms of service. Kaye looks more closely at how this process works, with examples from YouTube, Facebook and Twitter. He finds these processes inadequate: the rules for content removal are not clear, and anyone unhappy with a removal decision finds it nearly impossible to reach someone to discuss it.
Kaye describes a July 2018 hearing of the U.S. House Judiciary Committee, seeming to belittle the testimony of the YouTube and Twitter representatives; he felt the hearing fell short of an honest investigation, focused as it was on narrow partisan questions. He notes that the U.K. Parliament's Digital, Culture, Media and Sport Committee produced a serious report in 2018, but surprisingly does not examine its findings. The U.S., Europe and India all have laws that exempt platforms from liability for the content their users generate, and also shield them when they remove content.
In 2017, Germany enacted the Network Enforcement Act (NetzDG), which specified penalties for platforms that failed to remove content violating particular provisions of German law. Free speech proponents objected on the grounds that it could result in takedowns without public involvement, and that the financial penalties would push platforms to remove any marginal content. In 2015, the European Commission created a code of conduct that seemed to restate what the companies already did; it was criticized for the lack of an independent arbiter, leaving individuals subject to excessive takedowns without remedy. The author also looks at Singapore and Kenya, which have laws penalizing those who distribute false information. These laws turn out to be used more often to protect the government and politicians than to serve the interests of citizens. Europe has the Right To Be Forgotten (RTBF), whereby someone who feels that content interferes with their privacy can request that the URL be delisted from search engine results.
In the chapter "Arbiters of Truth" the author looks at disinformation, suggesting that the issues are knowing the impact of disinformation and what platform owners and government should do to police such content. He notes that botnets that amplify disinformation could be detected, that tools could make the source of information clearer, and that fact-checking organizations can assist in screening material. Kaye and two others issued a joint declaration on disinformation, the primary recommendation being that politicians should start telling the truth, which has surprisingly gone unheeded.
The author apparently misses the point that information is often not inherently true or false; it typically carries some degree of uncertainty and spin reflecting the position of its source. Moreover, its acceptability depends on how the content fits with the viewer's social values.
Kaye's conclusions largely revolve around the idea that rules for speech should be made by "political communities," not private companies. He proposes decentralizing decision making, providing radically better transparency, and instituting industry-wide oversight and accountability.
It is evident throughout the book that the author is unhappy with the content filtering done by the social media companies. While he acknowledges that they must make fine distinctions, he accuses them of taking a seat-of-the-pants approach to content filtering, yet dismisses rule-oriented approaches as bureaucratic. He is clearly uncomfortable with self-policing, but is also wary of regulation, which can easily lead to restrictions on free speech.
Overall, Kaye does a good job of showing that too much freedom produces content unacceptable to many, while regulation can interfere with freedom of speech. His solution mainly revolves around establishing a variety of councils and committees with greater public input.