Diatribes of Jay

This blog has essays on public policy. It shuns ideology and applies facts, logic and math to social problems. It has a subject-matter index, a list of recent posts, and permalinks at the ends of posts. Comments are moderated and may take time to appear.

24 February 2018

“Random”: the Rise and Fall of Facebook, Twitter and Perhaps American Society


[For an update with comment on how gun massacres reflect our national dysfunction, click here. For a note on how to do good by doing well and taking profits, click here. For seven reasons for us to deploy small nukes, click here. For comment on our desperate need to save the Dreamers, click here. For my prediction of a coming stock-market crash, click here. For links to popular recent posts, click here.]

When Bill Gates was at the height of his personal power as a software baron and monopolist, he had an all-purpose word of disapproval: “random.” It was good for all occasions because it described the antithesis of everything he did, or tried to do, in his business at Microsoft.

Gates wanted everything about Microsoft and its software to be rationally planned, calculated and designed in advance. Later, he wanted to maintain and extend his monopoly of operating systems for personal computers. He wanted to plan every move effectively, leaving no room for competitors to escape.

Gates wanted to strike hard and fast while the world of business, law and politics barely understood what he was doing. Virtually no one outside the industry then knew what software was and what it could do—let alone how it worked.

So Gates had a clear field for most of two decades, until Steve Jobs, who did understand, came back to take over Apple again and make it the world’s most valuable company (for a time). During the interval between offering his first operating system for personal computers and succumbing (at least in supplying consumers) to competition from Apple, Gates’ well-planned control-freakism made him the world’s dominant software baron and the world’s richest man.

The antithesis of Gates’ personal style was “random.” If a proposal or project inside Microsoft wasn’t planned and engineered down to the tiniest detail and the most obscure consequence, Gates would use that word. Employees who heard it knew they had failed to meet his standards, and they would wilt. It was a word that could kill projects, doom software under test, break egos and destroy careers.

Facebook and Twitter are different. Randomness is part of their design.

Facebook is the extreme example. It was designed to be random. It’s supposed to duplicate on screen the random process of unstructured human social interaction. It’s kids gathering on a playground to play. It’s adults gathering at a break or after work at a water cooler or in a parking lot to “talk story.” It’s small-town racists gathering uninvited as they pass by a Ku Klux Klan rally or a lynching.

Facebook lets random social impulses prevail, thereby duplicating the normal process of unstructured human interaction. Hence the term “social media,” first applied to it.

Facebook’s primary software feature—each account’s “Wall”—follows this paradigm of randomness closely. An announcement of a friend’s pregnancy or birthday can follow outreach by a long-lost friend or co-worker. Then can come a commercial advertisement, whether sent directly by the advertiser or forwarded by a “Friend.” In the middle of it all can come a bit of “fake news” prepared by trolls without borders, whether working for Vladimir Putin or just probing your computer’s defenses in the hope of stealing your identity or your money.

That’s why Facebook repels users like me, who are trained in careers of discipline like science, engineering, law or medicine. Facebook’s Walls have no organizing principle. All they offer is the bare possibility of searching electronically for something you think you once saw in a particular thread. It would take great effort merely to organize a Wall into basic conceptual categories such as “long-lost contacts,” “birthdays and other special days,” “friends,” “family,” “distant relatives,” “commercial advertising,” and “propaganda.” Very few Facebook users, if any, make that effort.

Even if you allow only a few “Friends” to post, the volume and randomness of what appears on your Wall can quickly become overwhelming. If you try to follow it all, you can succumb to the depression of a Sisyphean task. If your developing adolescent ego craves “likes” and abhors rejection, Facebook can crush you.

We all know the apocryphal “truth” that everyone on Earth, and even everyone in human history, is related to you through no more than seven degrees of separation. Well, once you permit Friends of Friends to post on your Wall, you are inviting communications from at least two of those seven degrees. Further degrees may appear as Friends of Friends re-post things sent by their own Friends of Friends.

This last effect is hardly unique to Facebook. E-mail alone can cause it unless you exercise discipline in culling your mail and deleting or filing duplicative messages. But unlike Facebook, Google has given its users a way out.

Google lets users of its Gmail service “organize” messages into real communications, social-media postings and “promotions” (both political and commercial). You can set Gmail to file messages in these categories automatically, as they come in. (In my own use of Gmail, I skim my “Promotions” and “Social Media” inboxes once or twice a year, if at all.)
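Gmail’s real classifier is proprietary and far more sophisticated, but the idea of filing incoming messages into categories can be sketched with a toy keyword filter. The category names and keywords below are illustrative assumptions, not Gmail’s actual rules:

```python
# Toy sketch of automatic message categorization, in the spirit of
# Gmail's inbox tabs. NOT Gmail's real algorithm -- just a minimal
# keyword-based filter with made-up category keywords.

CATEGORY_KEYWORDS = {
    "Promotions": ["sale", "discount", "offer", "unsubscribe"],
    "Social": ["friend request", "tagged you", "liked your"],
}

def categorize(subject, body):
    """Return the first matching category, else 'Primary'."""
    text = (subject + " " + body).lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "Primary"

inbox = {"Primary": [], "Social": [], "Promotions": []}

def file_message(subject, body):
    """File a message into the inbox category chosen by categorize()."""
    inbox[categorize(subject, body)].append(subject)

file_message("Huge holiday sale!", "50% discount on everything")
file_message("Ann tagged you in a photo", "")
file_message("Meeting tomorrow", "See agenda attached")
```

The point of such filing is exactly what the essay describes: promotional and social chatter lands in bins you can safely ignore for months, while real communications stay in view.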

The randomness of messages on one’s Wall can make Facebook emotionally attractive to many users, especially those who don’t have much going for them otherwise. Randomness offers novelty. If you feel your life is otherwise boring, routine or humdrum, so much the better. Every day something new can appear on your Wall.

Even if your circle of “Friends” is narrow, you never know what might appear there. And if you permit Friends of Friends to post, as many users do, what appears there expands exponentially. You never know what you might get, or from whom. Every day can be a special day.
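The expansion is easy to quantify with back-of-envelope arithmetic. Assuming (unrealistically) that every user has the same number of friends and that friend circles don’t overlap, the figures below are upper bounds, not real Facebook statistics:

```python
# Back-of-envelope illustration of how reach grows when Friends of
# Friends can post. Assumes every user has the same number of friends
# and no two friend circles overlap, so these are upper bounds only.

def potential_posters(friends_per_user, degrees):
    """People within `degrees` hops who could reach your Wall."""
    return sum(friends_per_user ** d for d in range(1, degrees + 1))

# With 100 friends each:
assert potential_posters(100, 1) == 100      # direct Friends only
assert potential_posters(100, 2) == 10_100   # plus Friends of Friends
```

Opening the Wall to just one more degree multiplies the pool of potential posters roughly a hundredfold, which is why “you never know what you might get, or from whom.”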

Thus does the notion of accepting unexpected messages from strangers insinuate itself into the most vulnerable minds. Adolescents and youth are particularly susceptible, as they find their painful way into comfortable social circles in a world of seven billion people, many of whom could (at least in theory) contact them directly or indirectly on Facebook.

The randomness of Facebook postings can have at least three undesirable consequences. First, the volume of postings can quickly become overwhelming. It can draw time from more productive pursuits, such as work, study or real friendships in the real world. The desire to keep up with the endless flow can make Facebook a compulsion or obsession, leading to a virtual treadmill, depression and even suicide.

Second, the randomness and lack of control can permit and even encourage all sorts of gang behavior, including bullying. When students in an ad-hoc Facebook ring bully a victim into depression or suicide, it’s just a virtual lynching—a crime made much easier by being on line and possibly anonymous.

Finally, the atmosphere of acceptance and the craving for “likes” make users of Facebook particularly susceptible to advertising, propaganda, and fake news. After all, every posting comes at least from a Friend of a Friend, doesn’t it? When a bit of fake news falls right next to an announcement of a good friend’s birthday, pregnancy or fatherhood, mere proximity enhances its credibility.

However false or misleading, messages on Facebook have three indicia of reliability that mere rumors don’t have. First, they come from or are blessed by Friends, or at least Friends of Friends, even if they ultimately derive from the darkest and most sinister source. Second, they are in writing or (if audiovisual) in tangible form. You can read them or play them again and again, as obsessively as you like, using your own powers of imagination—or your own conscious or unconscious desire to believe—to render them credible. Finally, if their ultimate source is professional persuaders—whether advertisers, professional trolls or Russian spooks—they have far more intrinsic persuasive power than rumors tossed around orally and off the cuff by a group around a water cooler or in a playground or employee parking lot.

So if Putin’s trolls using Facebook swung our 2016 election (as appears probable but remains unproven), it’s not because Americans are especially gullible. It’s because they have encountered individualized propaganda in a medium, manner and means never before experienced in human history, and fraught with so many random flaws.

The unintended consequences of Facebook’s randomness, coupled with the Internet’s near-capsizing of our professional news media, have upended the communication systems by which Americans once understood the world around them. If we are not careful and clever, they may soon upend our democracy and way of life.

In this regard, Twitter is less insidious than Facebook but still bears watching. It’s less random because it has a simple and powerful organizing principle: hashtags. Users organize their Twitter reading by hashtag and respond to Tweets the same way.

Twitter also makes it harder to disguise or elide the true origin of a Tweet: a re-Tweet discloses its original author, unless the forwarder copies or paraphrases the original Tweet to make it his own. (Russians and Russian bots reportedly did a lot of this.)

But Twitter can be even more insidious than Facebook in one respect. Its character limitation, now 280, renders formal reasoning almost impossible, let alone debate. All you can do in a Tweet so short is state your conclusions or assertions and the most cursory and oversimplified reasons for them.

This fact makes Twitter more a way to reinforce a reader’s preconceived opinions than to change minds. That’s why advertisers and propagandists seem to have taken to it less than to Facebook. Sometimes they retweet messages they favor, as Putin’s trolls often did with Tweets by real Americans.

Nevertheless, Twitter’s very brevity and hashtags lend themselves to something that Facebook’s randomness prevents: the assertion of authority. Twitter is a perfect medium for those in positions of authority or power, whether based on public office, wealth, or celebrity. It’s an ideal medium for orders and pronunciamentos, devoid of research or persuasive reasoning.

Donald Trump understands this advantage instinctively. That’s why he’s made Twitter his principal means of communicating with the public. It’s ideal for an authority figure like the president, precisely because its power relies on the Tweeter’s authority, rather than any cogency of reasoning—which no one could compress into 280 characters.

If the emperors of ancient Rome had had Twitter, rather than papyrus scrolls and town criers, they would have used it just as Trump has. The difference, of course, is that two millennia later we are supposed to have a society of laws, not men. By Tweeting just as a Roman emperor might have issued decrees, Trump reinforces his authority and influence among those who favor him.

He also instills fear in those who oppose him. Even the most sanguine of us fears that his Tweets, willy-nilly, might somehow become the law, just as the decrees of an ex-corporal and house painter did in Nazi Germany.

Despite their differences, Twitter and Facebook have one big thing in common. They are completely new under the Sun. Our species has never had anything like them, ever. They exploit the oft-predicted but until recently little-used ability of the Internet to handle many-to-many and many-to-one communication.

Letters and the telephone gave us one-to-one telecommunication, which we have had for well over a century. Broadcast radio and television gave us one-to-many telecommunication. In so doing, they obviated the “whistle stop” political speeches along railroad lines common in the nineteenth century. So they, too, upended politics in their times.

But nothing prepared us for the qualitatively different and far more powerful ability of millions (including Putin’s trolls) to put messages on the Facebook Walls of millions of citizens. Nothing has prepared us for a president who collects instant feedback from his citizens and his partisans through “likes” of his Tweets.

Before any pollster can even think of proper questions to ask, let alone write them, prepare a proper statistical sample, and telephone the questions, Trump has a good read on how his public reacts to each of his Tweets. And in theory, though probably not in practice, someone in Trump’s position, or an assistant, can read at least a sampling of response Tweets, thereby getting to know some reasons for the public’s reaction. This new many-to-one communication capability may some day render pollsters and opinion monitors obsolete; at the very least, its speed will push them to offer greater precision and thoroughness than Twitter’s “likes” can provide.

Many-to-many and many-to-one communication are absolutely new, let alone with the breadth and instantaneity of Facebook and Twitter. The much-vaunted mavens of these firms gave little, if any, thought to their societal consequences, let alone the unintended ones. They were just interested in innovating and “monetizing” the results of their innovation, willy-nilly.

Yet today the unintended consequences of the randomness of what they wrought loom far larger than anything their creators intended. Those consequences include new causes of mental illness among our youth and new openings for fringe groups, foreign spooks and trolls to subvert our democracy. They have rendered large portions of our population more vulnerable to lies and propaganda than ever before. It’s hard to imagine how Trump could have become president without them.

If this process continues unabated, our society could dissolve into warring clans, divided not as much by race, religion, ethnicity and national origin as by absolute conviction in differing views of “the facts,” truth and reality. We and perhaps some of our democratic allies could come to resemble Matthew Arnold’s “ignorant armies [that] clash by night,” while authoritarian societies such as Russia and China deliberately exploit and inflame our weaknesses as Putin has.

There are countermeasures we can take. But they will not be easy. Our First Amendment permits no censorship, and our society could be gone by the time we figure out how to amend it without destroying its benefits, and then implement a solution.

The only apparently durable solutions are education, strict identification of sources, and sources of non-fake news that enjoy universal trust and respect. Until we implement those solutions, we must cope with a level of randomness in our society and our thinking that we have never experienced and never expected.

From our Founding, we distinguished ourselves as a sensible, practical people motivated by Reason, not ideology, religion or superstition. Now we must restore that distinction under the threat and reality of randomness. And we must do so at a time when our entire species is struggling with the stresses of global warming, increasing oil scarcity, refugees from war and climate change, possible consequent food scarcity, overpopulation, and nuclear proliferation.

If we humans are to rise to these unprecedented challenges, we Americans will need both great thinkers and great leaders. Unfortunately, neither Trump nor anyone in his Cabinet appears ready, let alone able, to rise to the occasion. And as the enfant terrible Zuckerberg slowly comes to comprehend the true scope of the damage he has done, how long will it be before he or anyone else fixes it?

Endnote. Facebook is not the only current example of many-to-many communication on the Internet. Others are the comment sections of many on-line news-and-opinion sources and the customer reviews that Amazon pioneered and that now appear on the websites of many sellers of goods and services.

Yet for three reasons Facebook is by far the many-to-many communication system most susceptible to abuse. First and perhaps most insidious is Facebook’s random “organization.” When a reader sees an online comment on a newspaper article or editorial, or a review of a product or service on a seller’s website, she knows in advance the specific subject matter of the comment or review. In fact, the website’s very format and navigation are such that the reader is likely looking for exactly that. In contrast, Facebook’s random organization surprises users with messages from advertisers, trolls and propagandists hidden amidst innocent personal missives and photos from friends, relatives and acquaintances. It thus catches them off guard, with their critical faculties sleeping and their gullibility at a maximum.

The second vulnerability to abuse is format. Facebook permits any format for Wall messages, while comment and review pages typically support only limited formats. Thus an advertiser or troll can make a Facebook post in video and make it look official or authoritative. It can even copy the format, typeface and style of a real newspaper, such as the New York Times. Or it can make its fake news resemble a legitimate news source, even if that “source” is entirely fictitious. In contrast, news-comment pages permit only text (and some even limit hyperlinks), while product-review pages permit text with limited graphics and video supporting the text.

The third vulnerability of Facebook to trolling by advertisers and spooks is the total lack of control over who may post (in unrestricted accounts) and who may become a “Friend” (in restricted accounts), as well as its failure to require truthful disclosure of who has posted. In an attempt to limit product reviews to authors who have actually purchased and used the products, Amazon notes “Verified Purchase” prominently below the names of authors. Most newspapers allow readers to complain of abusive or inappropriate comments, and some have software algorithms that reject comments with profanity and abusive language. Their improvements in this regard are ongoing. (For a time, the New York Times had human moderators read every comment, correspondingly elevating the level of dialogue.)

In contrast, Facebook originally had nothing of the kind. It left what could be posted entirely to each poster, apparently relying on a “relationship,” family, social or otherwise, between poster and reader. Only now, as Facebook has become a prime target for advertising and propaganda, is it trying to put in place ad hoc (and often ineffective) limits on posting.

Originally Facebook was a site by, for and of kids, set up for their juvenile social interactions. Today, a decade later, its flexibility, disorganization and relative lack of rules have made it a medium of choice for advertising, public relations and political propaganda of all types. It carries the rants of Islamic extremists, domestic extremists, political operatives, and foreign spooks. Whatever rules and organization might be put in place now would amount to locking the barn door after the horse has bolted.

And yet we must try, whether by goading or shaming Facebook’s management or directly by federal regulation. Like water rushing downhill through an orifice, the world’s bad actors have taken Facebook as a hole in the Internet’s thin protection of truth, decency, and civilization. One way or another, and no matter how hard the technical or political challenges, we must plug that hole to ensure the survival of our democracy and our civilization, not to mention decency.
