There is an entire cottage industry dedicated to complaining about Facebook, but vanishingly little of the ink spilled about Mark Zuckerberg’s social network concerns actual suggestions for how to fix it. Facebook has real problems, but too many of its critics are content to just accuse the company of being an evil Bond villain and move on their merry way. So it’s nice that Farhad Manjoo has published a great piece rounding up some of the proposals from experts about what to do about Facebook.
The suggestions run the gamut from good to bad, but none of them strike me as very effective, which is to say, none of them address the actual core complaint people have about Facebook: that the newsfeed surfaces content from your extended network that drives you insane.
At its heart, this is both the basic appeal of Facebook and its main sin. It is a discovery tool for content of all stripes: photos of your cousin’s kids, but also your coworker’s thoughts on the news. This discovery feature is how misinformation spreads, it’s how people end up in extremist Facebook groups, it’s how polarization reaches fever pitch, but it’s also the whole game. It’s meeting new people, reading new things, finding out that there are niche communities of people like you out in the world. But a lot of those benefits are baked in by now, taken as the default, and for the last five years, as this country has descended further and further into mayhem and acrimony, the negatives of the newsfeed have gobbled up all of our attention.
Most of the proposals to fix Facebook wouldn’t do anything to address this. Take one of the most common proposals: break it up.
Facebook owns Instagram and WhatsApp. If Facebook were forced to spin them off, it would definitely make Facebook less profitable and less powerful, but it wouldn’t change the algorithmic issue at the core. Instagram is still going to be filled with content that might lower self-esteem. Facebook is still going to be filled with political content that drives us mad. WhatsApp is still going to be a haven for dark social misinformation.
Similarly, regulating ‘surveillance capitalism,’ another proposal Farhad touches on, would effectively reduce Facebook’s profits without improving the consumer experience. It would basically ban Facebook from using all the data-tracking tools that make its ad targeting so effective and profitable. That would without question reduce Facebook’s (and Google’s) ability to make such massive profits, but ads really aren’t the big problem for Facebook users. Ads can be used for good or ill. Ad targeting played a role in the spread of misinformation in 2016, but even then it was a drop in the bucket compared to the organic spread of misinformation.
And of course, not all ads are bad, right? Every company, nonprofit, and good organization you know of also uses Facebook’s ad targeting in ways you probably aren’t upset about.
There are millions of people in the world who are very concerned about digital privacy as a rule. But they are actually a rather small percentage of online users. These types of controls would definitely make them happier, but again, that’s not the real mass complaint the average Facebook user has.
In his piece, Farhad mentions one thing Congress could do to address some of this: create an agency to monitor and govern the way everyone’s private information is handled. That sounds like a fine idea to me. But I have sincere doubts about how consequential it would be.
I don’t think that either of these proposals, breaking the company up or regulating its data collection, is terrible. The former seems sort of punitive to me, which makes me uncomfortable, and both are ultimately doomed to fail at changing anything anyone actually cares about. But I don’t really mind; they are totally reasonable proposals.
But one of the other oft-bandied proposals is a bad idea: Place limits on its content.
Both the right and left have versions of this they love.
“Proposals from Democratic lawmakers,” Farhad writes, “tend to call on tech companies to delete or demote false content in order to retain Section 230 immunity; proposals from Republicans generally do the opposite, threatening to undo immunity if tech companies censor content ‘unfairly’ or ‘in bad faith.’”
Democrats recently introduced a bill to ban health misinformation on Facebook as determined by the Secretary of Health and Human Services. Empowering a cabinet official to dictate what a company like Facebook can have on its site seems like not a great idea to me!
“Have the senators forgotten that just last year,” Farhad asks, “we had a president who ridiculed face masks and peddled ultraviolet light as a miracle cure for the virus?”
Then we get a proposal that I think is actually pretty good, though I don’t think it would address the core problem: Force Facebook to release internal data.
None of the tech companies want to share proprietary data about how the sausage is made, but they long ago reached a size and influence where it is totally fair, in the interest of the country, for Congress to compel them to release that data.
Rashad Robinson, president of the civil rights advocacy group Color of Change, favored another proposed law, the Algorithmic Justice and Online Platform Transparency Act, which would also require that platforms release data about how they collect and use personal information about, among other demographic categories, users’ race, ethnicity, sex, religion, gender identity, sexual orientation and disability status, in order to show whether their systems are being applied in discriminatory ways.
Algorithms at Facebook and elsewhere have been caught over and over with biases that can lead to discrimination. That’s why Facebook was forced to restrict how housing ads can be targeted. It seems totally reasonable to make platforms show how their data collection is being used and whether it’s being used in ways that violate civil rights.
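To make that concrete, here is a minimal sketch of the kind of audit that released platform data would enable. Everything in it is hypothetical: the field names, the toy numbers, and the use of the four-fifths rule as a threshold are my own illustrative choices, not anything Facebook actually exposes.

```python
# Hypothetical sketch: given per-impression delivery data broken out by
# demographic group, check an ad for disparate impact. Field names and
# the four-fifths threshold are illustrative assumptions, not any real
# platform's schema.
from collections import defaultdict

def delivery_rates(impressions):
    """impressions: list of dicts like {"group": "A", "shown": True}.
    Returns the share of ad opportunities where the ad was actually
    shown, per demographic group."""
    shown = defaultdict(int)
    total = defaultdict(int)
    for imp in impressions:
        total[imp["group"]] += 1
        shown[imp["group"]] += imp["shown"]  # True counts as 1
    return {g: shown[g] / total[g] for g in total}

def disparate_impact(rates, threshold=0.8):
    """Flag groups whose delivery rate falls below `threshold` times the
    best-served group's rate (the EEOC-style four-fifths rule)."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Toy data: a housing ad delivered unevenly across two groups.
data = (
    [{"group": "A", "shown": True}] * 90 + [{"group": "A", "shown": False}] * 10
    + [{"group": "B", "shown": True}] * 60 + [{"group": "B", "shown": False}] * 40
)
rates = delivery_rates(data)
print(rates)                    # {'A': 0.9, 'B': 0.6}
print(disparate_impact(rates))  # {'B': 0.6} -> worth investigating
```

The point is just that once delivery data is broken out by demographic group, checking for disparate impact is simple arithmetic. The hard part is getting the data released at all.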
Getting a look under the hood of the platforms would be a great first step in seeing just how we could go about regulating tech in a way that makes sense.
Robby Soave proposes “we do nothing.” This is almost certainly what is going to happen lol. Robby thinks that the media has, as Farhad puts it, “become too worked up about the dangers posed by Facebook.”
I agree with Robby about a lot of this stuff, but I think he goes too far there. The media definitely blames Facebook for too much and credits its executives and engineers with a god-like power they do not possess, but it is a simple fact that Facebook is a hugely influential part of society. It is, along with Google, a structural beam of the modern web. Robby thinks that the result of “doing nothing” will be that Facebook collapses. But that is wrong. Facebook isn’t going anywhere, which is precisely why getting Facebook—and the other platforms—right is a hugely important societal goal. This is why I think progressives are erring when they try to shame people out of going to work at Facebook. If you care about this, you should want Facebook to hire the best people. Ultimately, changes that really influence the way people engage with social media are going to come from the companies themselves, and from the engineers who can improve these products.
Now, if you are someone who believes that Facebook is evil and has no intention of solving these problems, then you’re going to scoff at what I just said. But I don’t think that lol.
This brings us to the best suggestion in Farhad’s piece: Increasing digital literacy.
To my mind, this is the only actual solution, and it is also probably what is going to happen very slowly after we “do nothing.”
Social media is so new. It takes a long time for our brains to adapt to stuff like this. We’re still in the infancy stage of living in the small world of social media, where everyone is interconnected, subjected to more content than previously imaginable, and where our reactions to that content are themselves content.
The main problem with Facebook is that it shows us, every day, a ton of people we don’t like, people we wouldn’t like in any situation but whom we especially don’t like when they are ranting on a barstool. Not everyone needs to like each other. This isn’t a kindergartner’s birthday party. But social media creates this fake world where you have to see these people all the time, and they’re constantly committing some violation of your norms, and you are brought closer in your outrage to your own group, and as that happens, you enter into a vicious, isolating dynamic of cognitive bias.
We’re not going back to a quieter, more deliberate life lived by a pond. But eventually, we’ll get a bit more used to this.
Think about the mental allowances you make when you are having a conversation in real life. People in extemporaneous conversation talk dumb. They say “like” a lot. They garble facts. They ramble aimlessly. They are too confident or too insecure. They excitedly say things they don’t mean. They make jokes you don’t find funny. But as a listener, you know this, and your brain edits them, attempting to derive from the messy garbage dump that pours out of their mouths the actual meaning in their heads. When you read a written document, an article, or an email, however, you are less likely to be so forgiving in your interpretation, because there is an element of deliberateness built into putting something to paper. “Well, they must have thought about it a little more than if they just said it unthinkingly on a barstool.” But social media is way more like chatting unthinkingly on a barstool than it is like clacking away at a draft on a typewriter and making revisions. Eventually, we will all get a bit better at making that allowance.
This isn’t going to stop people from seeing dumb things in their newsfeeds that drive them crazy, but it will make them less affected by what they see.
However, it could take a billion years! It would be nice if it happened sooner rather than later, and if there are digital literacy programs or ad campaigns or something that would help move this along, then those seem like things we should totally do.
In general, I think the media should be wary about demanding specific changes to these platforms, as opposed to pointing out problems with them, because specific demands stand a real chance of making things even worse. Consider 2018. After Trump’s election, a large part of the media decided that Facebook was to blame. Facebook responded by making the biggest change to its algorithm it had ever made. The result was that the newsfeed got worse. This stuff is just hard!
But “this is a hard problem that has to do with not only tech regulations and algorithms but also human nature” is not a very satisfying answer.
Everyone just needs to take social media a little less seriously. Which brings me to my proposal: Put lithium in the water supply.
A solution I would like to hear people explore more: get rid of the newsfeed. The newsfeed is the problem. If Zuckerberg really wanted to make Facebook about connecting with people you love, then he should be OK with getting rid of the newsfeed and returning Facebook to a friend network.
That or “ban all external links.” Just make Facebook a place where, like Instagram, you can’t post links.
Whenever I hear “ban health misinformation,” I think of how much of a fucking disaster that could have been for a place like Flint. Most people know there was a water crisis in Flint; very few know that it was a crisis caused almost exclusively by health authorities.

The basic story is that Flint was placed under emergency management, which put all the municipal government’s powers in the hands of Darnell Earley. Earley, in an attempt to save money (which was his job), switched Flint from a water service agreement with the Detroit Water and Sewerage Department to the not-yet-completed Karegnondi Water Authority. After the contract was cancelled, Flint started treating its own water from the Flint River for the first time in a LONG time. This would have been fine, but in another attempt to save money, Earley petitioned the Michigan Department of Environmental Quality (DEQ) to allow Flint to forgo corrosion control. The DEQ agreed and in turn began testing for lead regularly. The thing is, they were testing for lead incorrectly. The DEQ, the entity solely in charge of saying whether water is safe for human consumption, REPEATEDLY told residents of Flint that their water was safe, that they were being alarmist, and so on. It was only because citizens actively ignored the prevailing health body that the problem was ever discovered.

So yeah, maybe we shouldn’t just let prevailing health bodies decide what the truth is without any room for further discussion.