Sunday, March 25, 2018

links for the weekend with a facebook theme


https://thebaffler.com/latest/techies-who-said-sorry-silverman

...
The fundamental underlying problem is the system of economic exchange we’re dealing with, which is sometimes called surveillance capitalism. It’s surveillance capitalism that, by tracking and monetizing the basic informational content of our lives, has fueled the spectacular growth of social media and other networked services in the last fifteen years. Personal privacy has been annihilated, and power and money have concentrated in the hands of whoever owns the most sophisticated machine to collect and parse consumer data. Because of the logic of network effects—according to which services increase in value and utility as more people use them—a few strong players have consolidated their control over the digital economy and show little sign of surrendering it. [emphasis added]

It wasn’t supposed to be this way. For years, tech executives and data scientists maintained the pose that a digital economy run almost exclusively on the parsing of personal data and sensitive information would not only be competitive and fair but would somehow lead to a more democratic society. Just let Facebook and Google, along with untold other players large and small, tap into the drip-drip of personal data following you around the internet, and in return you’ll get free personalized services and—through an alchemy that has never been adequately explained—a more democratized public sphere.

...
As some critics have noted, the Trump/Cambridge Analytica story is less about a few rogue data scientists getting hold of millions of Facebook users’ data and more about Facebook being used exactly as it’s designed, pairing people with ads based on their behavior. While their acquisition of user data may have violated Facebook’s terms and conditions, Cambridge Analytica did what an endless number of advertising firms, not to mention Facebook itself, has been doing for years: cataloging internet users and appealing to them with specialized ads. This is what the system was designed to do. This is the logical end-product of surveillance capitalism.  [emphasis added] To fix this state of affairs, we don’t need, as so many newly minted design ethicists are now arguing, to build a better mousetrap. We need to demolish the house entirely, and try to imagine a new, more just world to live in.

and keeping things Baffler for a moment

https://thebaffler.com/latest/cambridge-analytica-con-Levine
...

Let’s start with the basics: What Cambridge Analytica is accused of doing—siphoning people’s data, compiling profiles, and then deploying that information to influence them to vote a certain way—Facebook and Silicon Valley giants like Google do every day, indeed, every minute we’re logged on, on a far greater and more invasive scale.

Today’s internet business ecosystem is built on for-profit surveillance, behavioral profiling, manipulation and influence. That’s the name of the game. It isn’t just Facebook or Cambridge Analytica or even Google. It’s Amazon. It’s eBay. It’s Palantir. It’s Angry Birds. It’s MoviePass. It’s Lockheed Martin. It’s every app you’ve ever downloaded. Every phone you bought. Every program you watched on your on-demand cable TV package. [emphasis added]

All of these games, apps, and platforms profit from the concerted siphoning up of all data trails to produce profiles for all sorts of micro-targeted influence ops in the private sector. This commerce in user data permitted Facebook to earn $40 billion last year, while Google raked in $110 billion.
What do these companies know about us, their users? Well, just about everything.

Silicon Valley of course keeps a tight lid on this information, but you can get a glimpse of the kinds of data our private digital dossiers contain by trawling through their patents. Take, for instance, a series of patents Google filed in the mid-2000s for its Gmail-targeted advertising technology. The language, stripped of opaque tech jargon, revealed that just about everything we enter into Google’s many products and platforms—from email correspondence to Web searches and internet browsing—is analyzed and used to profile users in an extremely invasive and personal way. Email correspondence is parsed for meaning and subject matter. Names are matched to real identities and addresses. Email attachments—say, bank statements or testing results from a medical lab—are scraped for information. Demographic and psychographic data, including social class, personality type, age, sex, political affiliation, cultural interests, social ties, personal income, and marital status is extracted. In one patent, I discovered that Google apparently had the ability to determine if a person was a legal U.S. resident or not. It also turned out you didn’t have to be a registered Google user to be snared in this profiling apparatus. All you had to do was communicate with someone who had a Gmail address.

On the whole, Google’s profiling philosophy was no different than Facebook’s, which also constructs “shadow profiles” to collect and monetize data, even if you never had a registered Facebook or Gmail account.
...
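A tangent before moving on: none of that profiling requires exotic technology, which is part of what makes it so banal. Here is a toy Python sketch of what "parsing email for demographic signals" might look like. Every category and keyword below is invented for illustration; the real systems the patents describe would be statistical models, not keyword lists.

# Toy illustration of keyword-based profiling of the general kind the
# patents describe. Categories and keywords are invented for this sketch.
SIGNALS = {
    "parent": ["daycare", "pta", "diapers"],
    "homeowner": ["mortgage", "escrow", "property tax"],
    "job seeker": ["resume", "interview", "cover letter"],
}

def profile(email_text):
    # Return every invented category whose keywords appear in the email.
    text = email_text.lower()
    return [label for label, words in SIGNALS.items()
            if any(word in text for word in words)]

print(profile("Attached is my resume; also, our mortgage payment posted."))
# ['homeowner', 'job seeker']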




Levine's proposal is that the nature of Facebook is relentless data mining and constant surveillance of a sort geared toward ever more refined targeted ads.  He camps out a bit on the internet having originated in the military-industrial establishment and argues that its uses at the corporate level have been inherently antidemocratic.  Levine also points to a Wired piece that described Obama as the first president to really effectively leverage the Facebook dynamic of the era.

https://www.wired.com/2008/11/propelled-by-in/

on a related note...

http://swampland.time.com/2012/11/20/friended-how-the-obama-campaign-connected-with-young-voters/

...

In the final weeks before Election Day, a scary statistic emerged from the databases at Barack Obama’s Chicago headquarters: half the campaign’s targeted swing-state voters under age 29 had no listed phone number. They lived in the cellular shadows, effectively immune to traditional get-out-the-vote efforts.

For a campaign dependent on a big youth turnout, this could have been a crisis. But the Obama team had a solution in place: a Facebook application that will transform the way campaigns are conducted in the future. For supporters, the app appeared to be just another way to digitally connect to the campaign. But to the Windy City number crunchers, it was a game changer. “I think this will wind up being the most groundbreaking piece of technology developed for this campaign,” says Teddy Goff, the Obama campaign’s digital director.

That’s because the more than 1 million Obama backers who signed up for the app gave the campaign permission to look at their Facebook friend lists. In an instant, the campaign had a way to see the hidden young voters. Roughly 85% of those without a listed phone number could be found in the uploaded friend lists. What’s more, Facebook offered an ideal way to reach them. “People don’t trust campaigns. They don’t even trust media organizations,” says Goff. “Who do they trust? Their friends.”

The campaign called this effort targeted sharing. And in those final weeks of the campaign, the team blitzed the supporters who had signed up for the app with requests to share specific online content with specific friends simply by clicking a button. More than 600,000 supporters followed through with more than 5 million contacts, asking their friends to register to vote, give money, vote or look at a video designed to change their mind. A geek squad in Chicago created models from vast data sets to find the best approaches for each potential voter. “We are not just sending you a banner ad,” explains Dan Wagner, the Obama campaign’s 29-year-old head of analytics, who helped oversee the project. “We are giving you relevant information from your friends.”

Early tests of the system found statistically significant changes in voter behavior. People whose friends sent them requests to register to vote and to vote early, for example, were more likely to do so than similar potential voters who were not contacted. That confirmed a trend already noted in political-science literature: online social networks have the power to change voting behavior. A study of 61 million people on Facebook during the 2010 midterms found that people who saw photos of their friends voting on Election Day were more likely to cast a ballot themselves. “It is much more effective to stimulate these real-world ties,” says James Fowler, a professor at the University of California at San Diego, who co-authored the study.
...
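Stripped of the gee-whiz framing, "targeted sharing" is a fairly simple matching exercise. Here is a minimal sketch in Python, with every name invented; the actual campaign ran this kind of match against a real voter file and a million supporters' friend lists.

# Hypothetical sketch of the 2012 "targeted sharing" mechanics: friend
# lists volunteered by app users are matched against voters the campaign
# had no phone number for. All names are made up.
unreachable_voters = {"Ava Jones", "Ben Lee", "Cara Diaz"}  # no listed phone

app_user_friends = {
    "supporter_1": ["Ava Jones", "Dan Wu"],
    "supporter_2": ["Ben Lee", "Cara Diaz", "Eve Park"],
}

# For each supporter, find which friends the campaign can't otherwise reach;
# those are the people the app asks the supporter to contact.
asks = {}
for supporter, friends in app_user_friends.items():
    matched = [f for f in friends if f in unreachable_voters]
    if matched:
        asks[supporter] = matched

print(asks)
# {'supporter_1': ['Ava Jones'], 'supporter_2': ['Ben Lee', 'Cara Diaz']}

The friend lists fill in contact routes the voter file doesn't have, and the "ask" arrives from a trusted friend rather than from the campaign.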

Leveraging viral, online, tech-savvy younger people toward a shared vision of a better future society?  That sounds just like what used to be Mars Hill. Targeted sharing sounds really fancy and high-tech if you think of it only in "viral" or technological terms, but in relational terms it was a conceptual shift in the application of an old concept: invite-your-friends-to-church stuff.  It may have had a fancier, tech-driven application, but it's relationship evangelism of a sort that's old hat in religious conversion activity.  The take-away is that Obama was able to secure a victory by leveraging social media platforms. 

When Myspace was a thing a decade ago, Mars Hill leaders urged everyone at the church to adopt and use it.  When Facebook came along, a similar culture-wide advocacy happened.  In many respects what made Mars Hill unusual for an evangelical movement was that instead of jumping on trends that were twenty or even thirty years old (think of any Christian author on the internet talking about the "postmodern worldview" as if postmodernism hasn't been around since the 1970s), the people of Mars Hill were on the wave rather than swimming after it.  The co-founders of Mars Hill were in the right place at the right time, appealing to younger Christians with evangelical sympathies in belief but hang-ups about the culture-warrior versions of conservative Protestantism, at a moment when the dot-com scene was kicking into high gear, social media was incubating, and an economy of behind-the-scenes data mining was being refined.  Put all that together with a hipster shock-jock pastor and the recipe worked, for about twenty years. 

And in many ways the success of the Obama campaign could suggest that Driscoll was merely, in a way perhaps true to the Seattle-area ethos, a locavore variation on something a campaigner like Obama and his team were able to do even more potently in national politics.  In that sense Driscoll and company were the non-profit amateur variation.  As some authors have attempted to assert a connection between a Mark Driscoll and a Donald Trump, it might be useful to turn the tables and situate Driscoll nearer a political figure of his own generation, one whose early success and mixed legacy in hindsight are more of a piece with Driscoll's peak celebrity.  In that sense Mark Driscoll's rise and fall may be more plausibly connected, in social-media-usage terms, to the era of Obama than to the era of Trump, even if progressives would like to associate Driscoll's views with Trump's.  In terms of the wild disconnect between what was promised and what was going on behind the scenes, progressives might want to explore the Driscoll/Obama comparison: two guys whose fans believed X was what was going to happen while Y was going on in actual policy.  But that's just floating an idea for the weekend.

Back to Facebook stuff.

Over at The Atlantic there are a few pieces of note:


https://www.theatlantic.com/technology/archive/2018/03/zuckerberg-facebook-cambridge-analytica-statement/556187/

But let’s look at the big questions that the Financial Times raised: “Why did Facebook take so little action when the data leak was discovered? ... Who is accountable for the leak? ... Why does Facebook accept political advertisements at all? ... Should not everyone who cares about civil society simply quit Facebook?”
On every single one of these questions, Zuckerberg offered nothing.
Facebook’s position has been considerably complicated by further revelations from people formerly inside the company and those who worked with the platform.

Sandy Parakilas, a former employee at Facebook who worked on a team dedicated to policing third-party app developers, told The Guardian that he warned Facebook executives about the problem of data leaking out of the company’s third-party applications, and was told: Do you really want to see what you’ll find? “They felt that it was better not to know,” he said. “I found that utterly shocking and horrifying.”

Equally troubling, Carol Davidsen, who worked on the Obama 2012 campaign, recently tweeted that Facebook knew they were pulling vast amounts of user data out of the system to use in political campaigning and did nothing to stop them. “Facebook was surprised we were able to suck out the whole social graph, but they didn’t stop us once they realized that was what we were doing,” she said. “They came to office in the days following election recruiting and were very candid that they allowed us to do things they wouldn’t have allowed someone else to do because they were on our side.”

The Obama team was not doing exactly the same things as Cambridge Analytica, but this is a shocking revelation about how much data was leaving Facebook, and how little was done to stop it.

In Zuckerberg’s statement about the weekend’s scandal, Facebook lays the blame squarely on a Cambridge psychology professor, Alex Kogan, for building an app that vacuumed up data from unwitting users and stored it outside the system, so that it could be used by Cambridge Analytica. And that is fair: Users could not have imagined that when they took a personality quiz, they would end up in the voter targeting database of a company associated with Steve Bannon.


But that is clearly not the only issue here.

One problem, as the Tow Center for Digital Journalism research director Jonathan Albright explained to me, is that apps developed by third parties were crucial to Facebook’s growth in the early 2010s. In order to secure the loyalty of developers who were helping grow the platform without being employed by the company, Facebook used the other currency at its disposal: user data.

What Facebook offered was a platform of users and the knowledge of their connections to each other. And what Facebook wanted back from that was user and engagement growth. Each party got what they wanted.


I.e., a thread evolving in the coverage and opinion pieces is that Facebook was basically doing what it normally does.  The Cambridge Analytica part is interesting, to be sure, but older people who never got on board the Facebook train, or younger people who like their real-world lives enough not to want to be on Facebook, may feel they have lost nothing by staying off it.

That's not exactly true, since the scope of data mining can see to it that there are ways to mine data about you even if you've never been online, but that's just a side thought. 

There's more from The Atlantic, of course, about data mining as usual.



 
For a spell during 2010 and 2011, I was a virtual rancher of clickable cattle on Facebook.

It feels like a long time ago. Obama was serving his first term as president. Google+ hadn’t arrived, let alone vanished again. Steve Jobs was still alive, as was Kim Jong Il. Facebook’s IPO hadn’t yet taken place, and its service was still fun to use—although it was littered with requests and demands from social games, like FarmVille and Pet Society.

I’d had enough of it—the click-farming games, for one, but also Facebook itself. Already in 2010, it felt like a malicious attention market where people treated friends as latent resources to be optimized. Compulsion rather than choice devoured people’s time. Apps like FarmVille sold relief for the artificial inconveniences they themselves had imposed.

In response, I made a satirical social game called Cow Clicker. Players clicked a cute cow, which mooed and scored a “click.” Six hours later, they could do so again. They could also invite friends’ cows to their pasture, buy virtual cows with real money, compete for status, click to send a real cow to the developing world from Oxfam, outsource clicks to their toddlers with a mobile app, and much more. It became strangely popular, until eventually, I shut the whole thing down in a bovine rapture—the “cowpocalypse.” It’s kind of a complicated story.
 
 
To understand why withdrawing data was the default behavior in Facebook apps, you have to know something about how apps get made and published on Facebook. In 2007, the company turned its social-network service into an application platform. The idea was that Facebook could grow its number of users and the time they spent engaged by allowing people and organizations to build services overtop of it. And those people and organizations would benefit by plugging into a large network of users, whose network of friends could easily be made a part of the service, both for social interaction and viral spread.

When you access an app on Facebook’s website, be it a personality-quiz, a game, a horoscope, or a sports community, the service presents you with an authorization dialog, where the specific data an app says it needs is displayed for the user’s consideration. That could be anything from your name, friend list, and email address, to your photos, likes, direct messages and more.

The information shared with an app by default has changed over time, and even a savvy user might never have known what comprised it. When I launched Cow Clicker in 2010, it was easier to acquire both “basic” information (name, gender, networks, and profile picture) and “extended” user information (location, relationship status, likes, posts, and more). In 2014, Facebook began an app review process for information beyond that which a user shared publicly, but for years before that, the decision was left to the user alone. This is consistent with Facebook’s longstanding, official policy on privacy, which revolves around user control rather than procedural verification.
 
Cow Clicker’s example is so modest, it might not even seem like a problem. What does it matter if a simple diversion has your Facebook ID, education, and work affiliations? Especially since its solo creator (that’s me) was too dumb or too lazy to exploit that data toward pernicious ends. But even if I hadn’t thought about it at the time, I could have done so years later, long after the cows vanished, and once Cow Clicker players forgot that they’d ever installed my app.

This is also why Zuckerberg’s response to the present controversy feels so toothless. Facebook has vowed to audit companies that have collected, shared, or sold large volumes of data in violation of its policy, but the company cannot close the Pandora’s box it opened a decade ago, when it first allowed external apps to collect Facebook user data. That information is now in the hands of thousands, maybe millions of people.

To be honest, I’m not even sure I know what the Facebook platform’s terms of service dictated that I do with user data acquired from Facebook. Technically, users could revoke certain app permissions later, and apps were supposed to remove any impacted data that they had stored. I doubt most apps did that, and I suspect users never knew—and still don’t know—that revoking access to an app they used eight years ago doesn’t do anything to reverse transmissions that took place years ago.
 
As Jason Koebler put it at Motherboard, it’s too late. “If your data has already been taken, Facebook has no mechanism and no power to make people delete it. If your data was taken, it has very likely been sold, laundered, and put back into Facebook.” Indeed, all the publicity around Facebook’s Cambridge Analytica crisis might be sending lots of old app developers, like me, back to old code and dusty databases, wondering what they’ve even got stored and what it might yet be worth.

Facebook’s laissez-faire openness surely contributed to the data-extraction free-for-all that’s playing itself out now via the example of Cambridge Analytica. But so did its move-fast-and-break-things attitude toward software development. The Facebook platform was truly a nightmare to use and to maintain. It was built like no other software system then extant, and it changed constantly—regular updates rolled out weekly. Old code broke, seemingly for no good reason. Some Facebook app developers were dishonest from the start, and others couldn’t help themselves once they saw the enormous volume of data they could slurp from millions or tens of millions of Facebook users. But many more were just struggling to eke out a part of their living in an ecosystem where people might discover them.
... 
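For anyone who never built one of these apps, the developer's side of the old platform looked roughly like the sketch below. This is a sketch only, not Facebook's actual SDK or a working recipe: the endpoints reflect the Graph API of that era as the articles describe it, and the token and field names are placeholders.

# Rough sketch of what a pre-2015 Facebook app could fetch once a user
# clicked "Allow" on the authorization dialog. Token and fields are
# placeholders; treat the details as approximate.
import requests

GRAPH = "https://graph.facebook.com"
token = "USER_ACCESS_TOKEN"  # granted via the app's authorization dialog

# "Basic" information about the person who installed the app...
me = requests.get(f"{GRAPH}/me",
                  params={"fields": "id,name,gender", "access_token": token}).json()

# ...and, under the old API, their friend list too, the design that let a
# quiz taken by one person expose people who never opted in to anything.
friends = requests.get(f"{GRAPH}/me/friends",
                       params={"access_token": token}).json()

for friend in friends.get("data", []):
    print(friend["id"], friend["name"])  # easily stored off-platform, as Bogost notes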
If progressives and the left want to use social media for activism, they might as well grant that using that stuff opens them up to constant monitoring by a gigantic web of corporate juggernauts who cumulatively form what some are calling a corporatist surveillance state.  It's not just that this is a feature and not a bug; the very nature of the business paradigm is to suck up monumental amounts of data, parse it, and use it to target ads and to find applications for the data wherever there might be any.  That Google, for instance, has had a close relationship with the Obama administration has been discussed here and there.  Meanwhile, I have a suspicion that conservatives of some self-labeled variety only seemed to care about this massive surveillance culture being too powerful when a Democrat held the executive branch.  Over the last twenty years I have gotten the distinct impression that Democrats and Republicans only want to debate what the formal ideology of a sweeping totalitarian police state mediated by corporate entities is going to be, not whether the thing "should" exist. 

So, obviously, one of the themes of the last week has been the observation that Facebook operating exactly as usual is how we got to the Cambridge Analytica news in the first place.

 
...
 
 
Known bugs are the set of problems with social media that aren’t the result of Russian agents, enterprising Macedonians, or even Steve Bannon, but seem to simply come with the territory of building a social network. People are mean online, and bullying, harassment, and mob behavior make online spaces unusable for many people. People tend to get stuck in cocoons of unchallenging, ideologically compatible information online, whether these are “filter bubbles” created by algorithms, or simply echo chambers built through homophily and people’s friendships with “birds of a feather.” Conspiracy theories thrive online, and searching for information can quickly lead to extreme and disturbing content.
 
The Cambridge Analytica breach is a known bug in two senses. Aleksandr Kogan, the Cambridge University researcher who built a quiz to collect data on tens of millions of people, didn’t break into Facebook’s servers and steal data. He used the Facebook Graph API, which until April 2015 allowed people to build apps that harvested data both from people who chose to use the app, and from their Facebook friends. As the media scholar Jonathan Albright put it, “The ability to obtain unusually rich info about users’ friends—is due to the design and functionality of Facebook’s Graph API. Importantly, the vast majority of problems that have arisen as a result of this integration were meant to be ‘features, not bugs.’”
 
In his non-apology, Zuckerberg claimed Facebook had already taken the most “important steps a few years ago in 2014 to prevent bad actors from accessing people’s information.” But changing the API Kogan used to collect this data is only a small part of a much bigger story.
 
To be clear, I believe Kogan acted unethically in allegedly collecting this data in the first place, and that giving this data to Cambridge Analytica was an unforgivable breach of research ethics. But Kogan was able to do this because Facebook made it possible, not just for him, but for anyone building apps using the Graph API. When Kogan claims he’s being made a scapegoat by both Cambridge Analytica and Facebook, he has a strong case: Selling data to Cambridge Analytica is wrong, sure, but Facebook knew that people like Kogan could access the data of millions of users. That’s precisely the functionality Facebook advertised to app developers.
 
Speaking with Laurie Segall on CNN this week, Zuckerberg emphasized that Facebook would investigate other app makers to see if anyone else was selling psychographic data they’ve collected through the Graph API. But Zuck didn’t mention that Facebook’s business model is based on collecting this demographic and psychographic information and selling the ability to target ads to people using this data about them.
 
This is a known bug not just for Facebook and other social networks, but for the vast majority of the contemporary web. Like Facebook, Google develops profiles of its users, with information from people’s private searches and tools like Gmail and office applications, to help advertisers target messages to them. As you read this article on The Atlantic, roughly three dozen ad trackers are watching you, adding your interest in this story to profiles they maintain on your online behavior. (If you want to know more about who’s watching you, download Ghostery, a browser extension that tracks and can block these “third-party” trackers.) The Atlantic is not unusual. Most ad-supported websites track their users, as part of agreements that seek to make their ad inventory more valuable.
...
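The "friends too" design is also why the numbers balloon the way they do. A back-of-envelope calculation: the roughly 270,000 installs widely reported for Kogan's app is my assumption here, since the pieces quoted above only give the 50 million figure.

# Back-of-envelope amplification math. The install count is an assumption
# (Facebook's widely reported figure), not something from the quoted articles.
installers = 270_000           # people who actually used Kogan's quiz app
profiles_reached = 50_000_000  # profiles reportedly obtained
print(round(profiles_reached / installers))  # ~185 profiles per consenting installer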
 
In the midst of all this, Zuckerberg has not come across as the best face for ameliorating concerns about what Facebook has allowed to happen.

 
...
 ...
But Facebook’s mission is at the very heart of the current scandal. These are the questions that the Financial Times, a paper not exactly known for its anti-corporate fervor, asked this week: “Why did Facebook take so little action when the data leak was discovered? ... Who is accountable for the leak? ... Why does Facebook accept political advertisements at all? ... Should not everyone who cares about civil society simply quit Facebook?”

Mark, maybe now is not the time to assume that everyone loves your “mission.”

Reading through the transcripts, Zuckerberg reminded me of another troubled communicator: Hillary Clinton. In public, he is just as methodical, just as unvaried as the erstwhile presidential candidate; above all, he sometimes seems to take his own correctness for granted.

Like Clinton, Zuckerberg’s attempts to sound contrite come across as humblebrag-like excuses. “I started this when I was so young and inexperienced,” he told CNN. “I’ve probably launched more products that have failed than most people will in their lifetime.”

And like Clinton, Zuckerberg has never nailed the art of the rousing emotional appeal. He has never delivered a stirring invitation to join Facebook; he has never convinced a large audience that he feels their pain. (Unlike Zuckerberg, Clinton was reportedly quite successful at this in private.)
Zuckerberg is, in short, not a performer, and he lacks the performer’s feel for how an earnest and witty performance can soften the hearts of even the most skeptical crowd.
 
Facebook’s problem is simple: It is a staggeringly powerful civic and commercial institution that has lost the public’s trust. Talking to the press is one way to regain that trust. But as chief executive—and as the face of the company since its inception—Zuckerberg must show that he understands that he even lost that trust in the first place, let alone why. Then he has to ask users for their trust back and take responsibility for fixing it.

And if he wants the public to think fondly of Facebook again, he has to recruit us to it, has to remind us of the beauty of a connected world, has to act like the vessel of a tremendous societal responsibility.

Instead, he shrugs and implies that it would be impossible for anyone in his position to do any better.
“There’s no way that sitting in a dorm in 2004 you’re going to solve everything up front,” he told Wired. “It’s an inherently iterative process, so I don’t tend to look at these things as: Oh, I wish we had not made that mistake. I mean, of course I wish we didn’t make the mistakes, but it wouldn’t be possible to avoid the mistakes. It’s just about, how do you learn from that and improve things and try to serve the community going forward?”
 
... 
 
And then there's something at Slate.
 
 
It sounds like the stuff of spy novels. A secretive company backed by an eccentric billionaire taps into sensitive data gathered by a University of Cambridge researcher. The company then works to help elect an ultranationalist presidential candidate who admires Russian President Vladimir Putin. Oh, and that Cambridge researcher, Aleksandr Kogan, worked briefly for St. Petersburg State University. And his research was designed to develop ways to psychologically profile and manipulate voters. 
       
Before we go too deep down the rabbit hole, let’s reiterate that the data Cambridge Analytica gathered to try to target more than 50 million Facebook users in the United States was not stolen from Facebook or removed after some security flaw or “data breach.” The real story is far less dramatic but much more important. It’s such an old story that the Federal Trade Commission investigated it and punished Facebook back in 2011.
It’s such a deep story that social media researchers have been warning about such exploitative practices since at least 2010, and many of us complained when the Obama campaign in 2012 used the same kinds of data that Cambridge Analytica coveted. Obama targeted voters and potential supporters using software that ran outside of Facebook. It was a problem then. It’s a problem now.
 
But back in 2012, the Obama story was one of hope continued, and his campaign’s tech-savvy ways were the subject of gee-whiz admiration. So academic critics’ concerns fell silent. Just as importantly, Facebook’s reputation in 2012 was at its peak. The platform’s usage kept growing globally as did the glowing, if misleading, accounts of its potential to improve the world after the 2011 revolution in Egypt.    
 
Between about 2010 and 2015, Facebook was a data-exporting machine. Facebook gave data—profiles of users who agreed to take one of those annoying quizzes that proliferated around Facebook between 2010 and 2015, but also records of those who were Facebook friends with those users—to developers who built cute and clever functions onto Facebook. These included games like Mafia Wars, Words with Friends, or FarmVille. You might have played, and thus unwittingly permitted the export of data about you and your friends, to other companies.
 
...
 
The Federal Trade Commission saw this as a problem. In 2011 the agency released a report after an investigation revealed that Facebook had deceived its users over how personal data was being shared and used.
Among other violations of user trust, the commission found that Facebook had promised users that third-party apps like FarmVille would have access only to the information that they needed to operate. In fact, the apps could access nearly all of users’ personal data—data the apps didn’t need. While Facebook had long told users they could restrict sharing of data to limited audiences like “Friends Only,” selecting “Friends Only” did not limit third-party applications from vacuuming up records of interactions with friends.
       
The FTC’s conclusions were damning. They should have alarmed Americans—and Congress—that this once-huggable company had lied to them and exploited them.
 
...
 
For 2016, Facebook would do the voter targeting itself. Now Facebook is the hot new political consultant because it controls all the valuable data about voter preferences and behavior. No one needs Cambridge Analytica or the Obama 2012 app if Facebook will do all the targeting work and do it better. 
       
This is the main reason why we should stay steady at the rim of the Cambridge Analytica rabbit hole. Cambridge Analytica sells snake oil. Its “psychometric” voter targeting systems don’t work. No campaign has embraced them as effective. And Cambridge Analytica CEO Alexander Nix even admitted that the Trump campaign did not deploy psychometric profiling. Why would it? It had Facebook to do the dirty work for it. Cambridge Analytica tries to come off as a band of data wizards. But they are simple street magicians hoping to fool another mark and cash another check. 
       
So now, to hear Facebook officials complain that they were tricked or victimized by Cambridge Analytica is rich. It was Facebook’s responsibility—by law—to prevent application developers from doing just what Kogan and Cambridge Analytica did. Facebook failed us, and not for the first time. 
       
While focused on Cambridge Analytica’s psychometric snake oil and on its ties to Russia and to Trump, we are missing the real story: This massive data exporting was Facebook policy and practice from 2010 to 2015. The problem with Facebook is Facebook.
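Backing up to the FTC findings quoted above, one detail is worth making concrete: the "Friends Only" setting governed what other people could see, not what a friend's app could export. A toy model of that distinction, with everything invented for illustration:

# Toy model of the FTC's 2011 finding: audience settings constrained human
# viewers but not third-party apps installed by a friend. All invented.
post = {"author": "alice", "text": "family photo", "audience": "friends_only"}

def visible_to_person(post, viewer_is_friend):
    # The audience setting works as advertised for people browsing Facebook.
    return post["audience"] != "friends_only" or viewer_is_friend

def exported_to_app(post, app_installed_by_friend):
    # ...but on the pre-2015 platform a friend's app could pull it anyway.
    return app_installed_by_friend

print(visible_to_person(post, viewer_is_friend=False))      # False: stranger blocked
print(exported_to_app(post, app_installed_by_friend=True))  # True: the app gets it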
 
 
and while I could add to this by highlighting how platforms like Facebook and Twitter are, by their very nature, what Jacques Ellul would have categorized as engines of sociological and horizontal propaganda, I don't feel like writing that just yet.
 
But what I will note is that Mark Zuckerberg's juggernaut is a bigger and more notorious version of the kind of thing The City is (it still exists).  If there are lessons from a former Mars Hill member about the constraints and uses of The City, it's that it was quickly fashioned into a top-down information silo system, and the things I didn't much like about The City are things I am not hugely fond of about Facebook.  Facebook is a propaganda and marketing apparatus, and so long as you bear that in mind you can use it with an understanding of its strengths and weaknesses.  Yes, it can be used to reinforce and maintain existing relationships, but that's what the designer of The City was hoping to accomplish, and that's pretty clearly not how Mars Hill ended up using The City in the seven years after it started beta-testing. 
 
So as I read the headlines and articles and editorials about Facebook being discovered to be what it has been all along, it's easy to feel "some" outrage, but it's also easy to see, as so many quoted here have shown, that what happened with Cambridge Analytica is nothing compared to what Facebook and similar corporate giants have been doing since their inception. 
 
 
 
