From the Boidem - 
an occasional column on computers and information technologies in everyday life

August 24, 2015*: What isn't an experiment?

I like a good rumor, though more often than not I'm willing to accept the verdict of Snopes.com - even when its verdict is that a juicy rumor, probably too good to be true, simply isn't true. As far as I'm concerned, the more far-out the rumor the better, but when it comes to truth, real life seems to be a bit more banal than we'd like to believe. If we find ourselves exclaiming "did that really happen?", chances are good that the answer is "no", and Snopes is a good place to get the down-to-earth information that forces us, against our hopeful imaginations, to accept that answer.

That being the case, I see no reason to reject the Snopes judgment that Facebook's invitation to add a rainbow filter to members' profile photos wasn't yet another experiment that the company was running on us without our consent. The Snopes page on the issue reviews the rather predictable history of the claim and reaches the not surprising, if perhaps disappointing, conclusion that this was simply a nice gesture that took off virally.

Probably more often than not our desire to believe a juicy rumor coincides with our willingness to believe just about anything about Facebook - a willingness that many will claim Facebook has well-earned. Facebook has, after all, conducted experiments. The best known of these was the News Feed emotions A/B experiment from about a year ago. This particular case wasn't a rumor, but a real experiment, openly acknowledged by Facebook. Whether or not it was really nefarious is, however, a different question. An article in Wired from May of last year clearly explained, if that's the right word for it, what took place:

For one week in 2012, Facebook altered the algorithms it uses to determine which status updates appeared in the News Feed of 689,003 randomly selected users (about 1 of every 2,500 Facebook users).
That rather clearly stated explanation, however, seems to rest on the assumption that there was some sort of ground zero at which Facebook didn't determine which status updates appear in users' News Feeds. And of course that wasn't, and couldn't be, the case. As the Wired article, quoting from the published paper on the experiment, makes clear:
"[b]ecause people's friends frequently produce much more content than one person can view," Facebook ordinarily filters News Feed content "via a ranking algorithm that Facebook continually develops and tests in the interest of showing viewers the content they will find most relevant and engaging."
In other words, somebody at Facebook “knows” what its users would consider “most relevant and engaging”, and continually tweaks the algorithm so that those items are the most prominent in the Feed. I suppose that it's possible, perhaps even easy, to find some sort of diabolical intent here. For that reason it's perhaps worth reading what Adam Kramer, Facebook's lead researcher on the study, wrote (admittedly only after being criticized) about the reason for the experiment:
We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out.
I would never be in a hurry to assume that Facebook has the best interests of its users at heart, but I'm not sure the evidence gives us any reason not to accept Kramer's explanation.
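
For readers who prefer mechanics to rhetoric, here is a minimal sketch, in Python and entirely of my own invention, of how an assignment like the one described above might work: user ids are hashed deterministically so that roughly 1 in 2,500 land in the experimental condition, and for those users a fraction of emotionally loaded posts is filtered out of the feed. The names, word lists, and probabilities are illustrative assumptions, not Facebook's code.

    import hashlib
    import random

    # Illustrative only: deterministic assignment of roughly 1 in 2,500 users to
    # an experimental condition, in the spirit of the study described above.
    # None of these names or numbers come from Facebook's actual systems.

    EXPERIMENT_NAME = "emotion_feed_2012"   # hypothetical label
    BUCKET_COUNT = 2500                     # "about 1 of every 2,500 users", per Wired

    def in_experiment(user_id: str) -> bool:
        """Hash the user id together with the experiment name so assignment is stable."""
        digest = hashlib.sha256(f"{EXPERIMENT_NAME}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % BUCKET_COUNT == 0

    POSITIVE_WORDS = {"great", "happy", "love"}   # stand-in word lists; the published
    NEGATIVE_WORDS = {"sad", "awful", "angry"}    # study reportedly used the LIWC lexicon

    def filter_feed(user_id: str, posts: list[str], reduce: str = "positive") -> list[str]:
        """For users in the experiment, drop a fraction of emotionally loaded posts;
        everyone else sees the normally ranked feed untouched."""
        if not in_experiment(user_id):
            return posts
        words = POSITIVE_WORDS if reduce == "positive" else NEGATIVE_WORDS
        kept = []
        for post in posts:
            loaded = any(w in post.lower() for w in words)
            if loaded and random.random() < 0.5:   # omit some loaded posts
                continue
            kept.append(post)
        return kept

The point of the sketch is simply that "the algorithm" is always making choices of this sort; the experiment only changed which choices were being made for a small, randomly chosen slice of users.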

In this best known (or most infamous) Facebook A/B experiment we learned only later that some users had been fed one sort of information and others another, and it caused quite an uproar. In The New York Times this past May, however, Claire Cain Miller explained Why Facebook’s News Experiment Matters to Readers. Miller wasn't referring to the "emotions" experiment, but to something probably considerably more significant. As she explained, what Facebook was doing in this different case was trying to make it unnecessary for us to leave its site to get the news:
If you want to read the news, Facebook is saying, come to Facebook, not to NBC News or The Atlantic or The Times — and when you come, don’t leave.
Though Miller referred to this as an experiment, I doubt that many others saw it as such. The Facebook announcement of this new Instant Articles policy didn't include the word "experiment", and it's fair to say that Facebook users weren't being used as guinea pigs. Perhaps more importantly, they didn't see themselves as being used that way. But this was still definitely something that Facebook was doing to us, and something that would be maintained if it produced results that Facebook deemed successful. And perhaps most important, in the long run the significance of this "experiment" goes well beyond the question of how much our moods are affected by the items in our feeds.

And of course there's nothing new about this either. Back in the days of AltaVista we assumed, perhaps incorrectly, that the results of a search were ordered on an objective scale – more or less where the term we were looking for showed up on a page, or in the title. Google's PageRank changed all this for what most of us view as the better, giving us a measure of how desirable a certain result was. But this change also gnawed away at that original objectivity. Perhaps we ourselves were influencing ... ourselves, but again, something was being done to us, and we weren't being asked whether it was something we wanted. And, to be fair, it should be noted that Google is constantly running experiments to "improve" the user experience.
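
To make the contrast concrete, here is a toy sketch, again in Python and again purely my own illustration, of the two ranking intuitions just mentioned: an on-page score of the AltaVista sort (where the term appears, and whether it shows up in the title) versus a link-popularity score in the spirit of PageRank, here crudely reduced to counting inbound links, whereas the real algorithm iterates over the whole link graph.

    # Toy contrast between the two ranking ideas mentioned above.
    # Purely illustrative; neither engine's actual code.

    def on_page_score(term: str, title: str, body: str) -> float:
        """AltaVista-era intuition: a title match and an early occurrence score high."""
        score = 0.0
        if term.lower() in title.lower():
            score += 10.0
        position = body.lower().find(term.lower())
        if position >= 0:
            score += 1.0 / (1 + position)   # the earlier the term appears, the higher
        return score

    def link_score(page: str, inbound_links: dict[str, list[str]]) -> float:
        """Crude popularity proxy: count the pages linking in. Real PageRank
        weights each link by the linking page's own score and iterates
        until the scores converge."""
        return float(len(inbound_links.get(page, [])))

The first score depends only on the page itself; the second depends on what everyone else links to, which is exactly the sense in which we began influencing ourselves.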

Basically, there's no such thing as a null set. Anything, including doing nothing, is “intervening”, and we interpret any intervention that changes the situation we've (incorrectly) identified as “natural” as experimenting on us. This probably explains why so many people view what Cass Sunstein (together with Richard Thaler) writes about in Nudge as illegitimate, though to a large extent all he's really doing is changing the default parameters on various issues. Am I being too lenient here? Probably. The fact that everything may be an experiment doesn't mean we should accept with open arms that this is being done to us, especially if the objective is to get us to spend more money. But it should also give us a bit of perspective. I don't remember whether my high school cafeteria played around with the menu to see which foods we were more likely to buy (frankly, I don't remember eating in my high school cafeteria, though I suppose that I at least occasionally did). Today many parents complain about attempts by school cafeterias to weed out unhealthy foods, calling them an illegitimate social experiment. If those parents had grown up with school cafeterias that served only healthy foods, would attempts at "weeding in" unhealthy foods also be seen as a social experiment?

Sometimes, particularly in our digital times, the problematic aspect of these interventions is that they're not distributed equally, and they're made possible by the profiling that's more or less inherent in our reliance on things digital. Higher hotel rates for Mac users aren't an urban legend. Three years ago Orbitz acknowledged that, after identifying the operating system of users who logged in to book hotel rooms, it charged the Mac users, usually considered a richer segment of the population, more. And interestingly, as Larry Dignan notes in that article:
From an analytics perspective, targeting by operating system and pricing accordingly may not be such a bad idea. The bonehead move of the century is Orbitz yapping about it. Orbitz did note that pricing by OS is just an experiment.
It would seem that an "experiment" of this sort is designed precisely to test whether a particular demographic of users will put up with it, or whether a company can get away with it. An article in the International Business Times from a year ago makes it clear that such "experiments" are still going on, and are perhaps becoming more popular as smartphones become the buying device of choice:
Computer science researchers from Boston's Northeastern University have proved that e-commerce sites are tracking the online shopping habits of people and will charge individuals different prices, depending on what type of device they are using to access a website.
Previous research and media investigations since 2000 have found that Amazon and US retailer Staples had been charging users different prices for products due to estimations of their wealth and geographic location, a practice known as "price discrimination".
But it now seems that the problem could be more widespread across the global e-commerce industry than previously thought, which is interesting since mobile shopping is on the rise.
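
What that kind of device-based steering might look like on the server side can be sketched in a few lines. This is a hypothetical illustration of the general technique described in the Orbitz and Northeastern examples (inspecting the User-Agent header and adjusting what the visitor sees), not the actual logic of Orbitz or any retailer, and the platform check and markup figure are invented.

    # Hypothetical sketch of device-based price steering of the sort described
    # above: the server inspects the User-Agent header and adjusts the quote.
    # The platform check and the 15% markup are invented for illustration.

    BASE_NIGHTLY_RATE = 100.00

    def quoted_rate(user_agent: str) -> float:
        """Return a nightly rate that depends on the visitor's apparent platform."""
        ua = user_agent.lower()
        if "macintosh" in ua or "mac os x" in ua:
            return round(BASE_NIGHTLY_RATE * 1.15, 2)   # presumed-richer segment
        return BASE_NIGHTLY_RATE

    # The same room, quoted to two different browsers:
    print(quoted_rate("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10)"))  # 115.0
    print(quoted_rate("Mozilla/5.0 (Windows NT 6.1; Win64; x64)"))       # 100.0

The unsettling part isn't the handful of lines of code; it's that nothing on the resulting page tells the visitor that the quote was tailored to them.
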
Certainly nobody appreciates being profiled, particularly by a computer, in order to be charged more than somebody else. But when "experiments" of the Orbitz sort occur on the web we're not necessarily aware that something different is happening to us, or that it's happening because we've been profiled in one way or another. So even though this sort of thing offends our sense that "everybody is equal in cyberspace", it can pass largely unnoticed. With a Facebook experiment, however, we're clearly not alone. We're very aware that it isn't happening only to us, but to many others as well.

But if everything is an experiment, perhaps there's really nothing new here and all of us, myself included, are overreacting. Perhaps it's simply that our awareness of what Facebook is doing is greater because it's taking place in a medium that is so overtly public. And perhaps it's this publicness that makes us particularly sensitive to it. A supermarket, however, is also public, and the changes in product placement on its shelves may also be a manipulation, perhaps even a blatant one. A good case can be made that manufacturers and marketers who are willing to put out good money to get their products more in our faces than their competitors' are "experimenting" on us. But this sort of activity, even if it's distasteful, doesn't generate conspiracy theories.

More than anything else, I get the feeling that our reaction, or over-reaction, to "experiments" of this sort is a sign of how attached we've become to social media, but also of how little we really understand them. The almost universal acceptance of Facebook suggests that a News Feed that objectively offered up all the new "information" that our friends found relevant or noteworthy somehow made sense to us. Even without prior experience with what we've come to know (in a very short period of time) as social media, in the abstract this Feed was logical, even natural. So natural that we hardly stopped to question why it should work in one way rather than another. And when it started working differently, we responded as if it were an attempt to manipulate us, instead of realizing that we'd always been manipulated.


That's it for this edition. Reactions and suggestions can be sent to:

Jay Hurvitz

