I like a good rumor, though more often than not I'm willing to accept the verdict of Snopes.com - even when it confirms that a juicy rumor, probably too good to be true, isn't. As far as I’m concerned, the more far-out the rumor the better, but when it comes to truth it seems that real life is a bit more banal than we'd like to believe. If we find ourselves exclaiming "did that really happen?", chances are good that the answer is "no", and Snopes is a good place to get the down-to-earth information that forces us, against our hopeful imaginations, to accept that answer.
That being the case, I see no reason to reject the Snopes judgment that Facebook's invitation to add a rainbow filter to members' profile photos wasn't yet another experiment that the company was running on us without our consent. The Snopes page on the issue reviews the rather predictable history of the claim and reaches the unsurprising, if perhaps disappointing, conclusion that this was simply a nice gesture that took off virally.
Probably more often than not our desire to believe a juicy rumor coincides with our willingness to believe just about anything about Facebook - a willingness that many will claim Facebook has well-earned. Facebook has, after all, conducted experiments. The best known of these was the News Feed emotions A/B experiment from about a year ago. This particular case wasn't a rumor, but a real experiment, openly acknowledged by Facebook. Whether or not it was really nefarious is, however, a different question. An article in Wired from May of last year clearly explained, if that's the right word for it, what took place:
For one week in 2012, Facebook altered the algorithms it uses to determine which status updates appeared in the News Feed of 689,003 randomly selected users (about 1 of every 2,500 Facebook users).

That rather clearly stated explanation, however, seems to be based on the assumption that there's some sort of Ground Zero at which Facebook never determined which status updates appear in users' News Feeds. And of course that wasn't, and couldn't be, the case. As the Wired article, quoting from the published paper on the experiment, makes clear:
"[b]ecause people's friends frequently produce much more content than one person can view," Facebook ordinarily filters News Feed content "via a ranking algorithm that Facebook continually develops and tests in the interest of showing viewers the content they will find most relevant and engaging."

In other words, somebody at Facebook "knows" what its users would consider "most relevant and engaging" and, in order to see to it that those so-determined items will be those most prominent in the Feed, is continually tweaking its algorithm. I suppose that it's possible, perhaps even easy, to find some sort of diabolical intent here. For that reason it's perhaps worth reading what Adam Kramer, the chief researcher from Facebook on the study, wrote (admittedly after being criticized) about the reason for the experiment:
We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out.

I would never be in a hurry to assume that Facebook has the best interests of its users at heart, but nothing in the evidence suggests a reason not to accept Kramer's explanation.
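The sampling and ranking mechanics quoted from Wired above can be sketched, very loosely, in a few lines of Python. This is purely an illustration, not Facebook's actual system: the hashing trick for picking 1 in every 2,500 users, the relevance scores, and the down-weighting of positive posts are all assumptions of mine (the real experiment omitted a fraction of emotional posts rather than penalizing a score).

```python
import hashlib

SAMPLE_RATE = 2500  # roughly 1 in every 2,500 users, per the Wired description


def in_experiment(user_id: str) -> bool:
    """Deterministically place a small random-looking slice of users in the test group."""
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return digest % SAMPLE_RATE == 0


def rank_feed(stories, in_test: bool):
    """Order stories by a relevance score; in the test condition, down-weight
    positive posts (a hypothetical stand-in for the emotion manipulation)."""
    def score(story):
        s = story["relevance"]
        if in_test and story["sentiment"] == "positive":
            s *= 0.5  # assumed penalty, for illustration only
        return s
    return sorted(stories, key=score, reverse=True)


# Every user's feed passes through rank_feed either way; the experiment
# merely changed one term of the scoring function for the sampled users.
feed = rank_feed(
    [{"relevance": 1.0, "sentiment": "positive"},
     {"relevance": 0.8, "sentiment": "negative"}],
    in_test=in_experiment("some_user_id"),
)
```

The point the sketch makes is the essay's point: control and test users go through the same ranking function, differing only in one term, so there was never an unfiltered "Ground Zero" feed to begin with.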
If you want to read the news, Facebook is saying, come to Facebook, not to NBC News or The Atlantic or The Times - and when you come, don't leave.

Though Miller referred to this as an experiment, I doubt that many others saw it as such. The Facebook announcement of this new Instant Articles policy didn't include the word "experiment", and it's fair to say that Facebook users weren't being used as guinea pigs. Perhaps more importantly, they didn't see themselves as being used in that way. But this was still definitely something that Facebook was doing to us, and something that, if it led to successful (as determined by Facebook) results, would be maintained. And perhaps most important, in the long run the significance of this "experiment" goes well beyond the question of how far our moods are affected by the items in our feeds.
From an analytics perspective, targeting by operating system and pricing accordingly may not be such a bad idea. The bonehead move of the century is Orbitz yapping about it. Orbitz did note that pricing by OS is just an experiment.

It would seem that an "experiment" of that sort is specifically designed to test whether a specific demographic of users will put up with this, or whether a company can get away with it. An article in the International Business Times from a year ago makes it clear that "experiments" of this sort are still going on, and perhaps becoming more popular as smartphones become the buying device of choice:
Computer science researchers from Boston's Northeastern University have proved that e-commerce sites are tracking the online shopping habits of people and will charge individuals different prices, depending on what type of device they are using to access a website.

Certainly nobody appreciates being profiled, particularly by a computer, in order to be charged more than somebody else. But when "experiments" of the Orbitz sort occur on the web we're not necessarily aware that something different is happening to us, or that it's because we've been profiled in one way or another. So even though this sort of thing offends our sense that "everybody is equal in cyberspace", it's not necessarily something that everyone is aware of. With a Facebook experiment, however, we're clearly not alone. We're very aware that it isn't only happening to us, but to many others as well.

But if everything is an experiment, perhaps there's really nothing new here and all of us, myself included, are overreacting. Perhaps it's simply that our awareness of what Facebook is doing is greater because it's taking place in a medium which is so overtly public. And perhaps it's this publicness that makes us particularly sensitive to it. A supermarket, however, is also public, and the changes in product placement on its shelves may also be a manipulation, perhaps even a blatant one. A good case can be made that manufacturers and marketers who are willing to put out good money to get their products more in our faces than their competitors' are "experimenting" on us. But this sort of activity, even if it's distasteful, doesn't generate conspiracy theories. More than anything else, I get the feeling that our reaction, or over-reaction, to "experiments" of this sort is a sign of how attached we've become to social media, but also of how little we really understand them.
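The device-based price steering that the Northeastern researchers describe can be sketched in a few lines. This is a hypothetical illustration, not any site's actual code: the device categories, the markup percentages, and the function names are all invented, though the underlying mechanism - inferring the device from the HTTP User-Agent header and adjusting the quoted price - is the one the article reports.

```python
# Invented markups for illustration; no real site's numbers are implied.
DEVICE_MARKUP = {
    "ios": 1.10,      # assumed: Apple mobile users quoted ~10% more
    "android": 1.05,  # assumed: Android users quoted ~5% more
    "desktop": 1.00,  # baseline price
}


def classify_device(user_agent: str) -> str:
    """Very crude User-Agent sniffing, just enough for the sketch."""
    ua = user_agent.lower()
    if "iphone" in ua or "ipad" in ua:
        return "ios"
    if "android" in ua:
        return "android"
    return "desktop"


def quote_price(base_price: float, user_agent: str) -> float:
    """Return the price shown to this visitor, steered by device type."""
    device = classify_device(user_agent)
    return round(base_price * DEVICE_MARKUP[device], 2)
```

Seen this way, the unsettling part is how little the server needs: the User-Agent header arrives with every request, so the visitor never learns that a different device would have been quoted a different number.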
The almost universal acceptance of Facebook suggests that a News Feed that objectively offered up to us all the new "information" that our friends found relevant or noteworthy somehow made sense to us. Even without prior experience with what we've come to know (in a very short period of time) as social media, in the abstract this Feed was logical, even natural. So natural that we hardly stopped to question why it should work in one way instead of another. And when it started working differently, we responded to that as an attempt to manipulate us, instead of realizing that we'd always been manipulated.
Previous research and media investigations since 2000 have found that Amazon and US retailer Staples had been charging users different prices for products due to estimations of their wealth and geographic location, a practice known as "price discrimination".
But it now seems that the problem could be more widespread across the global e-commerce industry than previously thought, which is interesting since mobile shopping is on the rise.