Don't experiment on me!

Just what is an "experiment", rather than, say, a bit of tweaking? Though I've asked this question throughout this column, I really don't have an answer. If something is done to us without our express permission, does that mean it's being done behind our backs? And if so, does it mean that some sort of test is being conducted, one in which our response is the end product? If something were done transparently, without secrets, with full exposure of the intent of some particular tweaking, would we then feel that we're not the lab rats in an experiment we didn't authorize?

Cass Sunstein's interventions are purposeful. His intent is to help people make what he thinks are the right decisions. Obviously someone can object and ask "who gave you the right to decide what my right decisions should be?", but Sunstein can honestly respond that value judgments are built into any and every policy. Sunstein's actions revolve around what's referred to as choice architecture. Wikipedia explains:

Choice architecture is the design of different ways in which choices can be presented to consumers, and the impact of that presentation on consumer decision-making. For example, the number of choices presented, the manner in which attributes are described, and the presence of a “default” can all influence consumer choice. As a result, advocates of libertarian paternalism and asymmetric paternalism have endorsed the deliberate design of choice architecture to nudge consumers toward personally and socially desirable behaviors like saving for retirement, choosing healthier foods, or registering as an organ donor.

Is this really objectionable? I guess it depends. Though I don't really see it as a problem, Evgeny Morozov is probably justified in fearing social engineering. In a 2013 Wall Street Journal op-ed, Morozov wrote:
A number of thinkers in Silicon Valley see these technologies as a way not just to give consumers new products that they want but to push them to behave better. Sometimes this will be a nudge; sometimes it will be a shove. But the central idea is clear: social engineering disguised as product engineering.
...
As smart technologies become more intrusive, they risk undermining our autonomy by suppressing behaviors that someone somewhere has deemed undesirable. Smart forks inform us that we are eating too fast. Smart toothbrushes urge us to spend more time brushing our teeth. Smart sensors in our cars can tell if we drive too fast or brake too suddenly.

Morozov's fear is that in the end we may well act in our own best interests, but we'll have lost the ability to make our own decisions, an ability that should be a determining characteristic of our freedom and of our humanity. He may be right.


Go to: What isn't an experiment?