In two previous posts I’ve discussed the need for (re)thinking the relationship of law and technology and the difficulty of knowing what we speak about when we speak about ‘technology’.
The problem that law (or perhaps it is lawyers) has with new ‘technologies’ stems from law’s orientation in time; law changes relatively slowly, and law is orientated towards the past. The result is three related sources of dissonance.
“The strongest impacts of an emergent technology are always unanticipated. You can’t know what people are going to do until they get their hands on it and start using it on a daily basis, using it to make a buck and using it for criminal purposes and all the different things that people do.”
Those are the words of William Gibson in a recent interview in The Paris Review.
The result is that any prediction of how a particular new technology will change the future must necessarily be wrong, probably in important ways. That gap between prediction and experience has become a feature of the present. Gibson’s own career illustrates the point nicely: he co-founded a movement in science fiction writing that was critical of the simultaneously bland and triumphalist vision of (white, male and wealthy) scientists ruling humanity and conquering the physical universe. Gibson opposed a dystopian future to the utopian futures popular in science fiction at that time. Three decades later he sets his work in (a slightly alternative) present, a present characterised by the same ambivalence about the products of human ingenuity, and the same gap between the expectations of those who first introduce those products and the experiences of those who come to use them. Or in Gibson’s own words: “The future is already here — it’s just not very evenly distributed.”
We could call that the prediction or expectation problem. Neither positive nor negative outcomes can be completely predicted, and so a cost/benefit analysis of a particular rule just isn’t possible.
While being unable to predict negative outcomes is problematic, it is unexpected positive outcomes that are far more difficult. That is because in any economy those who are currently most successful, and thus have the most money and power, are also those whose business models are likely to be disrupted by new technologies that simultaneously introduce greater efficiencies and eliminate their profit margins. This is a type of collective action problem. An economy may gain efficiencies and most people stand to benefit, but a small wealthy and powerful group stands to lose that very wealth and power. This could be called the Hercules problem: only a new technology or firm strong enough to survive the attempt to strangle it at birth lives long enough to be valued.
A third problem is that new technologies often raise novel ethical issues. Is it ethical for employers to monitor the private email of employees? Should patents be awarded over human genes? Society, the collective of people in whose interests laws are (avowedly) made, hasn’t yet had time to develop a consensus on such difficult issues. This could be called the ethical-consensus problem.