Ethics in Technology
Sometimes we're so occupied trying to figure out how to do things that we forget to figure out whether we should.
Earlier this week, I was reading W.I.B. Beveridge's classic text, The Art of Scientific Investigation, and was struck by one passage in particular: “Pasteur's rabies treatment has never been proved by proper experiment to prevent rabies when given to persons after they have been bitten...it is impossible to conduct a trial in which this treatment is withheld from a control group of bitten persons.”
What if that treatment really isn't particularly effective? Might more harm than good be done to public health by continuing the practice? What if it were possible to use proper scientific analysis to determine how efficacious the treatment is, or to improve upon the treatment? Isn't the upside big enough that it's worth considering?
Two weeks ago, I had the pleasure of presenting at a program for appellate judges in the state of Wisconsin. The morning session of the seminar was presented by John J. Paris, Michael P. Walsh Professor of Bioethics at Boston College. He raised one issue of particular relevance to our present discussion: the view apparently promulgated by Jeremiah Wright and others in the ‘black church’ that the U.S. Government created AIDS as a genocidal weapon against black citizens. That the government of the United States would engage in such action seems preposterous to many others. After all, this isn't Hitler's Germany, nor is it Stalin's Soviet Union, nor is it Saddam's Iraq.
“Where does this idea come from?” asked the professor. While perhaps extreme and possibly even dangerous to assert without evidence, the idea isn't completely devoid of foundation. As it turns out, the U.S. Government's Public Health Service ran a significant medical experiment, the Tuskegee Syphilis Study, in which roughly 400 men were observed for decades so that researchers could learn how untreated syphilis progressed. All of the subjects were poor black men. They were not told their diagnosis. They were denied treatment that would otherwise have been available to them.
Those unfamiliar with this bit of history would do well to study it. It is a shocking and disturbing example of how badly wrong things can go when science is practiced in a moral vacuum.
I work in digital information technology. In that field we often think that we don't have to deal with such hard problems. We're not counseling people through things like end-of-life decisions. We're not trying to figure out whether medications are effective. There are a lot of things that we don't do.
That isn't to say that we don't have ethical problems before us. For example, when we're told to build some system, do we simply focus on getting it up and running, or do we think carefully about the kind of information it will manage and take the time to ensure that it is built safely, respecting the privacy of the people who use it (or who use the services of organizations that rely on it)? And what about the judgments we form about a technology, such as whether it is ready for use?
The fact is that the systems we build have side effects, some of which stay with us for a long time. Building an international computer network where the privacy of individual users isn't a consideration, for example, could go a long way toward enabling totalitarianism. Larry Lessig wrote about these issues in his book, Code and Other Laws of Cyberspace.
My point is that, as interesting as it is to talk about how to make things go, we sometimes need to step back and ask whether we're really solving the right problem and whether our work is likely to have the impact we expect it to. These are matters that, I think, confront all technologists. As history has sadly shown us, pursuing The Problem too narrowly (say, the mechanism of syphilis) can lead us astray from The Real Problem (human suffering), letting us make gains on short-term goals even while we forget where we're trying to get in the end.
Are technologists properly equipped to address these kinds of issues? Do we have frameworks for deciding when we should do things and when we shouldn't? How should we in technology address such issues? And, in the spirit of following my own advice, I'm obliged to add one more question: Should we?
Labels: ethics, technology