We Need To Develop Even The Most Dangerous Technology


"A curious thing happens when you decide to eliminate skepticism as an option, even if only for a few minutes: You give improbabilities an opportunity to transform into possibilities. You give unconventional ideas a chance to bloom a little bit."  Youngme Moon, Different


The quote above is as good a place as any to start this. As has always happened throughout human history — though perhaps more so today — we are constantly exposed to new technological developments. Our perspective on them taints our gut reactions, and it's often easy to forget that all of it is simply part of the iterative design process we as humanity share. Nothing we have today – not cell phones, not cars, not even a can opener – was created in a vacuum. Everything builds on that which came before it, and this is an easy fact to forget — and an important one to remember.

So often these days the first reactions we hear to something new are "it's bad," or "it'll never succeed," or "it's dangerous and should be stopped." The first two are mostly harmless so long as the inventors ignore the commentary and keep inventing. The third, though, is quite dangerous in and of itself. We're all guilty of this to some degree, as it's a natural reaction, especially in today's (understandably) security-conscious world.

Take Google Glass, for example. Sure, you may think it's silly, and you may not find any use for it, but in and of itself Glass is not dangerous. Some people (with Google possibly counted amongst them) might use Google Glass for purposes that are dangerous, but that doesn't mean the technology should stop being developed. The tech is cool! Wearing something on your face that records what you see and reacts to what you say? Twenty years ago (heck, maybe 10 years ago) this would have been seen as magic. Even today it might as well be magic.

Yes, Google Glass might have limited use for some, but if nothing else it's a step on our technological ladder. And it's a necessary step for the next things to come along, whatever those might be 5 or 10 or 50 years from now. We certainly don't want to stop iterating and developing.

Can you imagine what it was like when someone started putting airplanes in the sky? I guarantee you there was a vocal group opposing it, saying that planes would fall out of the sky, that people would use them to spy on us, and that they should be stopped immediately.

And can you imagine what our world would be like today if we had followed their wishes? What about cars? There was fierce resistance to those, and look how that turned out.

We need to learn to separate social problems from technological development. The latter must be allowed to continue unabated so long as the development itself isn't infringing on people's rights or privacy. If the result might infringe on those rights, that's a social problem, and should be dealt with in a different way.

Cell phones are perhaps the best example we have right now. We still haven't figured out all the social etiquette that goes along with all of us having cell phones (and computers!) in our pockets. And we need to continue working as a society to figure that out. What happens when we're in a face-to-face conversation with someone and a text message comes in? Do we look? Don't we look? We honestly haven't sorted that out yet, and in certain circumstances the path you choose there could have a profound impact. But that doesn't mean the cell phone should never have been invented. And it certainly doesn't mean we should stop inventing more cell phones.

We just need to evolve our etiquette to sort these things out. And if we decide that all of us having cell phones is a bad, unnecessary thing, fine. But at least we will have let the tech develop and stand as a foundation for that which comes next.

Feel free to have your gut reactions. We can't stop them, so we might as well accept them! But we can temper them and guide them so that we as a society don't become paralyzed by fear. Remember, security and safety exist on a continuum, ranging from living in a brick house and never going outside to putting all your information and cash in a pile on the street while sleeping in a box 50 yards away. Decide where you want to be and live there, but don't let that limit where the next generation can choose to go.



Comments

Scott B in DC

The problem is not the “dangerous technologies” or the people who will use the technologies dangerously, but the media punditry that calls something like Google Glass the best thing since sliced bread. The tech media has a tendency to go overboard, slobbering all over itself after being artificially targeted as part of a hype machine.

Does anyone really think that there was a true lottery to “buy” Google Glass? It is clear that Google hand-picked who was going to get Glass to generate the most hype. Is it really a coincidence that the top tech bloggers, websites, podcasters, the folks at TWiT, and those with a frequent association with any of these media outlets “won” the lottery? It was a well-crafted ruse to generate as much hype as possible.

The tech media got played like a cheap fiddle and in the process opened up Pandora’s box about security and privacy. Good. It’s about time security and privacy were thought about up front, where they can be more easily addressed, and not down the road when they are more difficult to fix.

Like Dave said in his article, nearly every technology has the potential to be used for both good and bad. What matters is how we as a society use the technology and how we deal with the abuse of the technology. Don’t consider technology bad. Technology is a tool, like a hammer. A hammer can be used to build homes or to bash someone over the head. Thankfully, a lot of people do the former, and the latter is a crime. Let’s hope those inventing the next new thing have the foresight to think about the potential risks and guide us to the appropriate mitigations. Then let’s go invent something cool!

geoduck

“Science does not have a moral dimension. It is like a knife. If you give it to a surgeon or a murderer, each will use it differently.” – Wernher von Braun

iJack

I’m wondering if the world today might not be a more pleasant place if airplanes hadn’t come along. Life might be a little slower. Warfare might be a little less likely. And I say that as a pilot who loves the magic of flying.

aardman

It’s not quite fair to Google Glass critics when you say, in effect, what if we listened to all those people who claimed that airplanes were too dangerous to be put into practical use.  The fact is there is a safe way to deploy airplanes and an unsafe way.  We learned pretty quickly what those unsafe ways are: the biggest one was the idea that airplanes (and helicopters) be deployed like automobiles—privately owned with at least one in every garage and facing minimally regulated access to the airways.  I think there’s no need to explain why that idea just wasn’t going to fly.

The same sort of use refinement will happen with Google Glass.  GG critics aren’t saying the device is completely impractical or unsafe.  What is being said is that the mode of use as most GG advocates envisage right now (i.e. a 24/7 device) is unworkable mainly because it would be socially unacceptable.  By all means, continue developing such devices, but don’t for one minute think that one entity, especially a self-interested profit seeker, gets to dictate how it can or should be used.

wab95

Dave:

Very nicely articulated and argued.

I want, however, to disentangle two issues. The first is hobbling the advancement of science and technology in order to placate current social norms and cultural assumptions, which is generally not in the best and broader interests of humanity. The second, more relevant and imperative, is balancing our scientific and technological progress with putting in place the implements and controls that will safeguard society from the excesses and abuses of that same advancement, particularly for society’s most vulnerable and weakest members, which generally is in the best and broader interests of humanity.

When we survey history and the introduction of new technology, we observe that, in the main, the novel technology or product was introduced ahead of societal consensus, rules, or laws regulating the limits of its use, with varying degrees of harm and cost to human welfare. Many of these technologies have had what we call in medicine a wide therapeutic index, meaning that the exposure range for beneficial effects is sufficiently broad before entering the range of toxicity or harm, such that for most exposures the net effect is beneficial (e.g. electrical energy); but most technologies have had some degree of cost that has compelled us post facto to introduce controls and laws (e.g. automobiles) or international conventions (e.g. nuclear fissile technologies) regulating their use. We have seldom taken stock of the toll in human welfare in the interim between the introduction of a technology and the implementation of regulations to control its use, although with some of these the litany of annual mortality statistics (e.g. auto-related deaths) and historical events (e.g. Hiroshima) is sufficient witness on behalf of such regulation.

I argue that this pattern of technological introduction and post-introduction regulatory intervention is not a fixed phenomenon with a fixed cost-benefit ratio, but one in dynamic flux; indeed, a dependent variable of the independent but changing variable of our interconnectedness as a race. When technologies, however potentially harmful when used as intended, let alone when abused, have been introduced into a social milieu of minimal interconnectedness, the potential harm to others has been limited, but it has increased as a function of our connectedness. Consider the introduction of the automobile into a rural area with a population density of 10 per km² vs a city with a population density of 4000 per km².

Another variable that affects this is our surveillance capacity to identify harm associated with a technology or product. A case in point is tobacco. It took time for humanity to associate lung, oral and throat cancer, emphysema, COPD, heart disease, and male infertility with tobacco use.

Historically, both our increasing interconnectedness and our technological capacity to identify and link specific product exposures to adverse outcomes have trailed the introduction of specific technologies. More importantly, we have seldom, with rare exceptions (e.g. the scientific community’s written appeal against research into nuclear fission as a weapon), thought about the need to regulate a technology prior to its introduction. In many such cases, this has been influenced by our complete cultural naïveté about the technology and not knowing what to anticipate. Again, tobacco is a good example of a product that, even when used as intended, harms not only the user but also others, which was not anticipated; the greater the exposure of non-users, the greater the imperative to control it.

As with tobacco, when products and technologies used as intended have untoward consequences for both the user and the non-using exposed, regulations are required to protect both the user and, importantly, those who have chosen not to use the product.

As we are no longer the computer-technological novices of yore, but a global culture inured to computer technology’s social ramifications, I argue that initiating a discussion on how a new technology like Google Glass, which can have many useful applications (e.g. medicine and science, police investigation and surveillance, education and the arts) but can also compromise the privacy and security of user and non-user alike, not to mention society at large (e.g. intellectual property and its economic impact), and even putting regulations and controls on its usage in place a priori, is not technological Luddism but social and indeed moral progress. The progress of technology is robust, but the human is frail and vulnerable, and it is our attempt, however imperfect, to proactively engage in the responsible task of protecting the human from unforeseen harm, without unduly limiting the pace of that progress, that is laudable and the best means to ensure that the benefit/cost ratio of novel technology will, on balance, benefit the human.
