How To Maximize the Good From AI Augmenting Humanity – A Quick Take


Recently, the SETI Institute hosted a webinar titled 'AI: Augmenting Humanity for Good'. The discussion, presented by two experts in their field, was largely thoughtful. The speakers covered what one might have anticipated from the title, including a short video presentation on an iPhone app that uses AI to assist the blind, speaking aloud whatever the camera can see, including the surrounding environment, photos, and so on. They also covered near-horizon applications as well as issues related to AI bias and its mitigation.

What impressed me most was what was not covered, at least not explicitly. Perhaps the speakers purposely kept the topics light and generic. Here are three topics that, irrespective of audience, should feature in any public discourse on how to maximize AI's benefits while minimizing its risks and harms: 1) science literacy and understanding the product; 2) ethics in the design and application of AI; and 3) reshaping society and ourselves, the challenges this imposes, and the safeguards that can mitigate them.

Scientific and Technological Literacy

Our shared society is going through a troubled relationship with truth and facts. Alongside it runs a distrust of science that, at least for some, is a function of a distrust of any authority other than one tied to their identity and worldview. There is a growing gap between the current state of our science and our common understanding of it, a gap that, beyond rendering us scientifically and functionally illiterate, imperils our health, safety, well-being, and future security. As our science and technology continue to advance, it is essential that we have at least a rudimentary understanding of how they work, appreciate how and where they are applied and what they mean for us, and, where possible, engage in public dialogue about their applications. Nowhere is this more relevant than with AI.

As one of the discussants on the SETI panel noted, when we go to the grocery store to buy a loaf of bread, we might read its list of ingredients to decide whether we wish to consume it. Yet we consume technology products without questioning how they work or how they might affect our well-being. AI in particular is a tool that learns about our behavior not simply to respond to it, but in turn to shape it. The happy version is that AI shapes our behavior in ways that benefit us, such as making us more efficient and more productive. The nightmare version, which we are already witnessing, is AI shaping our behavior in destructive ways: distorting our perception of our body image and exacerbating depression, fuelling our outrage and leading us to commit violence and terror.

Earlier generations had a working knowledge of their technology: how it worked, how to repair it, even how to modify it. We need at least a general understanding of how our present-day technology works, especially AI-enabled technology, in order to anticipate challenges, identify and shield ourselves and our loved ones from unwanted effects, and recognize which applications of this technology, or abuses of it, we might choose to avoid.

Ethics and AI

We have mentioned before the Hippocratic Oath for AI developed by Oren Etzioni. As laudable as it is to pledge to do no harm, and as much as such an oath might encourage rectitude of conduct, it assumes that we can sufficiently understand and foresee what harm might arise from our AI applications. Our history of applying science to create new technologies suggests that such confidence is misplaced: we may not be able to control what we have created, or how others might use it, an age-old fear brilliantly captured in an enduring morality tale.

There are proven measures that can be put in place to reduce the likelihood of adverse consequences and abuse, such as oversight boards, peer review, and post-production surveillance, including independent impact studies under the review of regulators (e.g., the FDA for medical technologies, devices, therapeutics, and vaccines; the FTC for a wide array of business practices).

There is another measure, underused but essential for AI development and for minimizing adverse impact on specific demographics: diverse inclusion in product development and testing, as detailed in a recent Forbes piece. One example given by the SETI panel concerned the selfie camera: Google noted early on that certain photos were inappropriately oriented. Upon investigation, they found that the affected photos came from left-handed users, and that no left-handed users had been included in the product development and testing phases.

A more prominent example is AI-powered face recognition. We have mentioned before that dark-skinned users have reported unreliable performance, that others have warned of its potential for society-level abuse, and that some companies have abandoned its further development as a result. Microsoft found that one of its early AI products, a chatbot named 'Tay', could be repurposed in less than a day by a coordinated attack to spew racial hatred, raising important questions about the safety of such tools in the wild. Others have argued that AI could be co-opted to harm specific groups, proposing mitigation measures of as yet untested efficacy. All of this returns to our earlier point about the limits of an AI Hippocratic Oath on its own: it assumes we know at the outset which applications of the technology will lead to harm. We do not, and assuming that we do is dangerous and leaves us vulnerable.

The surest antidote is to include a representative group of potential users in both the development and testing of such technologies prior to rollout. We do this in medical research out of necessity, because it limits unforeseen adverse events at product release, and it is the proven standard for limiting post-production harm while maximizing the widest possible benefit. The same must occur in AI product development. Post-production, or post-market, surveillance of product performance is no less essential for identifying rare but important unexpected events, and should be adopted as an industry standard for AI as well.
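To make the idea concrete, here is a minimal sketch, in Python, of what disaggregated pre-rollout evaluation can look like: rather than reporting a single aggregate accuracy, results are broken out by user group so that disparities like the face-recognition failures above surface before release. The group labels, toy data, and the five-point gap threshold are illustrative assumptions, not any company's actual pipeline.

```python
# A minimal sketch of disaggregated evaluation: report accuracy per group,
# not just in aggregate, and flag groups that lag the best performer.
# Group names, data, and the max_gap threshold are illustrative assumptions.

from collections import defaultdict

def disaggregated_accuracy(records):
    """records: iterable of (group, predicted_label, true_label)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def flag_disparities(accuracy_by_group, max_gap=0.05):
    """Flag groups trailing the best group by more than max_gap."""
    best = max(accuracy_by_group.values())
    return {g: acc for g, acc in accuracy_by_group.items()
            if best - acc > max_gap}

# Toy evaluation results: (group, predicted, actual).
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0),
]

scores = disaggregated_accuracy(results)
print("Accuracy by group:", scores)
print("Flagged for review:", flag_disparities(scores))
```

Run against a genuinely representative evaluation set, a report like this turns "did we test across our user base?" into a checkable artifact rather than a good intention.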

Reshaping Society and Ourselves 

The most widely adopted technology is that which solves a problem or addresses an unmet need. This in turn reshapes our behavior, and thereby our culture and norms. It creates new problems to solve and new demands, which in turn give rise to yet newer technologies. Ad infinitum. Thus far, we have been relatively laissez-faire in our rollout and adoption of new digital technologies, often responding only after problems arise, as we have seen with social media.

Any technology that takes direct aim at modifying our behavior, as AI does, requires close monitoring, with protocols in place beforehand specifying how we will identify (i.e., select a reliable signal for) and respond to specific adverse consequences, and how we will define and measure a successful response. These discussions and decisions cannot be left to a technological oligarchy; they must include independent expert opinion and take place in the public domain. While we will need knowledgeable experts to identify the most reproducibly reliable indicators to monitor and to conduct impact analyses, the resulting reports must be available to the public and cannot be sequestered by the private sector, hidden from public view. Though specific industry products are proprietary, neither the user community nor a product's societal impacts are. Thus, reporting and monitoring systems, published findings, and discussions about remedial actions and alternatives must occur in the public domain and involve the public as essential stakeholders.
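As a sketch of what such a pre-existing protocol can look like in practice, the Python fragment below tracks a rolling adverse-event rate and escalates when a pre-registered threshold is breached, so that "is this a problem?" is answered by a rule agreed on in advance rather than by ad hoc judgment. The signal, window, and threshold are all illustrative assumptions, not established industry values.

```python
# A minimal sketch of a pre-registered monitoring protocol: the signal
# (adverse events per 10,000 sessions), the rolling window, and the alert
# threshold are fixed before launch. All numbers are illustrative assumptions.

from collections import deque

class AdverseEventMonitor:
    def __init__(self, threshold_per_10k=3.0, window=7):
        self.threshold = threshold_per_10k   # pre-registered alert level
        self.daily = deque(maxlen=window)    # rolling window of (events, sessions)

    def record_day(self, events, sessions):
        self.daily.append((events, sessions))

    def rolling_rate(self):
        events = sum(e for e, _ in self.daily)
        sessions = sum(s for _, s in self.daily)
        return 10_000 * events / sessions if sessions else 0.0

    def breached(self):
        # Require a full window of data before an alert can fire.
        return len(self.daily) == self.daily.maxlen and \
               self.rolling_rate() > self.threshold

monitor = AdverseEventMonitor()
for events, sessions in [(2, 9000), (1, 8800), (4, 9500), (3, 9100),
                         (5, 9300), (6, 9200), (7, 9400)]:
    monitor.record_day(events, sessions)

print(f"7-day rate per 10k sessions: {monitor.rolling_rate():.2f}")
print("Escalate per protocol:", monitor.breached())
```

The point of the design is not the arithmetic but the commitment: because the threshold and response are published before rollout, the public can verify that an escalation was triggered and handled as promised.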

Conclusion

AI is a technology designed specifically to interact with human behavior, ostensibly as an aid to human productivity, but it will inevitably alter our behavior in unexpected ways, as it already has, and can in turn be further altered by us. As free societies, we must acknowledge two truths: 1) the greater our understanding of AI's underlying technologies, the more likely we are to anticipate challenges; and 2) we cannot anticipate every challenge. Many of our interactions with AI, and their outcomes, cannot be predicted; they can result not only in abuse of the vulnerable but in threats to society at large, and they must be monitored. A free society has a right to expect safe and reliable products, AI included, backed by adequate regulatory controls. The very nature of AI requires that an inclusive workforce be engaged in its development, to minimize bias and thwart targeted abuse. And all of society should be engaged in assessing AI's various impacts, deciding which are beneficial and should go forward, and which are harmful and should be rejected. Only in this way can we maintain the requisite control over this evolving, behavior- and capacity-augmenting technology and ensure the widest possible benefit with the least harm.
