How technology can save us: perspectives from outside of the bubble

As we approach the third decade of this century, it is time for those of us who seek to advance—and profit from advancing—our most powerful technologies to begin to imagine and develop them with an element of compassion.
We must achieve innovation while preserving responsibility towards the greater good. This will only happen if we include many and varied disciplines, concepts, and cultures in a discussion of how to best move forward.
This conversation will be among the most important of our time. It is clear that this work will require our greatest dedication and effort. There will be no simple answers.
In September 2018 I (Ferose) opened the inaugural Responsible AI/DI Summit with this core truth: If the future of humanity is decided in our absence, we will not be exempted from the consequences[1].
Powerful technologies, massive consequences
Artificial intelligence (AI) and related technologies are among the most potent—and potentially worrisome—of our newest tools. As we unleash the power of AI, we must be realistic about its possible consequences, many of which are not immediately obvious.
The very good news is that we have myriad sources of insight and inspiration to help us find a path forward. We can draw on the teachings of spiritual and business leaders, leadership advisors, and—as you’ll read below—even a comedian. And we can encourage thinking not just in terms of More (new features, higher performance, and so on) but also in terms of Better: how technology can enable outcomes that benefit the greatest number of people.
As a technology leader, I believe it is imperative to consider these multiple points of view, which go beyond the traditional ones. Over the last year or so, I have conducted a bit of research on this topic, and I have reached out to a few of my friends to gain their guidance and learn from their diverse perspectives.
This article presents some of my thinking on the topic of responsibility and technology and shares a few samples of the amazing insights those friends have so generously shared with me.
Jobs, technology, and the economy
In its new anthology The People Centered Economy[3], the i4j (Innovation for Jobs) foundation[2] presents intriguing visions of how innovation can disrupt unemployment and create meaningful work for everyone.
As technologists, many of us accept that an innovation-driven economy is the best path to sustained growth. As my friends, i4j co-founders David Nordfors and Vint Cerf, have eloquently explained, however, profit maximization in a competitive market economy requires ever-lower costs. Labor is chief among those costs, and one way to quickly decrease labor costs is to progressively replace human jobs with machine work.
If, as a likely consequence, humans are relegated to ever-lower-paying work, many people will soon lose the ability to purchase the products and services our industries offer. Without that spending, sales and profits will fall, ultimately reducing growth and cancelling out any short-term benefit of lowering costs.
This is a classic “lobster claw” pattern of cause-and-effect, in which optimizing a short-term goal leads to a long-term net negative impact:

The “Lobster Claw” causal pattern. Source: Quantellia.
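To make this dynamic concrete, here is a toy simulation, in Python, of the feedback loop Nordfors and Cerf describe. Every parameter here (the wage-cut rate, the spending multiplier, the starting wage bill) is an illustrative assumption of mine, not a figure from i4j:

```python
# Toy model of the "lobster claw": cutting labor costs lifts profit
# immediately, because this period's sales still reflect wages already
# paid, but each cut shrinks the demand that future revenue depends on.
# All parameters are illustrative assumptions, not i4j data.

CUT = 0.10          # assumed: firms cut labor costs 10% per period
MULTIPLIER = 1.25   # assumed: each unit of wages circulates into 1.25 units of sales

wages = 80.0                            # aggregate labor cost this period
baseline = (MULTIPLIER - 1.0) * wages   # steady-state profit if nothing is cut

for period in range(1, 9):
    demand = MULTIPLIER * wages   # sales reflect wages already paid out
    wages *= 1.0 - CUT            # the short-term optimization: cut labor
    profit = demand - wages       # margin jumps at first, then erodes
    note = "" if profit >= baseline else "  <- below the no-cut baseline"
    print(f"period {period}: profit = {profit:5.1f}{note}")
```

Under these assumptions, profit jumps in the first period and then decays until, by the fifth period, it falls below what standing still would have earned: a short-term gain that cancels itself out, exactly the claw's shape.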
To avoid this paradox, Nordfors, Cerf, and their many partners in i4j suggest that we focus instead on developing and applying advanced technology to create fulfilling work, thereby helping to offset this job loss. The i4j foundation is working to develop and test “a framework of ideas for a people-centered economy, where companies compete to raise the value of people as much as for lowering costs. The goal is a sustainable innovation economy where people do meaningful work with people they like, creating value for people they do not know, providing for people they love.”
The collateral damage of using technology to replace human labor is not limited to issues of earning and spending, however. Climate change author Fred Pearce calculates, for example, that data centers (the “factories of the digital age”) consume about two percent of the world’s electricity, emit CO2 in amounts comparable to the airline industry, and cost around $20 billion a year worldwide to build[4].
The many contributors to i4j’s work agree that, if technology is to bring the maximum benefit to the maximum number of people, we must begin to define success and wealth in ways that extend far beyond financial metrics. I have explored this idea of trusteeship in other articles and speeches, including in the foundational document of the Responsible AI/DI Summit[5].
Where lies responsibility?
Recent events, from the digital infiltration of democratic elections and plebiscites to increasing government control over information and incitements to violence, illustrate both the power and peril of AI and other advanced technologies.
Many are aware of the successful efforts by the Russian government to influence elections and referenda in the United States, the United Kingdom, Ukraine, and beyond. Perhaps less widely known, however, is the Indian national government’s struggle to deal with the human consequences of rumors and false news spread via the Facebook-owned messaging service WhatsApp. WhatsApp-fueled mob violence has taken dozens of lives in recent incidents.[6]
I recently had the chance to sit down with Ravi Shankar Prasad, the Indian Minister of Technology and Law, to discuss some of these unintended negative consequences of technology. Prasad is recognized as a global leader in advancing digital governance. He told me that, while the creator of the encrypted messaging platform that allowed the fake news to spread did not foresee such a consequence, “the medium used for such propagation cannot evade responsibility.”
Who is responsible if viral memes cause harm? The originator? The forwarder? Or the individual who reacts to the message’s false narrative and takes physical action in the real world? With no easily available information about the origin and propagation of these messages, consumers cannot verify that content is factual or placed in a factual context.
Indeed, it is arguable that the underlying technologies are designed and managed precisely to disguise—we might say “ghost”—the origin and intent of messages, in order to feed the all-important “engagement” engines that power profits in these systems.
Shining a bright light
For this reason, we must embrace the importance of understanding long chains of cause-and-effect, and begin to make the effects of our actions visible. This “ghost in the machine”, in WhatsApp and elsewhere, must be exposed if we are to live in and hand down a world that is more peaceful and productive than the one we now inhabit.
Either we use the machine itself to find and stop the ghost, using AI, Decision Analysis (DA)[7], or new unifying disciplines like Decision Intelligence (DI)[8], or we design and deploy enough obstacles within platforms and networks to make propagation of the ghost harder. The latter seems less practical. In either case, we must ensure that our shared values are reflected in and supported by our technologies, and that those (or other, less desirable) underlying values are not hidden inside complexity.
Returning to WhatsApp, what is the proper role of its owner, Facebook, in preventing such horrible unintended consequences?
“We’re horrified by the violence in India, and we’ve announced a number of different product changes to help address these issues,” a WhatsApp spokesperson said. “It’s a challenge which requires an action by civil society, government and tech companies.” The company’s initial response was to restrict to 20 the number of contacts to whom any message can be forwarded. When the Indian government pushed for tighter limits, WhatsApp dropped the cap to just five. The company has also made changes intended to clearly mark messages that have been forwarded, so that recipients don’t automatically assume they come from a familiar, trusted source and react accordingly.
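A rough back-of-the-envelope model shows why the size of the cap matters. If some fraction of recipients forwards a message onward, each hop multiplies its reach by roughly the forward rate times the cap; the forward rate and hop count below are assumptions I have chosen for illustration, not WhatsApp data:

```python
# Sketch of viral spread under a forwarding cap: each recipient forwards
# with some probability, to at most `cap` contacts, so every hop scales
# reach by about FORWARD_RATE * cap. The rate and hop count are
# illustrative assumptions, not WhatsApp data.

FORWARD_RATE = 0.25  # assumed: one in four recipients forwards the message

def expected_reach(cap: int, hops: int) -> float:
    """Expected number of recipients after `hops` rounds of forwarding."""
    branching = FORWARD_RATE * cap  # average new forwards per recipient
    total, wave = 0.0, 1.0
    for _ in range(hops):
        wave *= branching
        total += wave
    return total

for cap in (20, 5):  # the old and new limits described above
    print(f"cap={cap:2d}: ~{expected_reach(cap, hops=6):,.0f} recipients after 6 hops")
```

With these assumptions, six hops reach roughly 19,500 people under the old cap of 20 but only about 14 under the cap of five; shrinking the cap collapses the average branching factor from five new recipients per person to barely more than one, the difference between explosive and negligible spread.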
Responses like these are steps in the right direction, but they can be imperfect. According to The Verge[9], for instance, Facebook moderators are sometimes encouraged to contravene the company’s official policies in order to keep popular accounts active. If true, this story illustrates the conflicting goals that companies like Facebook face as they seek to satisfy both commercial and societal aims.
Moving outside the box
My friend Navi Radjou, innovation and leadership advisor and recipient of the Thinkers50 Innovation Award, says that “Disruption finds a new solution at the same level of consciousness, thereby displacing the problem rather than solving the problem.” In contrast, we should seek solutions at new levels of thinking. Frans Johansson observes that groundbreaking ideas and innovations happen at the intersection of multiple disciplines, concepts, and cultures.[10] So we should search outside the current paradigm for solutions, and accept that complete overhauls of our economic, technological, and governance systems may be necessary to get where we need to go.
Philanthropic action at a distance
This problem of technical expertise being isolated from the impact of its decisions is not limited to purely commercial endeavors. Economist William Easterly argues that “Technical experts in development sometimes concede some rights and deny others, which disrespects rights for what they are: unalienable.”[11] Projects designed by organizations like the World Bank often focus on highly technical solutions that ignore the well-being of the very people they are intended to help. It is not difficult to draw a line from projects that destroy villages and farms in the name of reforestation to dystopian, Terminator-like AI stories.
My friend and New York Times columnist Anand Giridharadas sees a similar pattern in philanthropy. In his book Winners Take All: The Elite Charade of Changing the World[12], Giridharadas writes that philanthropic endeavors are a drop in the bucket of global spending on aid and development, but “this drop is upholding the problem.” He argues that philanthropy preserves systems that are credited with benefits they do not actually deliver to the people they purportedly serve.
Anand goes on to explain that “Tending to the public welfare is not an efficiency problem”[13] and that “Corporations get more ‘juice from the squeeze’ [than do governments] because corporations don’t solve very complicated problems. God bless ’em, but making Pepsi or manufacturing a car seat is an easy problem. Governing 350 million people is an extraordinary thing that we have discredited in this age of markets.”
And he is right. But corporations are building systems so powerful that they directly affect the lives of many more than 350 million people. Can we leave decisions about how those systems operate to hugely wealthy and (hopefully) well-intentioned entities working from a perspective of corporate-style “efficiency”?
We run the risk of trapping citizens in a pincer—experts who decide what is right based on their technology model on the one hand, and an affluent few who “use their wealth and influence to preserve systems that concentrate wealth at the top at the expense of societal progress” on the other.
Here, once again, understanding the difference between short- and long-term outcomes, and looking to new modes of thinking, is essential. If technology is just another means to generate wealth at the expense of human well-being, then this constant skimming of wealth will reduce the beauty of life to a game of economic indicators. Isn’t that a profound loss we must prevent at all costs?
We should instead work to enable money and technology to become two hands that uplift humanity by preserving the sanctity of life, liberty, and the pursuit of happiness.
Shedding some light on this aspiration is Rev. Heng Sure, a Buddhist monk and lecturer at Berkeley whom I met earlier this year. Reflecting on his meditative pilgrimage of more than two years, bowing and walking 800 miles up the coast of California in silence, Sure explains that change must begin from within: “The power is ours. Evil and good, selfishness or compassion all come from the mind first. If more people care for others, the world will spontaneously grow brighter.”[14]
Technology is a tool. It is the wisdom and compassion of the hand that wields the tool that delivers the intent of the user. The moment that we, as technology’s masters, decide monetizing life is all that matters, we have lost the battle.
Perhaps the most sobering observation along these lines came to me, ironically, from comedian and Monty Python co-founder John Cleese. “In order to know how good you are at something,” he says, “requires exactly the same skills as it does to be good at that thing in the first place…which means…if you are absolutely no good at something at all, then you lack exactly the skills you need to know that you are absolutely no good at it.”[15]
The long view
Reflecting on the common thread unifying the global thought leaders I have had the pleasure to meet in recent months, I come to a few early questions and conclusions. In a world where the gap between intent and action has shrunk to a single click, how can we counteract this reactivity and move toward the more purposeful and meaningful? Can we encourage thinking in time spans that stretch across generations? Can technology act, indeed, as a scout as we strive to amplify our actions and more fully realize human potential across longer spans of time and space? I believe we can.
Just as AI-powered social networking and AI-enabled social manipulation are at the center of many of the challenges facing governments and societies across the globe, I believe that they are also part of the solution that will provide us with the foresight needed to responsibly address such challenges.
New AI and DI tools will augment, expand, and apply our human intelligence, just as the industrial revolution magnified muscle power. Combining the most advanced tools of the digital revolution with humans’ immense intelligence and empathy will enable “wisdom enhancement.” What is required of us is a shift in our ability and willingness to frame and address our most pressing modern issues for the benefit of all. And the first step is to return to fundamentals: human beings should be human, and mankind must be kind.
References:
[1] Yuval Noah Harari, 21 Lessons for the 21st Century, https://www.ynharari.com/book/21-lessons/
[3] David Nordfors and Vint Cerf, The People Centered Economy, 2018, https://amzn.to/2PwLqlq
[4] https://e360.yale.edu/features/energy-hogs-can-huge-data-centers-be-made-more-efficient
[5] https://www.responsibleaidi.org/2018/08/30/shifting-at-the-edge-gandhi-ai-and-beyond/
[6] How WhatsApp Leads Mobs to Murder in India. https://www.nytimes.com/interactive/2018/07/18/technology/whatsapp-india-killings.html
[7] https://en.wikipedia.org/wiki/Decision_analysis
[8] https://en.wikipedia.org/wiki/Decision_Intelligence
[9] https://www.theverge.com/2018/7/17/17582152/facebook-channel-4-undercover-investigation-content-moderation
[10] Frans Johansson, The Medici Effect (HBR Press, 2017), http://bit.ly/TheMediciEffect
[11] The New Tyranny: How development experts have empowered dictators, and helped trap millions and millions in poverty https://foreignpolicy.com/2014/03/10/the-new-tyranny/
[12] Anand Giridharadas, Winners Take All: The Elite Charade of Changing the World (Knopf, 2018), https://amzn.to/2Ohbyw7
[13] https://portside.org/2018-09-04/why-philanthropy-bad-democracy
[14] http://www.awakin.org/local/sv/?pg=speaker&who=hengsure
[15] https://www.youtube.com/watch?v=wvVPdyYeaQU&feature=youtu.be