Humanistic AI

What Is Humanistic AI?

Artificial Intelligence (AI) is about making machines that can do intelligent things. Today, this powerful technology can rival human abilities on many fronts, with the potential to do amazing things as well as the risk of things not turning out as we wish. How can we guide the application of AI toward its greatest potential while avoiding unintended consequences? It is worth starting with an examination of why people are building and advancing this technology. We find that there are two basic philosophies on the purpose of AI: to create machine intelligence that automates what humans do (and therefore competes with humans), or to augment human intelligence, directly or in collaboration with people. Tom has devoted his work in AI to the latter path. He calls it Humanistic AI.

Humanistic AI: A Guiding Philosophy

There is no question that we have reached a new level of competence in the development of Artificial Intelligence. We read breathless celebrations of how AI programs have beaten the masters of humanity’s most challenging games, or how they can recognize a human face among billions of images. We are alarmed by warnings that AI will displace our jobs, or even threaten our existence as it reaches superhuman levels of cognition. Yet we hope that it will be used towards more useful and benevolent ends, like driving our cars and curing diseases.

Both of these futures are possible. Are they both inevitable? That’s up to us, and how we frame the problem is vital. Must we live in a world where machines are set against humans at every turn, surveilling us, shaping our beliefs, manipulating our attention, and competing with us for jobs, all driven by a winner-take-all, zero-sum race to superintelligence? What can we do to ensure that applications of AI that benefit humanity receive the necessary resources, priority, and policy support?

What can we do today to help guide a future that supports the common good?

There is no simple answer to these questions. But there is a simple way to begin. Start from a philosophy that guides what forms of AI should be developed and why, and use this guidance to navigate the many ethical questions that will inevitably arise.

Throughout the history of AI, there have been two competing philosophies on the purpose of AI: to create machine intelligence (for its own sake) and to augment human intelligence with AI. Both are good reasons to do research, and great progress has been made in both camps. Today, as progress is accelerating and applications of AI are reaching global impact, the consequences of these two philosophies are worth considering.

Automating Intelligence

The first camp views the purpose of AI as the creation of machine intelligence. In this view, the goal is to automate cognitive tasks that typically require a human, such as speech and language understanding, visual interpretation, dexterous manipulation, and planning and execution in adversarial environments. The metric of success for machine intelligence is task competence, particularly in comparison with how well humans do on the same tasks. We celebrate when AI programs pass standard tests of speech recognition, language comprehension, and image recognition. We cheer the builders of game-playing AIs whose creations defeat the reigning masters of chess, Go, and military strategy games. We are astonished that machines can now learn to control robotic limbs with agility approaching that of muscle and bone.


Augmenting Human Intelligence

The other camp is Humanistic AI. The role of AI in this philosophy is to augment human intelligence, either directly or by collaborating with humans to perform a task. (The science writer John Markoff calls this IA, or Intelligence Augmentation.) This camp needs the same basic research and fundamental cognitive skills as the machine intelligence effort: speech, language, vision, robotic control, planning, and decision making. The difference lies in how these skills are applied and evaluated. Humanistic AI consciously applies machine intelligence to shore up human limitations or extend our capabilities, rather than to control or compete with us.

For example, we can use computer vision as a biometric to help secure our machines, or we can use computer vision to surveil a population. The metrics of success are different: how accurately does my phone recognize me, versus how accurately does that camera on the street corner recognize anyone? Similarly, speech recognition can be used to help us send texts to friends and search our audio content, or it can be used to eavesdrop on enemies and monitor suspects.

Tom promotes the idea of Humanistic AI because the choices we make as inventors, investors, entrepreneurs, and product designers shape the consequences of our AI applications. In his 2017 mainstage TED talk, he outlined the basic distinction between the machine intelligence and Humanistic AI philosophies, and showed examples of Humanistic AI in medicine, design, and cognitive enhancement. He explained how AI that augments medical experts can create a superhuman capability to diagnose disease, achieving better results than machines or humans working on their own.

Similarly, in the field of design automation, he described how the task of creating a design can be distributed between AI programs that do exhaustive computations over many potential solutions and people who evaluate those designs for human needs. The result of the human-machine collaboration is a superior design that neither machine nor human could create on their own. In the realm of cognitive enhancement, he revealed a vision for an augmented personal memory that is within the reach of today’s AI and consumer electronics.

Hacking the Objective Function

In his 2019 keynote at the ai.x conference, Tom further explored the theory of Humanistic AI and showed how it might be applied to help guide decisions that have serious consequences in the world. He explained how AI is at the core of the disruption of mental health and society resulting from social media platforms that optimize for human attention over human welfare.

As an advocate for the Center for Humane Technology, the subject of the award-winning documentary The Social Dilemma, Tom wants to help change the industry from within. He explains the problem as a case of unintended consequences of AI, caused by misguided optimization. In particular, he calls attention to a common component of every AI model and project: the objective function, which tells the AI what goal its recommendations should achieve.

In the case of the big social media platforms, executives set company objectives, which are passed down through management and compensation incentives. AI scientists then build these objectives into their AI models’ objective functions. The AI systems, in turn, generate the recommendations that websites and recommendation engines serve in order to achieve objectives such as time on site and viral growth. Massive data on how users respond is fed back into the AI training loop, yielding new ways to optimize the sites and shape the behavior of users. This AI-fueled big-data feedback loop has contributed to unprecedented levels of media addiction and the breakdown of a societal consensus on how to make sense of the world from published information.
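In pseudocode terms, the engagement-driven pattern Tom describes might look something like the minimal sketch below. The metric names, weights, and functions are hypothetical illustrations, not any platform’s actual code.

```python
# Hypothetical sketch of an engagement-only objective function for a
# recommender system. Metric names and weights are illustrative
# assumptions, not real platform code.

ENGAGEMENT_WEIGHTS = {"watch_time": 1.0, "shares": 2.0, "clicks": 0.5}

def engagement_objective(predicted_metrics: dict) -> float:
    """Score a candidate item purely on predicted engagement signals,
    such as time on site (watch_time) and viral growth (shares)."""
    return sum(weight * predicted_metrics[name]
               for name, weight in ENGAGEMENT_WEIGHTS.items())

def rank(candidates: list) -> list:
    """Rank candidate items by the engagement objective alone. How users
    respond to what gets shown is then logged and fed back into model
    training, closing the feedback loop described above."""
    return sorted(candidates, key=engagement_objective, reverse=True)
```

Nothing in this objective represents the user’s welfare; the optimizer is rewarded only for capturing attention, which is exactly the misguided optimization Tom points to.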

Tom proposes that the AI used in these services is not inherently addictive or crazy-making, and that it can be reformed to do more good than harm. The key is to rewrite the objective function to include human benefit as an objective. It’s not easy, but it’s worth doing. In the ai.x keynote, and in a shorter, more recent talk at TechfestNW, Tom shows examples of objective functions that operationalize Humanistic AI: systems that are literally programmed for human benefit.
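As one concrete illustration of what rewriting the objective function could mean, the sketch below extends the hypothetical engagement objective above with a human-benefit term. The well_being signal (for example, a user-reported "time well spent" rating) and its weight are assumptions for illustration; defining and honestly measuring such a signal is the hard part of the reform.

```python
# Hypothetical sketch of a reformed objective that trades engagement
# off against predicted human benefit. Extends the engagement sketch
# above; the well_being signal and benefit_weight are assumptions.

def humanistic_objective(predicted_metrics: dict,
                         benefit_weight: float = 3.0) -> float:
    """Score a candidate item on engagement plus predicted benefit to
    the user, so the optimizer can no longer win on raw attention."""
    engagement = sum(weight * predicted_metrics[name]
                     for name, weight in ENGAGEMENT_WEIGHTS.items())
    return engagement + benefit_weight * predicted_metrics["well_being"]
```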

The Promise of Humanistic AI

In his capacity as impact advisor, Tom works with companies and organizations that exemplify the philosophy and methodology of Humanistic AI. The basic principle is to use AI in products that augment human capacity or work with humans to achieve their goals. For example, in his work with Cognixion, Tom consults on ways to use AI to help people with severe speech impairments by interpreting EEG signals on a wearable communication device. In his work with TRU LUV, he is advising the company on how to create AI characters in game-like environments that interact through self-care and nurturing, rather than competition and violence.

In the fields of mental and physical health care, Tom advises companies like Mindstrong, which uses AI to interpret mobile phone activities that predict mental health problems, rather than induce them; and Migraine.ai, which uses AI to predict the onset of migraines so they can be addressed more effectively. The products of these companies show how Humanistic AI applications can be instrumental in our ability to understand ourselves and our environment. The difference is, they use this technology to improve our human self-awareness rather than to surveil or control us.

As our society contends with the ongoing challenges of pandemics, mental health crises, and climate change, it is vital that we apply technology in ways that lead to the beneficial consequences we intend while avoiding unnecessary harms and reducing collateral damage.

It has never been more important to get this right. The choices we make now about how and why we apply AI technology to so many aspects of our lives and society will mean the difference between AI that augments and cooperates with us and AI that harms and competes with us. Building the former is the goal of Humanistic AI.