Blog: A Deontology for the Field of Neurotechnology

By Emily Einhorn
January 24, 2020

The Hippocratic Oath has existed for millennia. While some aspects of this ethical deontology have shifted and evolved, the Hippocratic Oath remains a long-standing tradition that governs physicians. Medicine has long been viewed as a field fraught with societal responsibility and ethical complexity. But today, new fields are emerging that hold equal, if not greater, power to affect people's bodies and lives. We now live in a world of brain-computer interfaces (BCIs) and artificial intelligence (AI), where devices are built to connect neural pathways directly to digital networks, and where algorithms have become advanced enough to decode brain activity data and glean multilevel insights into everything from an individual's personality to their intentions and thoughts. Yet there is no centralized code of ethics governing how technologists approach the development and application of technologies that have the power to fundamentally alter human characteristics and experiences. Perhaps these fields warrant a new promise for social good: a Technocratic Oath to set the intention for a morally aligned field of innovators shaping our future.

In discussions of the Hippocratic Oath, the term “ethical governance” is used loosely. The Hippocratic Oath is not legally actionable, as the word governance might imply; other instruments, including medical malpractice law and the Consumer Protection Act, serve as legally enforceable frameworks for ethical liability in medicine. Given the oath's lack of enforceability, critics have expressed skepticism as to whether it really influences doctors' behavior, and there is debate over whether it remains relevant today, given the evolution of the medical field. Critics assert that it is unrealistic to place individual accountability on doctors to do right by their patients when doctors' actions are so strongly dictated by larger entities, like the insurance and pharmaceutical industries. However, the purpose of the Hippocratic Oath is not to directly penalize medical wrongdoing. It serves a more philosophical purpose. The Hippocratic Oath maintains legitimacy because of its historic tradition, its global ubiquity, and the sense of integrity it fosters among physicians. It promotes an ethos of moral responsibility and societal accountability, which is crucial in a field with such direct impact on human lives.

The fields of neurotechnology and AI are in critical need of this kind of philosophical direction. BCIs and machine learning technologies will have major impacts on the human body and mind in the coming decades. Algorithms that may be biased or that draw invasive insights from brain activity data; BCI chips that could grant augmented mental functioning only to those who can afford them; externally wearable BCIs that gather shareable brain activity data from the populace: these are coming realities that society will have to grapple with.

In the case of doctors, the immense power that physicians hold over their patients' bodies and futures is evident. It is logical, therefore, that a portion of a doctor's training addresses how to handle this power ethically. It is becoming equally evident that a similar power exists in the fields of computer science, engineering, and neuroscience, where technologies are being developed to bring interpretations of humanity's most intimate and uncontrollable neural activity into view. Yet no such ethical norm has been set and centrally adopted within neurotechnology and its related fields. To this day, research in these fields is often divorced from discussion of how the resulting technologies will be applied to human bodies and societies. It is easy for developers of neurotechnology and AI to remain myopically focused on a small piece of the puzzle without panning out to reflect on the broader direction of innovation.

Skeptics of the Hippocratic Oath could doubtless apply similar arguments to the idea of a Technocratic Oath. It is true that the notion is idealistic, and it is important to note emphatically that a Technocratic Oath must be accompanied by extensive legislation providing legal definitions for brain activity data privacy, algorithm standards, and mental augmentation, among other critical issues. Such laws will undoubtedly take many years to develop. In the meantime, why not set a collective intention? If a Technocratic Oath makes some technologists think more critically about innovations that would otherwise receive no scrutiny, why not clarify the responsibilities of innovators? It may not work universally, but such a declaration could inspire engineers, entrepreneurs, scientists, physicians, and computer scientists to remain tethered to humanist philosophy and to ensure that they leave behind legacies that benefit human society. Furthermore, a centrally adopted framework could inform additional, legally enforceable policy on neurotechnology and AI ethics. With stakes this high, why not attempt every measure to highlight accountability among the technologists who are influencing the status of human existence?