New Article: A "Hippocratic Oath" for Neurotechnology Developers


By Mayte Rius, Barcelona, June 07, 2021

The article discusses the Neurorights Initiative's proposal for a "Hippocratic Oath" for neurotechnology developers, a so-called "Technocratic Oath".

Facebook and IBM are collaborating on a project that asks those who work in neurotechnology to commit to ethical principles, just as doctors do in their practice.

In the same way that doctors must commit to and uphold the Hippocratic Oath in their practice, the entrepreneurs, researchers, computer scientists and other professionals who work with neurotechnology and artificial intelligence must follow the Technocratic Oath.

That is what the Neurorights Initiative, a project at Columbia University led by the Spanish neurobiologist Rafael Yuste, proposes in order to protect human rights and promote ethical innovation in the fields of neurotechnology and AI. Yuste was also the promoter of the BRAIN Initiative, a project supported and financed by the Obama administration to decipher the human brain.

Yuste and his team at the Neurorights Foundation, Xabi Uribe-Etxebarria (of the AI company Sherpa), the Catholic University of Chile, members of the Center for Research in Social Complexity at Universidad del Desarrollo, Facebook and IBM are all involved in promoting the oath. "We are running pilot programs with these two large technology companies so that their employees can take the oath," Yuste explains in an online conversation with La Vanguardia.

The promoters are the Neurorights Foundation, directed by Rafael Yuste (U. Columbia), Xabi Uribe-Etxebarria (Sherpa) and the U. Católica de Chile.

"Advances in neuroscience techniques - an interdisciplinary field that includes biology, medicine, psychology, chemistry, genetics, computer science, physics, engineering and mathematics - open an unprecedented possibility to access, collect, share and manipulate information from human brains that can positively impact clinical practice and the well-being of people, but also be used commercially to offer cognitive improvements, personalized communication, entertainment.” Highlighted the promoters of the oath.

And this, they say, "poses new ethical challenges if it is used incorrectly or inappropriately," since it can facilitate intrusion into people's private lives, unduly influence their behavior, or even cause physical or psychological harm.

"A solution to address these problems and establish ethical guidelines is to develop a professional oath that involves all those who work in neurotechnology,” said the promoters. And they emphasize that an oath is not an ethical code or a promise, as it is not limited to a declaration of intentions, but that it is solemn, public, and has a greater moral weight.

The Hippocratic Oath as a Model

More than a code or a statement of intent

A good example, they say, is the Hippocratic Oath taken by doctors, which has maintained its validity and legitimacy over the centuries and means that a doctor who does not uphold its values is shamed by his or her own professional colleagues.

"An oath is a solemn word of honor, usually formal, that one has the sincere intention of fulfilling in a specific context", said the promoters of the project. And they explain that it also gives the people who work in that area a moral framework and a guideline of good practices.

The idea is for the Technocratic Oath to be sworn by students and workers involved in the use of neurotechnologies and brain-data analysis tools. It incorporates seven principles widely used in ethical guidelines on AI.

The seven basic principles

  1. Non-maleficence, that is, no intention to cause harm with the technology applied.
  2. Beneficence, the intention to contribute to the common good through the work done.
  3. Autonomy, which establishes that nothing can be done without the consent of those involved in any situation involving AI and neurotechnology.
  4. Justice, which seeks to ensure that the application of neurotechnology produces fair and impartial results, avoiding, for example, algorithmic biases.
  5. Dignity, meaning that all people must be treated with respect and their integrity protected.
  6. Privacy, which advocates the removal of all sensitive and identifiable information from the data collected by the technology.
  7. Transparency, whose purpose is to ensure that the algorithms used are as transparent and correctable as possible.

Yuste clarifies that, initially, the idea is for the Technocratic Oath to begin as something voluntary. "The intention of this pledge is to contribute to addressing emerging concerns about ethical guidelines in current and future neurotechnologies; although it is not legally binding, the cultural weight of an oath has historically led to responsible practices in the areas where it is implemented, such as the Hippocratic Oath in medical practice," its promoters conclude.

The text of the oath

“In all aspects of my work, I will make sure that my knowledge is not used to harm people; I will ensure that my knowledge is used for the benefit of users; I will seek consent and respect the will of those who have trusted me; I will maximize the fairness of the results, avoiding any discrimination or unfair promotion of certain people over others; I will make sure to respect the dignity of users, protecting their human rights; I will not violate the privacy of confidential information of individuals; I will maximize the transparency of the algorithms that I generate and use. I take this oath freely, in my honor, and I assume any responsibility should I break it.”

 

Translated by Paloma Rodriguez Paramo