Responsible AI – when philosophy becomes part of business

Idan Beer, Head of Matrix's AI Center of Excellence, and CTO AI at Matrix Defense

We are currently witnessing a huge struggle between Google, OpenAI, universities and open-source communities, one that continues to sustain and even amplify the buzz around ChatGPT and Generative AI. One of the hottest topics emerging from this discussion, and perhaps the most interesting, is not a distinct technological issue but a human one: responsibility.

 

The discourse surrounding the responsible use of technology in general, not just artificial intelligence, has existed for many years. Plenty of books and movies have painted a picture of the grim reality into which irresponsible use of technology could plunge us: an apocalyptic, chaotic world, possibly ruled by extreme regimes. Back in 1921, Karel Čapek published his work Rossum’s Universal Robots, in which artificial people created to serve us humans, taking our place in dangerous or boring tasks, ultimately turn on us in a violent and deadly conflict. Ever since, there have been many similar cautionary tales: 1984, The Terminator, Eagle Eye, and perhaps the most famous of all, The Matrix. Despite this, professional and public engagement with the subject remained relatively marginal.

All that changed when ChatGPT came into the world. As with many other things, ChatGPT has changed the concept of technological responsibility in AI contexts, or to give the issue its professional name, Responsible AI. This is clearly no longer a side issue that can remain in the realm of lip service and empty promises; it must be an integral part of the practice of any organization that plans to use such technology.

 

Who is responsible for what? You probably won’t be surprised to hear that ChatGPT also knows how to disclaim responsibility

If you ask ChatGPT what Responsible AI is, it will tell you that it is “the development, deployment and use of artificial intelligence systems in a manner that upholds ethical principles, while respecting human values and rights, considering the potential impact on individuals and society as a whole”. A nice definition, considering that this is a statistical engine for predicting the next word in a sentence. The term ‘Responsible AI’ may mislead you into thinking that the responsibility lies with the AI itself, but the intention is that the human user is the one who should be responsible.

If we were looking for further proof that artificial intelligence in general, and ChatGPT in particular, is a mirror of humanity, we have it in the form of the answer to the question “is artificial intelligence responsible?”. ChatGPT begins with a denial: “As an artificial intelligence system, I do not have personal responsibility in the way that humans do”; then moves on to shifting the responsibility: “The developers, users and organizations behind the creation and deployment of artificial intelligence systems bear responsibility for their design, implementation and use”; and ends with a lecture: “Ultimately, it is the responsibility of humans to use artificial intelligence in a responsible and ethical manner”. Quite a typical human reaction. Having no choice in the matter, the ball of responsibility ends up back in our court.

 

The ball of responsibility is in our court. It’s time to examine the levels on which we should take responsibility, and how to do it

This responsibility is manifested on several levels.

The most familiar and oldest level of all is fairness & bias. This level deals with the proper representation of genders, skin colors, languages, nationalities and more in the materials on which artificial intelligence engines are trained, and with the prevention of discrimination. As ChatGPT itself suggests, if its training data contains more conversations between white men than between black women, for example, the model will likely generate responses that are better suited to male discourse. Of course, this bias does not exist only in ChatGPT; we have also seen it in face recognition engines that make significantly more mistakes recognizing the faces of black women than of white men, or in a résumé-screening engine that rejected female candidates.
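For illustration only, here is a minimal sketch of what a first fairness check might look like in practice. The data, group labels and function name are hypothetical; a real audit would use established fairness metrics and statistically meaningful samples.

```python
# Minimal illustrative sketch: compare a model's error rate across
# demographic groups. All data and names here are hypothetical.
from collections import defaultdict

def error_rate_by_group(y_true, y_pred, groups):
    """Return the share of wrong predictions for each group."""
    errors, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy face-recognition-style match/no-match labels.
y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
groups = ["group_a", "group_b", "group_a", "group_a",
          "group_b", "group_b", "group_a", "group_b"]

print(error_rate_by_group(y_true, y_pred, groups))
# A large gap between groups is a warning sign that the training data
# or the model itself is biased and needs correction before deployment.
```

A large disparity in such a check does not prove discrimination on its own, but it is exactly the kind of question an organization should be able to answer before putting an engine into production.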

The next level is transparency. Deep learning technology raises questions of traceability and explainability: these models have a very large number of parameters, and by themselves they give the user no way to know why a particular decision was made. In a world where artificial intelligence is becoming a tool that makes decisions automatically, the ability to retrospectively understand why a decision was made, and to investigate it, is critical.
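As a hedged illustration of a first step toward explainability, the sketch below uses permutation importance, one common post-hoc technique, on a synthetic stand-in model. The data and model are placeholders, not a recipe for any specific system.

```python
# Minimal sketch: probe a black-box classifier with permutation importance
# to get a rough, post-hoc answer to "which inputs drove the decisions?"
# The model and data below are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance ~ {score:.3f}")
```

Techniques like this do not make a deep model truly transparent, but they at least give an organization something it can document and investigate after the fact.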

Another level, one that should go without saying, is actual accountability. An organization should understand that, when it builds an AI-based engine, it is responsible for ethical issues that may arise from its improper use, even by someone else. Similarly, an organization using technology developed by another entity bears responsibility for its proper use by its own staff.

AI engines are trained on data, and a significant part of that data contains private information; this makes the privacy & data governance level critical. Disclosure of personal information, or inadequate protection of private information in the corporate environment, may seriously damage the rights of the individual.

If in the past algorithms lived mainly in the digital world, today more and more artificial intelligence engines also act in the physical one. Autonomous driving and flying, mobile robots and the like require physical safety measures to prevent them from colliding with humans. Cyber security is also essential, to prevent the engines from being abused to harm the organization or leak information. Organizations are well aware of attempts to penetrate them through corporate e-mail, for example, and invest many resources in raising awareness and training employees to prevent this; yet they almost never curb dangerous uses of artificial intelligence or its ability to open ‘security holes’ in the organization. Just look at the case in which developers at Samsung used ChatGPT for quality control and for finding bugs in the code of the company’s products, and sensitive information leaked out as a result. It is clear that this danger exists in every organization, no matter how technologically sophisticated.
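One practical, if partial, mitigation is to screen prompts before they ever leave the organization. The sketch below is a naive, hypothetical guardrail; the patterns and policy are assumptions, not a real product, and in practice it would sit alongside employee training and a proper data-loss-prevention tool.

```python
# Naive illustrative guardrail: redact obvious secrets from a prompt
# before it is sent to an external AI engine. The patterns and the
# policy here are hypothetical examples, not a complete solution.
import re

SENSITIVE_PATTERNS = {
    "email":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"(?i)(api[_-]?key|secret)[\s:=]+[^\s,]+"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder."""
    for name, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {name.upper()}]", prompt)
    return prompt

text = "Please review this config: api_key=sk-12345, contact dev@example.com"
print(redact(text))
# Only the redacted version should be forwarded to an external engine.
```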

The last level concerns the extent to which artificial intelligence contributes to collaboration & social impact. Running artificial intelligence engines requires expensive and significant computing power, so casual use of artificial intelligence that does not produce real value for human society and its members wastes natural resources. Artificial intelligence should help to reduce gaps, promote smarter and more efficient use of resources, and generally do good for humanity.

The challenges mentioned so far could be thrown at almost any AI engine. Generative AI has added another dimension of complexity: the information generated at the end of the process. Significant parts of the code produced by Gen-AI engines have been found to be taken verbatim from open sources. The user cannot know whether the code generated by the engine is protected by copyright, or what license defines its use. In addition, documents or works created with these engines may include exact quotations from articles which, if not correctly cited, damage the intellectual property of their authors.

Until now, we as AI researchers have been used to addressing at most one or two types of ethical issues in any given tool or model. ChatGPT presents problems in every field and at every level, forcing us to hold ourselves to the highest ethical standards. It is no longer possible to fix specific holes with a temporary patch; the issue needs to be treated in a more meaningful way.

Of course, there is a consensus that responsibility must be addressed in the use of artificial intelligence, and that Responsible AI matters, but opinions differ on how to do so. On one side are those who support regulation and legislation designed to limit the technology, or to require certain tests to be performed to ensure that it is safe; a significant number of them recently signed a petition calling for a temporary halt to the technological race in order to deal with its consequences. On the other side are those who do not support regulation of research and development, but only of products. And in a world in which everyone has access to open-source code and can deploy it themselves, the question arises of what a product even is.

 

In all this ethical darkness there is one significant bright spot: there is much we can do! And just as in philosophy, everything begins with knowing how to ask questions

First of all, ask questions. The key to the proper use and consumption of artificial intelligence lies in our ability to understand its limitations and challenges, and appropriate training is needed in order not to be tripped up by artificial intelligence’s mistakes. What information was the engine trained on? Does the engine store my information, or use it in a way that is unacceptable to me? Who built the engine? Is there a reliable source that can tell me how it works? Is there an independent benchmark of its performance? Is the source code available to me? And so on.

If you still do not understand AI in enough depth, or there is not enough information to assess the risks and benefits, it is worth consulting experts, or better yet, finding an organization that will accompany you through the process at a strategic level. Remember, artificial intelligence is not just a model; it is a complete concept of information management and security, correct implementation, correct use and continuous updating.

And finally, every organization must adopt the principle of ‘respect them and suspect them’. I am very much in favor of the democratization of artificial intelligence: it can significantly optimize our day-to-day lives and increase the value we get out of various services. However, we must be critical and understand deeply what this animal is that we are bringing into our home, just as we would interview a potential employee.

“With great power comes great responsibility,” said Voltaire (followed by Spider-Man). We all understand how powerful artificial intelligence is. We ‘just’ have to be equally responsible.
