14 June 2023
If you are a business owner, a recruitment specialist, or a business owner who does their own recruiting and wants to become more attractive to potential candidates, you have probably noticed that the new generation of employees is more focused on ethics than any of their predecessors. And I don’t mean that in a sarcastic “the vibe is off” kind of way; it’s a legitimate question of how to define and understand ethics in IT, what is relevant, and what the general consensus is on the things we should collectively worry about.
Nowadays, ethics has made a space for itself inside university programs, because it has become clear that computer systems can violate values and interests. And while being offended by an IT system may look childish and funny (as long as it didn’t work against your personal values, obviously), it can easily translate into financial loss if users or investors decide to abandon a product because of a scandal around it.
Uppsala University has an ongoing project called “Ethics in IT”, focused on design and evaluation: a questionnaire for identifying potential issues, computer ethics tests, training programs in ethics, and principles that could become a standard for computer science in general. Researchers are also investigating the possibility of teaching a system to handle moral problems by simulating human competence in this area.
This is also a perfect example of how quickly the idea of ethical issues in IT has evolved into something more complex and human-like. Before the rise of AI, the biggest ethical concerns were personal privacy, access rights, and managing the risks of harmful actions that could end in the loss of data or ownership.
Copyright issues led to the creation of dedicated legislation meant to protect owners and users against potential misuse. The same goes for trade secrets, which are now protected by law, and for piracy, where legislation was likewise put in place to stop unauthorized duplication.
So, if anyone thought that ethics in technology is a new concept, it’s not: the first code of ethics for electrical and electronics engineers was created in 1912, and it is still being updated.
Before AI, internet-related ethical issues were about the user, the human user to be precise. The whole code of conduct was based on simple assumptions: be respectful, don’t steal somebody’s content, don’t take credit for work that isn’t yours, and so on.
Computer ethics is more complex now, because we have to consider what AI can do and whether it is actually intelligent enough to understand paradoxes, subjectivism, objectivism, and relativism. At the same time, these systems shouldn’t be biased (and we know they already are), so the whole subjectivism part becomes even more confusing.
One of the most urgent matters in ethical IT is deepfakes: a technology that allows such sophisticated manipulation of images and videos that it is almost impossible to tell whether they are real.
Another issue that attracted a lot of attention in 2023 is health tracking and the collection of data on the physical or mental health of citizens, which might surface at the worst possible moment and sabotage their career or any other plans.
Here is a positive development: the Ethical Operating System was created to be part of the product development process, with the goal of minimizing technical and reputational risks. It describes eight risk zones and then provides a specific checklist of questions to verify whether the product was designed with these matters in mind.
For instance, it asks developers whether their product uses machine learning, whether any gaps have already been discovered that might bias the technology, how diverse the team is, and so on.
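To make the checklist idea concrete, here is a minimal sketch of how such a “risk zone plus questions” review could be represented in code. The zone names, questions, and function are hypothetical illustrations of the approach, not the actual Ethical Operating System material:

```python
# Hypothetical sketch of a product-ethics checklist, loosely modeled on the
# "risk zones with checklist questions" structure described above.
# Zone names and questions are invented for illustration.

CHECKLIST = {
    "machine_learning": [
        "Does the product use machine learning?",
        "Have gaps been found that might bias the model?",
    ],
    "team_diversity": [
        "Is the team lacking diverse perspectives?",
    ],
}


def flag_risk_zones(answers):
    """Return the risk zones where at least one answer signals a concern.

    `answers` maps each question to True (concern) or False (no concern).
    Unanswered questions count as concerns, so nothing slips through
    simply because a team skipped a question.
    """
    flagged = []
    for zone, questions in CHECKLIST.items():
        if any(answers.get(q, True) for q in questions):
            flagged.append(zone)
    return flagged
```

In this sketch, a team that answers every question with “no concern” gets an empty list back, while any flagged or skipped question surfaces the whole zone for review, which mirrors the checklist’s purpose of catching risks early rather than assigning blame.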
Altogether: ethics matter. The point of technology is to make life easier, and steps should be taken to make sure we at least try to make it easier for all of us, not just the privileged ones.