
Back in 2019, the Australian Government published the 8 whole-of-government AI ethics principles for the first time. This was the result of significant work and drew heavily on the impressive feat of IEEE’s Ethically Aligned Design, published two years earlier. Since these principles were published, AI has become one of the most pressing topics in our lives. Thankfully, through this process, our government has not just maintained but strengthened its commitment to safe, secure, and responsible AI development and use.

As an ethicist who has worked all around the world to help inform the responsible development and use of sociotechnical systems, many of which we now classify as AI systems, I am writing this blog to share practical tips on how to use the 8 AI ethics principles in your work. These principles can be applied in many contexts, from large-scale development and implementation processes through to back-of-the-napkin workings amidst an everyday workflow, helping inform if, when and how to rely on these powerful yet imperfect, increasingly present systems.

The 8 principles

Here are the 8 principles, along with the types of questions you might like to ask in relation to each one. By asking questions like these and reflecting on the principles, you are encouraged to gather evidence, engage with diverse stakeholders, and make decisions that best align with each principle. Of course, if you’re leading a major project, you might be expected to do this rigorously, document the whole process, and ensure that this process informs what you build and how. If you’re using these principles for a more everyday decision, the process can be a lot easier. More on that below!

*All the questions are open-ended and non-exhaustive. They are designed to help you begin reflecting and deliberating so that you can make decisions about how to design and use AI systems in ways that are positively aligned with each of the principles.

1. Human, societal and environmental wellbeing
AI systems should benefit individuals, society and the environment.

  • If we build this system, what are all the potential impacts to people, society, and/or the environment, not just immediately, but in the long-term? And not just directly, but indirectly and systemically?
  • Are the positive impacts likely to far outweigh the negative impacts? How can we demonstrate or justify this? If not, are there actions we can take to shift the balance back to positive? If not, what do we need to do to report on our findings and move forward? 

2. Human-centred values
AI systems should respect human rights, diversity, and the autonomy of individuals.

  • Which human values are most important for us to consider in the process? How can we design the system to ensure these are respected?
  • Could the system impact people's fundamental human rights? How?
  • How can we include diverse perspectives in the design of this system?
  • Could the initiative impact people’s ability to make free and informed decisions about how they live their life? How can we ensure their autonomy is respected?

3. Fairness
AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.

  • What does fairness mean in the context of this system?
  • Is there a chance the system might unjustly burden or overly benefit people? How might this happen and what can we do about it?
  • How can we ensure the system is accessible for diverse peoples?
  • What steps will we take to ensure the system is inclusive?

4. Privacy protection and security
AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.

  • Will the system impact people’s privacy rights? If yes, how? What can we do about this?
  • How can we design the system to ensure data is protected throughout the entire lifecycle?
  • How can we ensure the system is as secure as possible for all using or impacted by it?

5. Reliability and safety
AI systems should reliably operate in accordance with their intended purpose.

  • Can we clearly describe the purpose of the system?
  • What technical measures do we need to design for and monitor to determine if the system is performing safely, in alignment with its stated purpose?
  • If something goes wrong, can we pause or stop the system completely?

6. Transparency and explainability
There should be transparency and responsible disclosure so people understand when they are impacted by, or engaging with, an AI system.

  • Will the people impacted by the system know AI is involved?
  • Can people choose to opt out?
  • Will we be able to explain how the system operates end-to-end? If we can’t, how can we reliably describe the limits of the system’s explainability?

7. Contestability
There should be a timely process to allow people to challenge the use or outcomes of an AI system.

  • How will we ensure people impacted by the system understand their rights?
  • How will we ensure people have a way to raise concerns or request information about outputs and outcomes?
  • Have we clearly defined processes to support people in doing this? Have we designed these processes so the experience is simple and effective?

8. Accountability
People responsible for the AI system should be identifiable and accountable for the outcomes. Human oversight of AI systems should be enabled.

  • Who is responsible for the system throughout the project and once in production?
  • Are those people held accountable for how the system operates?
  • Will we ensure adequate training for staff using the system?
  • Do we have auditing and oversight of how the system operates end-to-end?
  • What processes will we follow—and who is responsible for this process—if the system operates outside of its intended purposes?

As you can see, each of the principles and any guidance relating to them acts as an important reminder. They tell us what we need to remember when we are reflecting, deliberating, and making decisions about if, when and how to develop and/or use AI.

Let me offer a brief repeatable example, not of an entire ethics process but of how to use the principles simply in everyday work.

Ethics on a napkin

Imagine you’ve been asked to assess the use of AI in the context of one of your workflows. You’ve got a meeting coming up in 30 minutes to discuss this. You’re not expected to have any kind of final answer, but you are expected to share your thoughts on how well suited AI is to supporting a given workflow.

It’s likely you start by ensuring you understand the problem, understand the proposed solution, and have a good sense of how well the solution fits the problem. 

Let’s say, in this case, you’re pretty confident that a particular AI system will significantly enhance the workflow in question. It’s likely to be effective. But you’ve got fifteen minutes left.

To the right, a napkin from the lunch you recently ate. You sketch out 8 horizontal lines, one for each of the principles. On the far left you add the number 1. On the far right the number 7. You now have a Likert scale where 1 represents the worst alignment and 7 represents the best alignment.

You decide to use the Likert scale as a way of assessing how aligned this proposed solution to improve a particular workflow is to each of the 8 AI ethics principles you recently learned about.

You go through the process, plotting how aligned the proposal is to each of the principles.

Principle 1: 5
Principle 2: 4
Principle 3: 4
…and so on.

Under each number you add a few dot points of justification. You’re explaining your reasoning.

Ethics on a napkin: image of a napkin with a hand-drawn Likert scale for each of the eight ethics principles, showing the numbers listed above in the article.

*Bonus: you could also represent this as a spider plot, which is sometimes an even easier way to see the relationship between the principles.
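If you’d like to take the napkin one small step further, the same assessment can be sketched as a short script. This is a minimal illustration only: the principle names come from the article, but the example scores and the “needs attention” cut-off of 4 are my own assumptions, not part of the principles or any official process.

```python
# Napkin-style assessment: one 1-7 Likert score per principle,
# where 1 = worst alignment and 7 = best alignment.

PRINCIPLES = [
    "Human, societal and environmental wellbeing",
    "Human-centred values",
    "Fairness",
    "Privacy protection and security",
    "Reliability and safety",
    "Transparency and explainability",
    "Contestability",
    "Accountability",
]

def assess(scores):
    """Validate eight 1-7 Likert scores and summarise alignment."""
    if len(scores) != len(PRINCIPLES):
        raise ValueError("one score per principle is required")
    if any(not 1 <= s <= 7 for s in scores):
        raise ValueError("scores must sit on the 1-7 Likert scale")
    average = sum(scores) / len(scores)
    # Illustrative threshold: anything under 4 gets flagged for discussion.
    flagged = [name for name, s in zip(PRINCIPLES, scores) if s < 4]
    return average, flagged

# The first three scores match the napkin above; the rest are assumed.
example = [5, 4, 4, 3, 5, 4, 3, 5]
avg, needs_work = assess(example)
print(f"Average alignment: {avg:.2f}")
print("Principles needing attention:", needs_work)
```

Just like the napkin, the output isn’t a verdict; the flagged principles are simply the ones you’d bring to the meeting for discussion, alongside your dot points of justification.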

You check the time and there it is. You jump on Teams and start the meeting. 

Ten minutes in, you talk about the work you’ve just done. You share a little about the process and explain your reasoning. The team’s really interested in what you’ve shared and wants to know more about the process.

You’ve just kicked off AI ethics, with only a napkin available.

AI ethics isn’t necessarily an easy process. It can take time. It can require us to gather a lot of evidence and engage with many different people with important skills and experience. And sometimes it’ll require us not to do something we thought we were going to be able to do. But, more often than not, it’ll help refine the way you approach AI, increasing the likelihood that AI is developed and used in ways that are safe, secure, and responsible.

By keeping these principles in mind, and reflecting on them when relevant, you are well placed to make informed, values-aligned decisions about how best to ensure that AI supports your work and helps positively serve the public.

A little more about ethics 

Ethics can be thought of as the process of justifying the best reasons for action. It requires us to step back, take stock, and deeply consider not just what is possible, but what is preferable. Of all the ways we might choose to act, which is best, and why? 

AI ethics is the process of doing this in relation to the development and use of AI systems; considering the goals they serve, the values they seek to promote or protect, the ways they’re built, the dynamics of the material supply chain that makes them possible, and plenty more.

Although a simplification, practical, real-world ethics often features the following stages: 

  1. Imagine and Reflect: This is a process of stepping back and considering what type of world we are currently in (where are we now and where have we come from?), along with what type of world we’d most like to create (where would we like to be?). Part of this is also about questioning and clarifying the values that matter most and how they relate to the type of world we’ve imagined might be possible, should we choose to act in certain ways.
  2. Deliberate: This is a process of exploring options, gathering information, weighing evidence, and considering not just what can be done, but what should be done. In other words, how can we act in closest alignment to our values in this context we find ourselves in?
  3. Decide: This is where our imagination, reflection and deliberation come together to inform a direction, something we can commit to going forward. A decision or decisions with real-world actions and clear justifications / reasoning.
  4. Act: This is where we act in alignment with the decisions we have made, doing so with as much integrity as we can muster.
  5. Monitor: This is a process of ongoing observation where we assess how our decisions have affected the world. Did it turn out as we expected? Did something surprising happen? Did we miss something important? It’s useful to recognise here that this isn’t just about effects we can quantify but also relates to the care we may have shown or the type of person we were through the process.
  6. Learn and Improve: This is a process of reflecting on both what we have been doing and what has happened. It’s a commitment to learning and improving our process so that we can do just as well, if not better, the next time around. Think of this as continuous ethics improvement.

Ethics is a very human process. And, if you’ve read this far, you’re probably already realising that you do something like this quite often. You do it at work. You do it at home. Yet when it comes to AI, learning a little about this process can likely help you use the 8 AI ethics principles to reflect, deliberate, and make even better decisions. 

Thank you for reading. 

Nathan (Nate) Kinch 
Nate is an ethicist with extensive global experience working in both the public and private sectors. You can learn more about his work by joining an AI CoLab session, checking out his website, or connecting with him on LinkedIn.
