The bottom line
Many folks think of ethics as a somewhat unnecessary ‘cost centre’. This is both misguided and unfortunate. Rather ironically, it also comes at a great cost!
Ethics is a deeply human process. It’s also entirely unavoidable. Everyone is doing it all the time.
The questions are:
Do you want to do it well?
Do you want to do it explicitly?
Do you want to do it in ways that verifiably reduce risk, add real value and confer competitive advantage?
If you answered yes to any of the above, have a read of the detailed content in the dropdowns.
Once you’ve done that…
-
Purpose is the reason the AI system exists. But the higher purpose—the reason your organisation exists—is where we start.
Values are the things in life that truly matter. They are what we strive for and seek to protect.
Principles guide our actions, helping us honestly reflect and consistently act in alignment with our values and purpose.
Without clarity in these areas, you risk significant confusion at every level of the organisation and across every sociotechnical workflow.
If you’re an organisation that has done significant work in these areas, we’ve got a springboard. If you haven’t, we’ve got some work to do… work that has foundational, systemic value.
-
You may start with a business problem, an insight or a board directive. But where to from there?
A rigorous process of upfront ethical deliberation is the ethics equivalent of “an ounce of prevention is worth a pound of cure”.
This process—grounded in our purpose, values and principles, and supported by the applied ethics literature—helps to clarify not just the direction of a given AI system but, perhaps just as importantly, its most critical characteristics, such as (non-exhaustive):
Does the system allow effective oversight and control (autonomy)?
Is the system accurate and reliable (harm-benefit)?
Is the system efficient in achieving specific goals (harm-benefit)?
Is the system designed to ensure the equitable distribution of burdens and benefits (justice)?
Does the system promote and protect individual, societal and environmental wellbeing (harm-benefit)?
Is the system auditable and is the system / system designer accountable (justice)?
Have diverse stakeholders’ views been considered in the system design (justice)?
Is there a clear definition of fairness and can the system demonstrate adherence to this definition (justice)?
Can the system designers demonstrate Privacy and Security by Design principles in practice (autonomy)?
Does the system have a clear mechanism for appeals (justice)?
Does the system have clear monitoring processes and metrics to assess performance over time (harm-benefit)?
Does the system support (or hinder) people in pursuing their goals (autonomy)?
Is the system explainable and / or are the limits of explainability clearly expressed (autonomy)?
As an important aside, principles alone cannot tell us how to act in a given situation. They are not complete or coherent systems for ethical decision-making. Instead, they provide a useful reference point for diverse reflection and deliberation. Through dialogue (and the work that follows and informs ongoing dialogue), they help us recognise and remember the ethical considerations that take priority in our reflection, deliberation and decision-making.
Reflecting and deliberating as a diverse group (as early in the process as possible), considering each of the principles both directly and in relation to one another, helps surface insights that a given team can act on: insights that guide us towards the best reasons for action, and actions that deliver the most value.
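For teams that like concrete artefacts, here is a minimal sketch of one way to capture such a principle-tagged checklist so it can be brought into deliberation sessions. The structure, field names and groupings are illustrative assumptions, not a prescribed tool:

```python
# A minimal sketch of one way to capture a principle-tagged checklist
# for structured deliberation. The structure is an illustrative
# assumption, not a standard.
from collections import defaultdict

# Each entry pairs a deliberation question with the principle it foregrounds.
CHECKLIST = [
    ("Does the system allow effective oversight and control?", "autonomy"),
    ("Is the system accurate and reliable?", "harm-benefit"),
    ("Is the system designed to ensure the equitable distribution of burdens and benefits?", "justice"),
    ("Is there a clear definition of fairness, with demonstrable adherence?", "justice"),
    ("Does the system have a clear mechanism for appeals?", "justice"),
]

def by_principle(checklist):
    """Group questions under their guiding principle for a deliberation session."""
    grouped = defaultdict(list)
    for question, principle in checklist:
        grouped[principle].append(question)
    return dict(grouped)

for principle, questions in by_principle(CHECKLIST).items():
    print(principle.upper())
    for q in questions:
        print(f"  - {q}")
```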
-
What is the spectrum of notable effects that may result from this system? Early in the process of AI system design, is it time to engage in guesswork?
No, you engage in Impact Mapping, a collaborative process that:
Determines all the possible impacts, from the most positive to the most negative
Clarifies whether each impact is intended or unintended
Determines whether they are direct, indirect or systemic in nature
Assesses both the likelihood and significance of each impact, and thus its ‘category’
Describes how to amplify the positive impacts and mitigate the negative impacts
*This feeds critical information into formal risk, governance and assurance processes.
Once you have mapped this out, you have higher quality information on which to base your judgements (this information also supports your process of ethical deliberation, using your AI Ethics Principles as the key reference point), supporting strategic decision-making at every level and stage of your project lifecycle. It helps determine the very real tactics you will document, refine, prioritise and implement to steer your system towards the greatest overall benefit and the most positive alignment to your principles.
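To make the shape of an impact map concrete, here is a minimal, hypothetical sketch of a single impact record. The field names, the 1 to 5 scales and the category rubric are assumptions for illustration; your own categories should come out of the collaborative process described above:

```python
# A minimal, hypothetical sketch of an impact record. Field names,
# the 1-5 scales and the category rubric are illustrative assumptions.
from dataclasses import dataclass
from typing import Literal

@dataclass
class Impact:
    description: str
    valence: Literal["positive", "negative"]
    intended: bool                                    # intended or unintended
    nature: Literal["direct", "indirect", "systemic"]
    likelihood: int                                   # 1 (rare) to 5 (near-certain)
    significance: int                                 # 1 (minor) to 5 (severe)
    response: str                                     # how to amplify or mitigate

    def category(self) -> str:
        """Derive a coarse category from likelihood x significance."""
        score = self.likelihood * self.significance
        if score >= 15:
            return "critical"
        if score >= 8:
            return "notable"
        return "monitor"

impact = Impact(
    description="Model under-serves non-native speakers",
    valence="negative",
    intended=False,
    nature="indirect",
    likelihood=3,
    significance=4,
    response="Mitigate: expand evaluation data; add human review for low-confidence cases",
)
print(impact.category())  # 3 x 4 = 12 -> "notable"
```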
-
Ethically assessing a system through formal governance and assurance practices gives risk holders visibility of the information they need to make decisions, at the time they actually need it.
Although there are many inputs and variables, formal AI Ethics Assurance should be (though it often isn’t) a core part of this process.
I can help ensure that critical ethical considerations for a given project are explicitly part of formal governance and assurance practice, reducing ethical risks, drift and debt.
This process can occur at different stages of the end-to-end project / system lifecycle, ensuring it aligns to ongoing governance obligations.
-
AI Ethics should be front loaded. It should also be integrated and embedded into everyday design, development, deployment and monitoring workflows.
There are two ways to do this (they are not mutually exclusive):
Embed an ethicist into a cross-functional team on an ongoing basis
Embed the process of doing ethics into practices, processes, workflows, rituals and tools
For the most part—if only one option is selected—the second is preferred because it helps build systemic capacity (noting, of course, that it may fail to meet important needs in certain situations; it should not be the de facto choice, but should follow from a clear understanding of needs and constraints).
I have integrated embedded ethics approaches into dozens of organisations, from five-person startups through to some of the world’s largest multinational corporations.
-
Once a system has been released, it’s common to rigorously inspect metrics like accuracy, precision, sensitivity and specificity to help assess system performance and alignment to stated goals.
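For reference, these standard metrics all fall out of a confusion matrix. A minimal sketch (the example counts are made up):

```python
# Standard post-release performance metrics, computed from
# confusion-matrix counts. The example counts are illustrative.
def performance_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),  # overall correctness
        "precision": tp / (tp + fp),                  # of predicted positives, how many were right
        "sensitivity": tp / (tp + fn),                # true positive rate (recall)
        "specificity": tn / (tn + fp),                # true negative rate
    }

print(performance_metrics(tp=80, fp=10, tn=95, fn=15))
```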
The process of monitoring ‘ethical drift’, however, requires different skills and a far more sociotechnical approach*.
*Noting the adage that “measures that become targets often cease to be good measures” (Goodhart), and that metrics which influence social decision-making invite manipulation (Campbell). In short: don’t get too caught up in approximations of real-world phenomena.
Monitoring ethical drift requires teams to get out of the building, assess real-world impacts—especially those that are indirect or systemic in nature—and map those real-world impacts to system ‘behaviour’ over time. With such data in hand, an ongoing process can be conducted to help ensure the system is aligned to the principles guiding its development (recognising, of course, the inevitability of certain shifts in the world that mean the principles, how they’re enacted and how they’re interpreted, will change over time).
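As one small, hypothetical illustration of the quantitative side of this, a team might track a per-principle alignment indicator against a baseline and flag movement beyond a tolerance. Everything here (the scores, the baseline, the threshold) is an assumption, and such a check supplements, never replaces, the qualitative work described above:

```python
# A hypothetical drift check: compare a per-principle alignment
# indicator against a baseline and flag movement beyond a tolerance.
# All values here are assumed for illustration; per Goodhart and
# Campbell, treat any such indicator as an approximation only.
BASELINE = {"autonomy": 0.90, "justice": 0.85, "harm-benefit": 0.88}
TOLERANCE = 0.05  # an assumed, context-specific threshold

def flag_drift(current: dict, baseline: dict = BASELINE, tol: float = TOLERANCE):
    """Return (baseline, current) pairs for principles that drifted beyond tolerance."""
    return {
        p: (baseline[p], score)
        for p, score in current.items()
        if abs(score - baseline[p]) > tol
    }

print(flag_drift({"autonomy": 0.91, "justice": 0.77, "harm-benefit": 0.86}))
# -> {'justice': (0.85, 0.77)}
```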
In addition to explicitly assessing alignment to the principles over time, it is important to go back to such questions as:
Is the system actually net beneficial (utilitarian approach)?
Is the system verifiably respecting of people’s rights and freedoms (rights approach)?
Is the system truly equity enhancing (justice approach)?
Does the system exist and perform in service of the common good (common good approach)?
Is the system, and all of its related sociotechnical workflows, leading me / us (as an organisation) to act as the sort of person I / we would most like to be (virtue approach)?
Is the system consistently demonstrating care, responsibility and a respect for our inherent relationality (care approach)?
These are, in so many ways, significantly harder questions to ask and usefully answer. Alas, that is the nature of (at least some of) this work!