As organisations step up efforts to leverage the capabilities of artificial intelligence, it is essential for both AI developers and regulators to consistently consider, integrate and advocate for ethical considerations throughout the entire process.
While AI promises a plethora of business benefits, responsible use of the technology is key to unlocking its full potential.
AI bias, also referred to as machine-learning bias or algorithm bias, describes AI systems that produce skewed results which reflect and perpetuate human biases within a society, including historical and current social inequality.
Artificial intelligence can transform our lives for the better. But AI systems are only as good as the data fed into them.
Fundamental principles guiding ethical AI encompass transparency, the ability to provide explanations, fairness, non-discrimination, privacy, and the safeguarding of data.
According to Accenture, AI brings unprecedented opportunities to businesses, but also incredible responsibility. The consultancy firm notes that AI’s direct impact on people’s lives has raised considerable questions around AI ethics, data governance, trust and legality.
If not correctly implemented, AI can inadvertently produce far-reaching biases.

AI bias

AI bias is the presence of systematic and unfair discrimination in the outcomes produced by AI systems.
Bias can emerge from the data used to train these systems, the algorithms themselves, or a combination of both. Addressing AI bias is an ongoing challenge that requires careful consideration of data selection, algorithm design and ongoing monitoring to ensure that AI systems are fair, transparent and accountable.
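The ongoing monitoring described above usually starts with measuring outcomes per demographic group. As a minimal sketch (the group names, decisions and threshold below are hypothetical, not drawn from this article), one widely used check is the demographic parity gap — the difference in positive-outcome rates between groups:

```python
# Illustrative sketch: measuring one simple fairness metric,
# demographic parity, over a model's decisions.
# All data here is hypothetical.

def selection_rate(decisions):
    """Fraction of positive (e.g. 'shortlist') decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rate between any two groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = selected, 0 = rejected) per group.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% selected
}

gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.2f}")  # prints 0.50
```

A large gap does not prove discrimination on its own, but it flags where the careful review of data selection and algorithm design mentioned above should focus.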
An example of where AI showed bias was when Amazon implemented an automated recruitment system, which was intended to evaluate applicants based on their suitability for various roles. However, as it turned out, the system showed bias against women.
The AI platform learnt to assess the suitability of candidates for a particular role by analysing resumes from past applicants. Because women had previously been underrepresented in technical roles, the system learnt to favour male applicants. Amazon ditched the tool in 2017.
In healthcare, the insufficient representation of women or minority groups in data can distort the outcomes of predictive AI algorithms. For instance, computer-aided diagnosis systems have demonstrated lower accuracy for black patients than for white patients.
Businesses cannot derive advantages from systems that yield skewed outcomes and contribute to distrust among individuals from diverse backgrounds, including people of colour, women, individuals with disabilities, the LGBTQ community, and other marginalised groups.
Implementing ethical AI involves a thoughtful, comprehensive approach throughout the development lifecycle. It is an ongoing process that requires collaboration, vigilance and a commitment to addressing potential ethical challenges at every stage. By integrating the strategies outlined below, organisations can develop and deploy AI systems that prioritise fairness, transparency and accountability.
Organisations should consider appointing an external AI ethics advisory board that can help them define the values of their AI before implementation.
Establishing an AI ethics advisor is crucial for promoting responsible and ethical AI practices. By incorporating ethical considerations from the outset, organisations can contribute to the development of AI technologies that benefit society while minimising potential harms.
An AI ethics advisor is also key in promoting transparency in AI development and communicating openly about ethical considerations. This helps build trust with users and the wider community.
Organisations can also establish internal ethics committees or advisory boards to provide guidance on ethical considerations throughout AI projects.
Another consideration centres on comprehensive AI training within the organisation. Implementing ethical AI requires a combination of foundational knowledge, practical skills and a commitment to ethical principles.
Foundational principles
The training can delve into foundational ethical principles such as transparency, fairness, accountability and privacy.
Training can also help employees recognise potential biases in AI algorithms and the impact of those biases on different demographic groups, and can provide strategies for identifying, measuring and mitigating bias in AI systems.
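One common mitigation strategy that such training might cover is reweighting: giving each training example a weight inversely proportional to the size of its demographic group, so underrepresented groups are not drowned out during model training. A minimal sketch (the group labels and data below are hypothetical):

```python
from collections import Counter

def balancing_weights(groups):
    """Weight each example inversely to its group's frequency, so every
    group contributes equally in aggregate during training."""
    counts = Counter(groups)
    n_groups = len(counts)
    total = len(groups)
    # Each group's weights sum to total / n_groups.
    return [total / (n_groups * counts[g]) for g in groups]

# Hypothetical training set where one group is underrepresented.
groups = ["male"] * 8 + ["female"] * 2
weights = balancing_weights(groups)
print(weights[0], weights[-1])  # prints 0.625 2.5
```

Most training libraries accept per-example weights (often via a sample-weight parameter), so a scheme like this can be applied without changing the model itself.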
Ethical implementation of AI also requires organisations to stay up to date with regulations governing the technology.
Adherence to AI regulations ensures that organisations operate within the bounds of the law. Failure to comply may result in legal consequences, fines or other regulatory actions.
In South Africa, the Information Regulator is already in discussions about how to regulate AI and generative AI technologies such as ChatGPT.
In the US, the White House in October issued an executive order on safe, secure and trustworthy AI and a blueprint for an AI Bill of Rights. The use of AI in the EU will be regulated by the AI Act, which the bloc describes as the world’s first comprehensive AI law.
As these laws take effect, staying up to date with AI regulations is not only a legal requirement but also a strategic imperative for organisations. It helps them build trust, avoid risks, foster responsible AI practices and remain competitive in a rapidly evolving regulatory landscape.
Avoiding AI bias and implementing AI ethically are essential for promoting fairness, trust, legal compliance and positive societal impact. It is not only a moral imperative but also a strategic necessity for organisations aiming to build sustainable, responsible and widely accepted AI solutions.
- This promoted content was paid for by BCX
- Read more articles by BCX on TechCentral