The AI Risk Trilogy: Essential Ethics and Risk Management for South African Tax Accountants

The rapid advancement of generative artificial intelligence (AI) presents both unprecedented opportunities and significant challenges for the accounting and tax professions. While these tools can enhance productivity and streamline tasks, they introduce new dimensions of ethical and professional risk.

Understanding the "AI Risk Trilogy"

Recent developments highlight three fundamental risks that every practitioner must address to protect their practice and their clients.

1. AI Can Invent Technical Authority

A critical risk of generative AI is "hallucination"—where the system generates plausible but factually incorrect information. In one high-profile legal case, AI provided lawyers with fabricated case law citations and judge quotations, leading to court sanctions. Practitioners must never rely on AI-generated research without independently verifying sources.

2. AI Can Produce Fictional Documentation

AI can draft convincing audit working papers or tax technical memos that appear legitimate but describe procedures never actually performed. This creates "fictional evidence" that undermines documentation integrity and audit quality. Professional documentation must always reflect real analysis and verified reasoning.

3. Regulators Use AI to Detect Weak Work

Tax authorities, including SARS, are investing heavily in AI-driven compliance systems to detect anomalies and patterns in submissions. If a practitioner submits AI-generated explanations that are generic or lack depth, these systems may flag the return for investigation. Sound professional judgment and robust, evidence-based documentation are more critical than ever.

The Professional Framework: AI is a Drafting Assistant, Not a Researcher

To manage these risks, practitioners must understand that generative AI (like ChatGPT or Google Gemini) works by predicting word sequences based on language patterns, not by retrieving verified facts from a database.

Lower-Risk vs. Higher-Risk Applications

Use AI to (lower risk):

  • Summarize information
  • Improve writing clarity
  • Structure explanations
  • Brainstorm issues

Do not rely on AI for (higher risk):

  • Verify legal authority
  • Interpret complex legislation
  • Provide tax advice
  • Support aggressive tax positions

Legal and Ethical Obligations in South Africa

The use of AI does not exempt professionals from existing standards.

  • SAICA Code of Professional Conduct: The principles of Integrity, Objectivity, Professional Competence and Due Care, Confidentiality, and Professional Behaviour apply fully to AI usage.
  • POPIA Compliance: Uploading client information to external AI systems raises data security concerns. Practitioners must ensure compliance with the Protection of Personal Information Act (POPIA), for example by anonymizing client data before including it in prompts.

Conclusion: The Human Element is Non-Negotiable

Ultimately, the professional remains accountable for all work performed, regardless of AI assistance. AI should be treated as a tool for structure and language, while the practitioner provides the verified expertise and contextual judgment.

This article is based on a CPD webinar presented by Caryn Maitland on Ethics and Risk in the Age of Generative AI. If you would like to watch the full on-demand webinar, please click the link below:

Disclaimer: This article provides general information and should not be construed as professional advice. Practitioners should consult the relevant legislation, including the SAICA Code of Professional Conduct and POPIA, and seek professional guidance for specific circumstances.

 
