
Barely a month goes by without news of another huge breakthrough in the world of AI – or else a scare story about its potentially apocalyptic risks. At the heart of this is a growing call for a focus on the ethics underpinning AI, and what actions we can take to make sure that AI is working for rather than against us. In recent months, leading professional bodies including ACCA have lent their voice to this call.

In early November 2023, the first AI Safety Summit took place at Bletchley Park, Buckinghamshire. The summit aimed to discuss how AI risks can be mitigated through coordinated action and was attended by some leading lights of the political and tech world, including OpenAI’s CEO Sam Altman, Kamala Harris, Ursula von der Leyen, and even Elon Musk.

In response to the summit, ACCA urged governments to put ethics, transparency, and governance at the heart of artificial intelligence policy.

According to ACCA, the only way to ensure AI can be viable in the long term and work for the whole of society is to ensure transparency around the way it is used and its impact.

Jamie Lyon, ACCA’s Head of Skills, Sectors and Technology, suggested that every accountancy organisation across the UK should be urgently considering creating an AI policy, even if they have yet to start actively using AI.

ACCA's research pinpointed critical ethical considerations, including:

  • Hindrances to AI adoption due to insufficient transparency and trust. It’s a cliché but it’s true: people don’t like change. With machine learning, it is often unclear how a system arrives at its results, and that opacity can be unsettling, provoking resistance both from internal stakeholders and from clients or customers who like what they know.
  • Addressing bias and discrimination concerns in AI applications. In recent years, it has become more apparent that AI systems will often replicate the inherent biases of the data on which they have been trained. As systems become more automated, this is a vital consideration.
  • Safeguarding data privacy and security. AI systems in accounting will often need access to vast amounts of sensitive financial data. Compliance with data protection is both crucial and difficult to ensure at such scale.
  • The need for a comprehensive legal and regulatory framework covering liability. Liability for failures needs to be clear – it is not acceptable to throw our hands up and say, “It is the technology’s fault!” Yet without a full understanding of how these systems work, there is growing uncertainty as to where responsibility should ultimately lie.
  • Tackling challenges related to inaccuracy and misinformation. Deepfake technology raises concerns about fraud, privacy, and security, particularly where this technology can undermine some two-factor authentication processes.
  • The "magnification effect” and unintended consequences. It is vital to recognize that a solitary AI error could have more profound consequences than human error, particularly as reliance on the technology grows, and human intervention decreases.

In summary, ACCA’s Jamie Lyon added: “To navigate this complex landscape, individuals and organizations must understand and proactively manage these risks.”

Further Reading

ACCA’s quick guide to AI: https://www.accaglobal.com/gb/en/professional-insights/technology/quick-guide-AI.html

ACCA’s professional insight report “Digital horizons: technology, innovation and the future of accounting”: https://www.accaglobal.com/gb/en/professional-insights/technology/digital-horizons.html

Want to learn more about AI? Check out our course AI for Accountants!
