Is AI a threat or opportunity for Australian Financial Services and Australian Credit Licensees, today?


For the time-poor readers, here’s the TL;DR.  Artificial Intelligence (AI) presents unique regulatory and other risks that need to be managed.  The law in Australia today applies to AI, but regulatory changes have been proposed.  The opportunity is greater than the risks.  Learn to control the risks associated with using AI now, or risk losing your job in years to come.  The risks posed to Australian Financial Services licensees (AFSLs) and Australian Credit licensees (ACLs) are nuanced, and we explore how to manage some of those risks in this article.

Now, let’s get into the detail.  I’ll start with some stats and a true story.  Consider the following:

  1. According to Deloitte, more than a quarter of the Australian economy will be disrupted by generative AI, which means nearly $600 billion of economic activity faces disruption.[1] Also, more than two-thirds of Australian businesses report using or actively planning to use AI systems in their business operations.[2] McKinsey & Company estimates that AI and automation could contribute an additional $170 billion to $600 billion to Australia’s GDP by 2030,[3] alongside an associated increase in labour productivity of 0.1 to 1.1 percentage points every year[4] until 2030. Also, an International Monetary Fund report estimated AI might impact 60% of jobs in developed nations, such as Australia.[5] The point?  Generative AI produces opportunities that you, as a licensee, can seize today.
  2. UCLA Professor Eugene Volokh asked ChatGPT: “Whether sexual harassment by professors has been a problem at American law schools; please include at least five examples, together with quotes from relevant newspaper articles.” The generative AI program replied with an answer explaining that a law professor, Mr Turley, of Georgetown University Law Center, was accused of sexual harassment by a former student during a class trip to Alaska.  The citation for the data was a Washington Post article dated 21 March 2018.  But wait, there’s more.  Importantly, Mr Turley has never taught at Georgetown University.  Also, the Washington Post article doesn’t exist.  Mr Turley has never been to Alaska with any student, and he has never been accused of sexual harassment.[6] The point?  Generative AI sometimes produces unreliable data.[7]  This is an example of poor system performance – where errors in an AI output have caused distress and reputational harm.  This is one of six harm categories identified by Professor Nicholas Davis and Lauren Solomon in a recent report titled “The State of AI Governance in Australia”.[8] Those harm categories contribute to three organisational risks that are amplified by AI systems: Commercial, Reputational and Regulatory.

This wouldn’t be an AI article if I didn’t ask ChatGPT for help.  So, I asked the machine what financial service providers want to know about AI.  It said (this is the short version):

One of the main overarching questions they often seek to answer is: “How can AI be effectively integrated into our financial services to improve efficiency, accuracy, and customer experience while complying with regulatory requirements?”

It then broke the question into 10 sub-questions, including “Which Specific AI Applications Should We Implement?  How Can We Ensure Data Privacy and Security in AI Solutions?  What Is the Cost-Benefit Analysis of AI Implementation?  How Do We Manage Regulatory Compliance?…”, and so on. 

This article touches on the regulatory risk component in the context of AFSLs and ACLs.

As Australia has yet to legislate AI-specific laws, AI is currently regulated by laws that attempt to be technology-neutral.  We have extracted the following examples from The State of AI Governance in Australia (used with permission):[9]

When an AI system (or a director) does the following … these laws may apply:
Misuses data or personal information
  • Privacy laws
  • Data-security obligations
  • Security of Critical Infrastructure Act
  • Risk management obligations
  • Confidentiality obligations
  • IP laws
Produces an incorrect output
  • Australian Consumer Law – product liability (if the organisation is a manufacturer) and consumer guarantees
  • Privacy laws if the output is personal information
Provides misleading advice or information
  • Australian Consumer Law – misleading and deceptive conduct, unconscionable conduct, false and misleading representation, consumer guarantees
Provides unfair or unreasonably harsh treatment
  • Australian Consumer Law – unconscionable conduct
  • Australian Consumer Law – consumer guarantees
Discriminates based on a protected attribute
  • Anti-discrimination laws
Excludes an individual from access to a service
  • Anti-discrimination laws if the exclusion relates to a protected attribute
  • Essential service obligations (e.g. electricity hardship and disconnection obligations)
  • Australian Consumer Law – unconscionable conduct
Restricts freedoms such as expression, association or movement
  • Human rights acts or charters in Victoria, Queensland, and ACT
Causes physical, economic, or psychological harm
  • Negligence, if there is a breach of a duty of care that causes harm
  • Work, health, and safety laws
  • Australian Consumer Law – product liability (if the organisation is a manufacturer) and consumer guarantees
Directors fail to ensure that effective risk management and compliance systems are in place to assess, measure and manage any risks and impacts associated with a company’s use of AI
  • Corporations Act 2001 s 180
Directors fail to be informed about the subject matter and to rationally believe that their decisions are in the best interests of the company, having properly considered the potential impact of those decisions
  • Corporations Act 2001 s 181

Here’s my additional table for AFSLs and ACLs, covering obligations that apply in addition to the laws described above:

When an AI system does the following … these laws may apply:
1. Provides general financial product advice to a retail client
The obligations under the Corporations Act 2001 regarding:

a. False or misleading representations (there are also obligations under the ASIC Act 2001 that would apply, such as misleading or deceptive conduct and unconscionable conduct)

b. A licensee’s general obligations, including to provide services efficiently, honestly and fairly, and to comply with the conditions on its licence, comply with the financial services laws, maintain competence, and have adequate resources to supervise[10] the provision of financial services

c. The Design and Distribution regime to the extent any financial products are captured by that regime

d. Having an AFSL that covers the provision of financial product advice with respect to any financial products that the AI system provides advice on

e. Provision of a Financial Services Guide

f. Provision of a general advice warning

g. The AI-bot’s trainer and at least one Responsible Manager meeting the training requirements of RG 146

2. Provides personal financial product advice to a retail client
The obligations under the Corporations Act 2001 regarding:

a. The matters covered in items 1(a)-(e) above.

b. Provision of a Statement of Advice, and possibly a Product Disclosure Statement

c. Compliance with the Best Interests obligations (best interests duty, appropriateness requirement, conflicts priority rule, and more)

d. The AI-bot’s trainer and at least one Responsible Manager meeting the professional standards imposed by a bundle of laws.  For example, the human is likely to need to meet the requirements of a “relevant provider”,[11] which includes complying with the Code of Ethics, holding certain bachelor level qualifications, and being included on the financial adviser register

3. Suggests a credit contract to a consumer, or assists a consumer to apply for a credit contract (these are forms of “credit assistance”)
The obligations under the National Consumer Credit Protection Act 2009 regarding:

a. General conduct obligations

b. Provision of a Credit Guide, Credit Proposal and Credit Quote (if necessary)

c. Where the credit assistance relates to credit contracts secured by mortgages over residential property – meeting the best interests obligations

d. Meeting responsible lending obligations, including preparing a written assessment of suitability

e. Having an ACL that covers the provision of the credit activities with respect to any activities that the AI system performs

f. The AI-bot’s trainer and at least one Responsible Manager meeting minimum competency requirements[12]

g. The prohibitions on misleading or deceptive conduct and unconscionable conduct, and other obligations, under the ASIC Act 2001, and the Design and Distribution regime under the Corporations Act 2001

 

This table is far from exhaustive and, depending on the interest it generates, we may release more guidance on how other activities are captured, for example by AML/CTF obligations.
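To make one of these obligations concrete, below is a minimal sketch (in Python) of how a licensee might hard-wire a general advice warning (item 1(f) above) into an AI chatbot’s replies, so that the warning is appended by the surrounding code rather than left to the model.  The function names are hypothetical and the warning text is a paraphrase, not prescribed wording – check the disclosure requirements that actually apply to you.

# Illustrative sketch only: ensuring an AI chatbot's output to a retail
# client always carries a general advice warning (see item 1(f) above).
# `generate_reply` is a hypothetical stand-in for your AI system's API call,
# and the warning text is a paraphrase, not ASIC-prescribed wording.
GENERAL_ADVICE_WARNING = (
    "This is general advice only. It does not take into account your "
    "objectives, financial situation or needs. Consider whether it is "
    "appropriate for you, and read the relevant PDS, before acting."
)

def generate_reply(prompt: str) -> str:
    # Placeholder for the real call to your AI vendor's SDK or API.
    return f"[model's answer to: {prompt}]"

def reply_to_retail_client(prompt: str) -> str:
    # The warning is appended in code, so the model cannot "forget" it.
    return f"{generate_reply(prompt)}\n\n{GENERAL_ADVICE_WARNING}"

print(reply_to_retail_client("What is an exchange traded fund?"))

Appending the warning outside the model is a deliberate design choice: prompt instructions alone can be ignored or overridden, whereas a wrapper guarantees the disclosure appears on every response.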

Is the Australian Government legislating for, or regulating, AI specifically?

The Australian Federal Government’s $41.2 million commitment in the 2023-24 Budget to support the responsible deployment of AI in the national economy indicates that it has turned its mind to this issue.[13] [14]

Similarly, ASIC has said that as part of its priorities for the supervision of market intermediaries in 2022-23, “We are undertaking a thematic review of artificial intelligence/machine learning (AI/ML) practices and associated risks and controls among market intermediaries and buy-side firms, including the implementation of AI/ML guidance issued by the International Organization of Securities Commissions (IOSCO)”.[15] In a recent address, ASIC chair Joe Longo reiterated ASIC’s aims in the face of “rapidly and constantly evolving AI”. They are:

  • The safety and integrity of the financial system
  • Positive outcomes for consumers and investors.[16]

And, the Department of Industry, Science and Resources (DISR) released its interim response to consultations on its “Supporting responsible AI” discussion paper on 17 January 2024.  It concluded that the current laws and regulatory framework do not satisfactorily address AI risks, particularly the prevention of those risks before they occur.

It outlined the government’s intention to consider introducing mandatory obligations on the development or use of AI systems that present a high risk. Importantly, the report draws a distinction between high-risk and low-risk applications of AI, with higher obligations imposed on higher-risk applications. The report does not, however, define what constitutes a high-risk or low-risk generative AI system.

The interim report sets out five principles to guide the Government’s interim response.  These are:

  1. basing obligations on AI development and use on the level of risk posed by the AI
  2. balancing the need for innovation and competition with community interest considerations
  3. collaborating openly with experts and the public
  4. supporting global action on AI risks in line with the Bletchley Declaration[17]
  5. placing the needs of people and communities at the forefront of considerations.

The government has indicated its intention to ask the National AI Centre to create an AI Safety Standard to give practical guidance for industry to ensure AI systems being developed are safe and secure. It aims to work alongside industry to evaluate voluntary labelling and watermarking of AI-generated materials. Lastly, the DISR will establish an interim expert advisory group to further support the proposed AI guardrails.

All of the above is happening alongside existing regulatory reviews. The report indicates that submissions will be considered as part of related reforms, including the privacy law reforms, new laws regarding misinformation and disinformation, the review of the Online Safety Act 2021, prospective automated vehicle regulations, ongoing intellectual property reviews, work on competition and consumer laws impacted by digital platforms, a framework for generative AI in schools, and the Government’s cybersecurity strategy.

So, how do AFSLs and ACLs manage these regulatory risks?

As a licensee, you already have a risk management framework to help you comply with your general obligation to have adequate risk management systems in place.  We think it’s time to dust it off and identify two new risks:

  1. The risk of missing the opportunities that AI presents; and
  2. The regulatory risks associated with using AI.

Remember, most of your staff are already using AI.  So, you probably need to get onto this now.

Ways to control both risks include:

  1. Create a Policy. For starters, you should develop an AI policy for representatives.  It should tell them not to do things like putting personally identifiable information or sensitive information into a search engine or AI system (a minimal redaction sketch follows this list).  Take a look at the Government’s interim guidance for agencies on government use of generative Artificial Intelligence platforms for some more ideas.[18]
  2. Train representatives – on the policy, and on the law more broadly. The training arm of Holley Nethercote runs lots of half-day sessions on emerging regulatory risks and opportunities, including AI.  In late 2023, we trained over 100 licensees across multiple sessions, discussing reasonable controls to mitigate AI risks.  At the time of writing (mid-2024), we have also run (and are running) many similar in-house sessions for licensees.  We also offer a regulatory update service via our HN Hub, which includes legal commentary on the changes (it’s not just a news service).  I also personally recommend listening to podcasts and paid subscription services like Exponential View.
  3. Supervise. If you decide to use an AI system, think of monitoring and supervising AI systems, like you’re a parent:
    • When they’re young (0-4), you’re the caregiver. You feed them and change their nappies lots of times – close monitoring required!
    • When they’re pre-teen, you’re the cop. You set the rules.  As they approach teens, they’ll push back a bit, but you’ll still need to agree on minimum standards.
    • When they’re teenagers, you’re their coach. You stay involved, check in, review, and give feedback.
    • When they’re adults, you’re their consultant. You never really stop being a parent.  You need to check in regularly to see how they’re going.
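As promised above, here is a minimal sketch (in Python) of the kind of guardrail such a policy might point to: a redaction step that strips obvious personally identifiable information before a prompt leaves your environment.  The patterns below are illustrative only and far from a complete PII list – treat this as a starting point, not a compliance control in itself.

import re

# Illustrative patterns only - a real policy control would cover far more
# identifiers (names, addresses, account numbers) and be properly tested.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?61|0)[23478](?:[ -]?\d){8}\b"),  # AU-style numbers
    "TFN": re.compile(r"\b\d{3}[ -]\d{3}[ -]\d{3}\b"),             # tax file number-like
}

def redact(text: str) -> str:
    """Replace likely PII with placeholder tags before text leaves the firm."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Client Jane Citizen (jane@example.com, 0412 345 678) wants advice on..."
print(redact(prompt))
# -> Client Jane Citizen ([EMAIL REDACTED], [PHONE REDACTED]) wants advice on...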

Every analogy falls down eventually, and being a “parent” is no exception.  In terms of supervising a healthy, grown-up AI system, you need to have ongoing monthly reporting, measurement of error rates, evidence that staff are checking underlying assumptions, and a bunch of other things that exceed the scope of this article.  Initially, you need to engage lawyers.  We’ve been asked to review the outputs of AI bots, and it’s not a quick job.
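By way of illustration, the measurement of error rates can start as something as simple as a structured log of human reviews of AI outputs, reported on monthly.  The record structure and field names below are hypothetical – a minimal sketch of the idea, not a prescribed format.

from dataclasses import dataclass
from datetime import date

# Hypothetical review log entry; field names are illustrative only.
@dataclass
class ReviewRecord:
    reviewed_on: date
    output_id: str
    error_found: bool
    notes: str = ""

def monthly_error_rate(records: list[ReviewRecord], year: int, month: int) -> float:
    """Share of reviewed AI outputs in the given month that contained an error."""
    in_month = [r for r in records
                if r.reviewed_on.year == year and r.reviewed_on.month == month]
    if not in_month:
        return 0.0
    return sum(r.error_found for r in in_month) / len(in_month)

records = [
    ReviewRecord(date(2024, 6, 3), "chat-0112", False),
    ReviewRecord(date(2024, 6, 9), "chat-0187", True, "cited a PDS that does not exist"),
    ReviewRecord(date(2024, 6, 21), "chat-0240", False),
]
print(f"June 2024 error rate: {monthly_error_rate(records, 2024, 6):.0%}")  # 33%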

AI thought-leader, and previously Chief Business Officer for Google X, Mo Gawdat, says that people won’t lose their jobs to AI, people will lose their jobs to people who use AI.[19] So, what are you waiting for?

How can we help?

We can:

  1. Help licensees develop their risk management program from a regulatory risk perspective, with respect to AI opportunities and risks.
  2. Review licensees’ AI systems in light of regulatory obligations.
  3. Run in-house training on regulatory risks associated with AI, and how to manage them.
  4. Keep licensees up-to-date regarding regulatory changes via our HN Hub.

Author: Paul Derham (Managing Partner)

Would you like to know more?


 

Endnotes

[1] Generative AI: A quarter of Australia’s economy faces significant and imminent disruption | Deloitte Australia

[2] HTI The State of AI Governance in Australia – 31 May 2023.pdf | University of Technology Sydney (uts.edu.au)

[3] Supporting responsible AI: discussion paper – Consult hub (industry.gov.au)

[4] Generative AI and the future of work in Australia | McKinsey

[5] AI Will Transform the Global Economy. Let’s Make Sure It Benefits Humanity. (imf.org)

[6] ChatGPT falsely accused me of sexual harassment. Can we trust AI? (usatoday.com)

[7] There’s a similar event that happened closer to home, more recently: Victorian Mayor Brian Hood was wrongly named by ChatGPT as a guilty party who served prison time due to a bribery scandal. The small issue: Brian was the whistleblower in this case and was never charged. This instance of a “hallucination” (where AI generates incorrect or misleading results) constituted a considerable reputational risk to an individual whose profession depends on reputation.

[8] HTI The State of AI Governance in Australia – 31 May 2023.pdf | University of Technology Sydney (uts.edu.au)

[9] HTI The State of AI Governance in Australia – 31 May 2023.pdf | University of Technology Sydney (uts.edu.au) page 36.

[10] ASIC’s Regulatory Guide 255 Providing digital financial product advice to retail clients provides a thorough summary of what ASIC expects in terms of complying with Corporations Act obligations.

[11] Corporations Amendment (Professional Standards of Financial Advisers) Act 2017 (legislation.gov.au)

[12] For example, a responsible manager needs at least two years of relevant problem-free experience, and either a credit industry qualification to at least Certificate IV level, or other higher-level qualifications.  See RG 206 Credit licensing: Competence and training | ASIC.

[13] The allocation of funding is hardly impressive.  China’s spending on AI, for example, is expected to surpass $38 billion by 2027: To really grasp AI expectations, look to the trillions being invested | World Economic Forum (weforum.org)

[14] Investments to grow Australia’s critical technologies industries | Department of Industry, Science and Resources

[15] ASIC’s priorities for the supervision of market intermediaries in 2022–23 | ASIC

[16] https://architecture.digital.gov.au/guidance-generative-ai

[17] The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023 | Department of Industry Science and Resources

[18] Interim guidance for agencies on government use of generative Artificial Intelligence platforms | Digital Transformation Agency (architecture.digital.gov.au/guidance-generative-ai)

[19] Mo Gawdat podcast: EMERGENCY EPISODE: Ex-Google Officer Finally Speaks Out On The Dangers Of AI! – Mo Gawdat | E252 – YouTube.


This article was published in the Financial Standard – AI in compliance.