Artificial Intelligence and Financial Services: What 2025 taught us

Whether you describe it as the ‘AI revolution’,[1] ‘AI bubble’,[2] or an ‘industry reshape’,[3] 2025 has seen artificial intelligence affect industries across Australia, including financial services. Specifically, we have seen questions of best use, appropriate reliance on AI-driven models, evolving risk management frameworks, data breaches, scams and increased regulatory scrutiny. In this article, we recap some key AI moments in 2025 and take a deep dive into the intersection between AI, data breaches and scams, including steps you can take to protect your business and your clients.

1. Key messages from regulators and government

It seems that everyone was talking about AI during 2025. We have set out some of the key messages from ASIC and the Government that relate to financial services to help you cut through the noise.


February: Australia joined data protection authorities from South Korea, the Republic of Ireland, France and the United Kingdom in signing a joint declaration reaffirming their commitment to establishing data governance that fosters innovative and privacy-protective AI.[4]

Also in February, Parliament passed the Scams Prevention Framework Bill 2025, which introduced new obligations and rules for “designated” sectors and seeks to reduce the harms caused by scams. The Explanatory Memorandum noted that scammers have become “increasingly sophisticated” in their efforts, with newer technologies at their disposal, such as chatbots and artificial intelligence, allowing them to impersonate legitimate entities with far greater accuracy and to deploy communications to a wide audience.[5]

March: The Government released the APS Data, Digital and Cyber Workforce Plan 2025-30 “as a call to action to attract, develop and retain data, digital and cyber talent in a unified and strategic way across the APS”.[6] The Plan noted critical cybersecurity skills shortages across more than 50% of agencies, driven in part by artificial intelligence and other emerging technologies that are transforming roles and reshaping skill requirements.[7]

April: ASIC Commissioner Kate O’Rourke stated that ASIC is “exploring the use of artificial intelligence to assess the high volumes of reports of misconduct and reportable situations that we receive. So, we’re really looking for financial harms. Then we can take regulatory or enforcement action – and scams is a good example.”[8]

May: ASIC Commissioner Alan Kirkland addressed the Australian Financial Industry Association conference, warning that the misuse of AI in the financial services industry opens up “entirely new vectors of potential harm to consumers, and to market integrity.”[9] He noted that ASIC was conducting a review into the use of AI in the banking, credit, insurance and financial advice sectors. On the topic of AI regulation, Kirkland emphasised that current legislation already allows ASIC to respond to certain misuses of AI, citing the examples of misleading representations and inappropriate product distribution.

June: Changes to the Privacy Act 1988, which introduced a statutory tort for serious invasions of privacy, commenced.[10] The tort provides a standalone cause of action for individuals to seek redress for serious privacy breaches. The reform has significant implications for businesses that use AI tools to collect, use or disclose personal information, as individuals can seek redress if their privacy is recklessly or intentionally breached through the use of an AI tool.

July: ASIC Chairman Joe Longo, speaking at the Australian Banking Association Conference, emphasised that Australia should avoid rushing into AI-specific regulation, noting that targeted rules are often highly complex, difficult for entities to comply with and challenging for regulators to enforce. Notably, Mr Longo stated that existing laws, particularly those governing directors’ and officers’ duties, already provide a solid foundation for AI oversight, and reinforced that the benchmark for responsible AI innovation is “to keep the customer front and centre.”[11]

August: Federal Treasurer the Hon Jim Chalmers MP published an opinion piece outlining the Government’s AI stance, stating “our intention is to regulate as much as necessary to protect Australians, but as little as possible to encourage innovation.”[12] The Treasurer has separately said “our approach is a sensible, middle path”.[13] In the same month, the Hon Sussan Ley MP said “AI is important. We should embrace the technology with respect to AI, but we have to get the balance right so we can power the economy. And we have to protect people and content creators.”[14]

September: Minister for Industry and Innovation and Minister for Science the Hon Tim Ayres MP spoke to the Australian Council of Trade Unions on “Seizing the Opportunities of AI while Protecting the Fair Go”. The Minister announced that the Government “will deliver a National AI Capability Plan by the end of the year.”[15]

October: ASIC’s Annual Report for 2025 noted that, while AI use was growing, many market intermediaries lacked documented AI-specific governance arrangements, creating gaps in AI risk assessments.[16] The report also discussed ASIC piloting its own use of AI to improve the regulator’s efficiency. In doing so, the regulator has been better able to identify patterns of misconduct and to understand AI, positioning it to regulate AI’s use in the financial services sector more effectively.[17]

November: The OAIC’s Annual Report noted that a major area of focus for the regulator was ensuring that emerging technologies, including AI, align with “community expectations and regulatory requirements and targeting current and emerging harms effectively and proportionately while continuing to proactively guide compliance in a dynamic digital environment”.[18]

December: The Federal Government released Australia’s National AI Plan, confirming that AI will not be governed by a standalone act or sweeping new reforms. Instead, the Government intends to update and adapt existing legislation to ensure appropriate AI regulation. The Government said that this reflects a “proportionate, targeted and responsive” approach to managing AI risks and opportunities.[19]

While this list is far from exhaustive, it reflects the evolving conversation about the role, purpose and regulation of AI.

2. AI, data breaches and scams

The National AI Plan suggests that sweeping AI-specific legislation will not be introduced any time soon. However, as noted by the various stakeholders throughout 2025, there are numerous obligations under existing laws that intersect with licensees’ usage of AI. Indeed, the challenges raised by the use of AI are complex and inherently linked with other considerations, such as data governance, cybersecurity, privacy and ethics practices. We have considered developments during 2025 in relation to AI-related data breaches and scams below.

a) Protecting against data breaches

AI’s growing role can and should be considered when taking steps to protect against potential data breaches. In its May 2025 report, the OAIC highlighted that the number of data breaches reported in the July to December 2024 period represented a 25% increase on reports in 2023.[20] This matters because AI systems are being used by businesses and across supply chains in ways that are often not fully visible,[21] and to collect, use and disclose increasing amounts of personal and sensitive data. An individual’s valuable or essential data can be accessed, extracted, altered or restricted by unauthorised third parties due to, for example, inadequate cybersecurity measures in an AI system. Researchers have highlighted that third parties routinely seek to hack or compromise the integrity of an AI system’s decision-making process.[22]

ASIC’s expectations

ASIC’s strategic priorities for 2025/26 include “pursuing continuous improvement in artificial intelligence (AI) governance and cyber security”.[23]

ASIC’s enforcement actions against licensees offer valuable insights into the regulator’s expectations when it comes to protecting your firm (and your clients) from data breaches. For example, ASIC has pursued financial service providers for cybersecurity failures. In March 2025, the regulator brought an action against FIIG,[24] alleging that the licensee failed to take appropriate steps to protect itself and its clients from cybersecurity risks over a four-year period. These failures enabled a hacker to enter the FIIG network, resulting in the theft of confidential information belonging to 18,000 clients.[25] ASIC also alleged that FIIG did not investigate or respond to the hack until a week after being notified of it.

Similarly, in July 2025, ASIC brought enforcement action against Fortnum Private Wealth, alleging that the financial advice business failed to put in place adequate cybersecurity systems. Although Fortnum had introduced a cybersecurity policy in April 2021, ASIC argued that it was inadequate, pointing to a cyber attack during the policy period that compromised the personal information of 9,000 clients.[26]

Such regulatory action highlights important examples of ASIC’s expectations for preventing and managing data breaches. The lessons from these cases can and should be extended to the use of AI technologies by licensees.

ASIC has highlighted its concerns about breaches of data privacy and security in its “Beware the gap: Governance arrangements in the face of AI innovation” report released in October 2024. ASIC specifically noted that:[27]

AI models may contain or reproduce confidential or sensitive information without the prior and informed consent of impacted individuals. AI models can also be vulnerable to cyber attacks and data leaks.

It seems only a matter of time before we see ASIC take regulatory action against a licensee for data breaches or governance failures associated with AI usage.

b) The risk of the “scam”

It is also essential that licensees are aware of, and prepared for, the risks that AI poses to both their business and their customers as scam activity continues to increase.

In its Annual Cyber Threat Report 2024-2025, the Australian Signals Directorate warned:

“The prevalence of artificial intelligence (AI) almost certainly enables malicious cyber actors to execute attacks on a larger scale and at a faster rate. The potential opportunities open to malicious cyber actors continue to grow in line with Australia’s increasing uptake of – and reliance on – internet-connected technology.”[28]

Scams also continue to make up a large number of AFCA complaints, with AFCA receiving 5,977 scam complaints in 2024-25, a figure that captures only a proportion of the scams actually experienced.[29] A central component of AFCA’s dispute resolution framework is its fairness jurisdiction: AFCA evaluates not only a financial firm’s compliance with its legal obligations, but also whether the firm acted reasonably and fairly in the circumstances. As a result, firms may be found liable even where customers appear to have voluntarily provided information, particularly if the firm failed to act on warning signs or missed opportunities to mitigate potential losses.

ASIC’s expectations

ASIC has shown that it is not afraid to use its powers when licensees have not adequately dealt with scammers. For example, over a four-year period, HSBC received around 950 reports of unauthorised transactions, amounting to collective customer losses of over $23 million, after scammers posed as HSBC staff to access clients’ accounts. In December 2024, ASIC alleged that HSBC breached its obligation to provide services efficiently, honestly and fairly by having inadequate controls to protect its customers from scams. Given the extent of the harm to customers, ASIC has called these failings “widespread and systemic”.[30]

In its August 2025 Enforcement and Regulatory Update, ASIC also revealed that more than 14,000 investment scam and phishing websites had been taken down since its takedown capability began two years earlier, with ASIC continuing to remove an average of 130 malicious sites every week.

Notably, ASIC observed:[31]

  • Fake trading bots, with scammers claiming their bots use AI to generate passive income and unrealistic returns.
  • Scam website templates, including fake corporate documents and chatbot plugins, which help scammers launch convincing copy-cat sites quickly.
  • Third-party content, such as legitimate-looking charts and embedded chatbots, used to make fake sites appear credible.
  • Fake news articles, including AI-generated fakes of celebrities and prominent Australians, used to collect contact information and pitch scams.
  • Cloaking, where website content changes based on the target audience’s location and device type.

ASIC is yet to release AI-specific scam guidance for licensees. However, its January 2025 letter to superannuation trustees regarding weak scam and fraud practices is instructive. ASIC called out scammers employing “increasingly sophisticated tactics to manipulate members”, noting the importance of licensees taking prompt and proactive steps to monitor and address scam activity.[32]

For further information on meeting ASIC and AFCA expectations in relation to scams, you can read Holley Nethercote’s recently-published article “Scam lessons from AFCA: Reducing liability for licensees”.

3. Reducing AI-related scam and data breach risks

Many financial services businesses, and their staff, are adopting AI and taking advantage of the many benefits that AI can produce. However, how can licensees balance the benefits of AI with the potential increased risk of scams or data breaches? We have set out some practical tips below.

a) Comprehensive policies and procedures

Review the AI policies that govern how AI is utilised within your organisation, together with the other policies and procedures that may intersect with them, such as your privacy, cybersecurity and incident response policies and procedures.

b) Preparation and training

As AI continues to be used by nefarious actors to facilitate sophisticated scams and data breaches, consider how your business prepares and educates its people. Licensees should consider the following:

  • Training staff and contractors on the risks associated with AI, and how to manage them.
  • Monitoring staff and contractor usage of AI.
  • Establishing clear accountability structures for the management of client information and AI governance.
  • Having a robust incident response plan that facilitates a thorough response to a cyber incident.
  • Establishing mechanisms to assess offshore service providers’ controls for safeguarding confidential client information.

c) Strengthen fraud detection and monitoring

ASIC expects that you will engage IT security experts to ensure that your cybersecurity systems, processes and procedures are sufficiently robust. This may include employing people with the requisite IT security skills, knowledge and experience, or outsourcing to a third party that has them.

Invest in advanced technology that flags suspicious transactions early. Real-time monitoring, two-factor or multi-factor authentication, and sophisticated anomaly detection systems are now considered baseline requirements. Behavioural analytics can help identify emerging scam patterns, and businesses must ensure their systems are robust enough to intervene before funds leave customer accounts. Listen to the advice from your IT security expert.

4. Where to now?

As 2025 draws to a close, one trend stands out above all others: the regulatory landscape for artificial intelligence in financial services is changing. In 2026, we can expect a wave of new and strengthened legal frameworks aimed at governing how AI is deployed, monitored, and controlled across the sector. From stricter compliance obligations under emerging global standards to enhanced accountability for algorithmic decision-making, regulators will demand greater transparency, fairness and risk management. For financial institutions, this means moving beyond experimentation and into a phase of robust governance, embedding ethical AI principles, implementing rigorous audit trails and ensuring data integrity at every stage. Those who act now to align with these evolving requirements will not only mitigate risk but also gain a competitive edge in an increasingly regulated environment. The message is clear: 2026 will not just be another year of innovation, it will be the year of accountability.

Require further assistance?


Author: Tali Borowick (Lawyer)

[1] https://ministers.treasury.gov.au/ministers/jim-chalmers-2022/articles/opinion-piece-australia-shouldnt-fear-ai-revolution-we-can

[2] https://www.abc.net.au/news/2025-10-20/ai-crypto-bubbles-speculative-mania/105884508

[3] https://www.forbes.com/sites/brentgleeson/2024/12/03/how-ai-is-reshaping-the-future-of-work-across-industries/

[4] https://www.oaic.gov.au/news/media-centre/joint-statement-on-building-trustworthy-data-governance-frameworks-to-encourage-development-of-innovative-and-privacy-protective-ai

[5] Scams Prevention Framework Bill 2025, Revised Explanatory Memorandum, Parliament of Australia, page 138.

[6] Workforce plan | Data and Digital

[7] In-demand skills | Data and Digital

[8] https://www.asic.gov.au/about-asic/news-centre/speeches/leveraging-data-for-consumer-protection-and-to-support-australian-businesses/

[9] https://www.asic.gov.au/about-asic/news-centre/speeches/asic-s-priorities-in-a-changing-regulatory-environment/

[10] https://www.oaic.gov.au/privacy/your-privacy-rights/more-privacy-rights/statutory-tort-for-serious-invasions-of-privacy

[11] https://www.asic.gov.au/about-asic/news-centre/speeches/ai-a-blueprint-for-better-banking/

[12] https://ministers.treasury.gov.au/ministers/jim-chalmers-2022/articles/opinion-piece-australia-shouldnt-fear-ai-revolution-we-can

[13] https://www.abc.net.au/news/2025-08-07/artificial-intelligence-jim-chalmers-economics-reform-roundtable/105618958

[14] https://www.abc.net.au/news/2025-08-06/federal-politics-august-6/105616964

[15] https://www.minister.industry.gov.au/ministers/timayres/transcripts/speech-actu-symposium-seizing-opportunities-ai-while-protecting-fair-go

[16] https://download.asic.gov.au/media/llbhx4al/asic-2025-annual-report-full-report.pdf

[17] https://download.asic.gov.au/media/llbhx4al/asic-2025-annual-report-full-report.pdf

[18] Annual Report 2024-25 | OAIC, page 3.

[19] https://www.industry.gov.au/sites/default/files/2025-12/national-ai-plan.pdf

[20] https://hnhub.com.au/dashboard/regulatory-updates/oaic-statistics-show-record-year-for-data-breaches/

[21] https://www.oaic.gov.au/privacy/notifiable-data-breaches/notifiable-data-breaches-publications/notifiable-data-breaches-report-july-to-december-2024

[22] https://www.cyber.gov.au/business-government/secure-design/artificial-intelligence/ai-data-security

[23] https://download.asic.gov.au/media/xbtjrb4m/asic-corporate-plan-2025-26-published-27-august-2025.pdf

[24] https://download.asic.gov.au/media/0ubnrmym/25-035mr-asic-v-fiig-securities-limited-concise-statement-sealed.pdf

[25] https://www.asic.gov.au/about-asic/news-centre/find-a-media-release/2025-releases/25-035mr-asic-sues-fiig-securities-for-systemic-and-prolonged-cybersecurity-failures/

[26] https://www.asic.gov.au/about-asic/news-centre/find-a-media-release/2025-releases/25-143mr-asic-sues-fortnum-private-wealth-for-allegedly-failing-to-adequately-manage-cybersecurity-risks/

[27] REP 798 Beware the gap: Governance arrangements in the face of AI innovation | ASIC, page 9.

[28] https://www.cyber.gov.au/about-us/view-all-content/reports-and-statistics/annual-cyber-threat-report-2024-2025

[29] https://www.afca.org.au/annual-review-overview-of-complaints

[30] https://www.asic.gov.au/about-asic/news-centre/find-a-media-release/2024-releases/24-280mr-asic-sues-hsbc-australia-alleging-failures-to-adequately-protect-customers-from-scams/

[31] https://www.asic.gov.au/about-asic/news-centre/find-a-media-release/2025-releases/25-178mr-asic-s-moneysmart-urges-consumers-to-be-on-the-lookout-for-scams-after-25-per-cent-jump-in-fake-celebrity-finance-endorsements/

[32] ASIC calls out superannuation trustees for weak scam and fraud practices | ASIC