Prevalence of AI use in the workplace
With the rapid growth in the use of artificial intelligence (AI) at work, a key question arises: who is responsible when AI makes mistakes?
More than a third of Australians say they use AI at work several times a week – eight per cent use it many times every day – and that proportion is expected to continue to grow rapidly.
A 9News survey found the majority use ChatGPT, Google Gemini and other AI tools for minor tasks like writing emails, but a third of AI users said they use it to produce reports, presentations, translations, research and answers to questions from customers.
As AI is a relatively new technological development, employees must ensure they engage with these systems safely, in accordance with relevant legislation and with their employer's policies.
“I don’t know if my employer has an AI policy”
One in ten employees said their boss doesn’t know they use AI, and a staggering two thirds of users do not know if their employer has a policy regarding AI use. Nonetheless, a fifth of those surveyed admitted to using AI frequently in their daily work.
There is widespread acknowledgment that AI is becoming increasingly part of our working lives. Almost three quarters of the surveyed employees supported the adoption of tougher, stricter rules and regulations.
Almost a third of the surveyed employees reported that their activity at work is monitored by AI. (Please see Thousands of Aussie workers could be risking termination with this AI policy blunder, 9News, 10 February 2026.)
Mistakes, embellishments, hallucinations and confidentiality breaches
But what is the law if AI gets it wrong? What if the AI “hallucinates” and inserts mistakes or false information into medical reports, accounting processes, financial or legal advice?
What happens if AI exposes confidential information? (Please see Artificial Insecurity: how AI tools compromise confidentiality, Access Now, 5 February 2026.)
An employee whose work includes false information generated by AI is likely to be held responsible for not checking that everything was correct before distributing it. But is the employer or AI owner still responsible?
The law has yet to catch up with the enormous growth of AI use in business dealings with customers, or to answer definitively the question of whether it is the person who used AI, or the entity that developed it that is responsible when the AI makes a mistake.
Employees must comply with AI usage policies
Some workplace professionals have stated that, until legislative amendments or revised policies are made, breaches involving the use of AI are likely to follow the typical process for a breach of any other workplace policy.
Simply put, if an employee uses AI in a way that breaches their company’s policy, they could be disciplined, or ultimately terminated.
Legal liability in such events is likely to be determined on a case-by-case basis, depending on the extensiveness of AI’s usage, the negligence in not cross-checking information and the consequences of the mistake.
Risk of reputational damage due to careless use of AI
Deloitte, one of the world's largest professional services firms, fell victim to its own negligence when it inadvertently submitted a report containing material errors and AI "hallucinations". This mistake cost the company almost $100,000 and considerable public disrepute.
In this example, liability for AI's mistakes seemingly fell on the user. Employees ought to ensure that any information generated by AI is checked extensively for errors. (Please see The AI workplace stuff-ups from 2025, Financial Review, 10 December 2025.)
Who gets sued if AI gets it wrong?
In Australia, our laws allow individuals and corporations to bring action against a "legal person". Under the Acts Interpretation Act 1901, a "person" includes individuals, bodies politic and corporate entities.
Under that framework, AI itself cannot be sued. Instead, the company with proprietary ownership of the software would be the likely target.
However, determining who is liable for the causes of action AI creates (ie the grounds on which a person or company can be sued) largely rests on how the system was used, the involvement of the user in providing prompts and amendments, and any negligence involved in the submission of materials.
Copyright owners taking legal action for copyright infringement
Importantly, AI companies have been and continue to be sued overseas. Recently, Yomiuri Shimbun, a major Japanese newspaper, began a suit against AI company Perplexity, alleging that Perplexity's AI system "scraped" the newspaper's articles and reproduced the information to answer questions from its users. (Please see Lawsuits could spell trouble for AI, UTS Newsroom, 15 August 2025.)
In intellectual property law, although an AI system may generate images, texts or artworks, the individual responsible for the prompt itself can be the owner, provided they spend sufficient time, effort and specificity in its creation.
This application may traverse the legal industry, providing another difficult hurdle in the allocation of liability.
While the area is still developing and liability for AI use is still being attributed, the global view seems to be that, whatever developmental stage the AI software is in, the person or corporation that owns the system is ultimately responsible for its creations, and for any breaches of the law contained in them, unless there has been sufficient connected and intervening conduct from the AI user.
Law professor Robayet Ferdous Syed at Monash University has pointed out that it could be argued AI should not be held responsible for its mistakes the same way a human or “legal person” can, because it is not a conscious being.
This interpretation aligns largely with the current accepted view. (Please see So sue me: Who should be held liable when AI makes mistakes? – Monash University, 29 March 2023.)
Vicarious liability and AI use
It is important to understand how this line of thinking interacts with the principle of vicarious liability. When applied, vicarious liability renders employers, corporations or other controlling entities liable for causes of action created by one of their employees or agents.
On the current interpretation, while the employer or corporation may face negative consequences for the improper use of AI by the employee or agent, those employees or agents remain at risk of discipline or termination.
As the legislative framework adjusts to the new wave of technologies, it is critical to note that there will inevitably be difficulties in extending liability from AI users to the creator of the platform.
Staying informed and avoiding confidentiality breaches
If you wish to use AI at work, you should first consult your employer’s policies for any mention of AI or its usage. The better informed you are, the more likely you are to use AI in ways that do not breach your obligations and thus avoid the risk of disciplinary action or termination.
When using AI systems for work-based tasks, you must be careful not to provide any information that could be private or confidential. It is your responsibility to ensure that uploaded materials do not contain any private contents.
To ensure that use of AI does not risk exposing private information, you should ask yourself if there would be any risk to your employer or clients if the chosen information were to be made publicly available. Operating with this mentality assists in safe and proper use of AI.
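For technically minded readers, the precaution described above can even be partly automated. The following is a minimal, hypothetical Python sketch that strips some obviously sensitive patterns (email addresses and phone numbers) from text before it is pasted into an AI tool. The patterns and placeholder labels are illustrative assumptions only, not a complete safeguard, and no such script replaces checking your employer's policy.

```python
import re

# Hypothetical helper: redact obvious confidential patterns from text
# before it is submitted to an external AI tool. The patterns below are
# illustrative assumptions, not an exhaustive privacy filter.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a [REDACTED-<label>] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text
```

For example, `redact("Contact jane@example.com on 0412 345 678")` would leave only placeholders in place of the email address and phone number. A real workplace control would need to cover far more categories (names, client identifiers, financial details) and would still require human review.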
Specific risks for coders and technology workers
For workers in the technology and coding space, AI usage presents justifiable concerns that mistakes causing detriment to their employers may result in termination.
A coder with little AI experience decided to use AI to write code to meet a production deadline.
When that code failed and it was revealed that AI was used in its creation, the coder was swiftly terminated, even though it was someone in upper management who merged the coder's passages of code together. (Please see Techie 'Promptly Fired' After AI-Generated Code Causes Production Issue, NDTV, 25 February 2026.)
Risks in AI use for legal practitioners
Similarly, there are documented instances where professionals have neglected to verify the information AI provided to them, leading to dire consequences.
A Victorian legal practitioner faced professional sanctions for using AI to generate a list of case law. While there was no issue with the act of generating that list, the practitioner failed to ensure the information it contained was accurate.
Consequently, he was referred to the Victorian Legal Services Board and had his practising certificate varied, meaning he was no longer entitled to practise as a principal lawyer or handle trust money for two years. (Please see Lawyer caught using AI-generated false citations in court case penalised in Australian first, The Guardian, 3 September 2025.)
Among other things, the Federal Circuit and Family Court of Australia has indicated that if AI is to be used, it is best practice to establish and follow strict training and supervisory guidelines for the employees using such systems.
A South Australian paralegal used AI software to generate submissions which were ultimately used in proceedings. Those submissions contained numerous errors and once revealed, caused detriment to the instructing solicitor, and to counsel.
The work was not checked before being submitted to the court, and the paralegal was terminated for her improper use of AI. (Please see Lawyers to face regulators after AI was used to prepare legal document, ABC News, 5 December 2025.)
AI use has implications for both employers and employees
Employees must ensure that before AI systems are used in the workplace, they consult their employers’ policies concerning AI use, ensure that any information inputs are not private or confidential and, most importantly, ensure that any material generated by AI is checked for errors before submission.
Failure to do so carries a high risk of disciplinary action, including termination, if AI mistakes cause detriment to their employer.
Employers must ensure that they have AI policies which are kept up to date and communicated clearly to all employees.
Further reading on artificial intelligence
Can I claim copyright if I write a novel or research paper using generative AI?
Predicting recidivism – the questionable role of algorithms in the criminal justice system
Driverless cars are coming – but whose fault will it be when they crash?
Guilty or not guilty – could computers replace judges in a court of law?
Inventiveness of ChatGPT poses risk of defamation
Algorithms, artificial intelligence, automated systems and the law