Five Takes News
Michael • © 2026

news
Published on Wednesday, April 29, 2026 at 08:08 AM
AI at Work Spurs New Push to Curb Bosses

Who Gets Watched First

Australia risks repeating the mistakes it made with social media unless it moves quickly to regulate the spread of artificial intelligence in the workplace, according to a new report produced by the John Curtin Research Centre and backed by the SDA, Australia's largest retail and fast-food union. The report warns unchecked AI could intensify worker surveillance, unsafe workloads and job insecurity. It calls for a national AI taskforce, a review of the Fair Work Act to address AI-related workplace risks, mandatory human oversight of AI at work, and an AI expert advisory panel within the Fair Work Commission to help assess AI-related disputes and ensure existing workplace protections continue to apply.

The researchers said employers must consult with workers and unions before AI is deployed, and that there should be universal access to AI education and skills. That demand lands in a workplace landscape where the people doing the labour are expected to absorb the risk while decisions are made above them, by employers, regulators and policy forums.

"AI is so much more powerful than social media," co-author Dominic Meagher said. "We do not have the luxury of getting it wrong this time." Dr Meagher said the report was not anti-innovation, but warned there needed to be clear regulations laid out in a national strategy, with workers at the centre.

The Machinery of Control

Dr Meagher said, "Just because AI makes a decision, it doesn't mean that it's an excuse for the company to sidestep their obligations [to workers' rights]." He added, "Companies, where they are working with their workforce, where it's actually integrating it in their workflow, those companies are able to turn AI adoption into more profit," and, "Australia's spent a long time building a pretty fair place and we shouldn't just sidestep it because new technology comes along." Of the technology itself, he said, "In a sense, it's verging on discovering alien life, but we've discovered it in our own computers," and, "AI is going to be the biggest change of all and we really need to make sure it lifts up everyone."

Workplace Relations Minister Amanda Rishworth told the AFR Workforce Summit on Tuesday that the Albanese government had conducted a workforce "gap analysis" into the effects of AI on jobs. Preliminary results cited by the federal government indicate AI has slowed growth in some occupations such as filing clerks and keyboard operators, but the mix of jobs in the economy has not changed faster than usual. The results suggest employment outcomes for young tertiary graduates have been positive, countering fears they would be early casualties of AI.

The government is also developing the capability to monitor the impacts of AI by analysing changes in the labour market since ChatGPT's launch in November 2022, with a focus on entry-level jobs and workforce composition. So the apparatus is already building its own dashboard, measuring the damage after the fact while the people at the bottom are told to wait for the data.

What They Call 'Common Understanding'

Ms Rishworth said a forum made up of government, employers and unions would meet for the first time today to examine the "key themes" in the adoption of AI in workplaces: trust, capability, transparency, safety and productivity. She said, "These themes will shape our discussions on how we can build common understanding, and translate these themes into agreed outcomes."

Workplace relations and safety lawyer Shannon Chapman said there was no national, overarching piece of legislation that deals with AI in the workplace. She said that if someone asked for advice about implementing biometric data scanners in the workplace, "that's not necessarily a quick or easy answer. It will be jurisdiction specific." She said, "It will depend on the type of data that's being gathered, how it's going to be stored, how it might be able to be used … it's a complex legal framework."

She said several federal laws also come into play, including anti-discrimination law, human rights law and the Fair Work Act. Ms Chapman said consistent workplace surveillance laws across states and territories would be helpful, but overarching legislation could also add another layer of legal complexity. New AI-related employee rights would not be simple, she said, and could complicate the legal landscape and create practical compliance challenges for employers.

She said, "For example, around job security or restrictions on using AI to monitor and track work, then there would be a question there as to how that interacts with existing rights and entitlements and also employer compliance obligations." Ms Chapman added, "It's really critical from my perspective as someone advising clients to have really fundamental underpinning policies and procedures and terms in your contract of employment that deal with use of AI and what is and is not appropriate." Her checklist for employers: "Do you have a policy about AI use? What does it say? Does it clearly tell your employees what the consequences are if they breach that? Have you trained your employees on it?"

Surveillance by Another Name

Notion Digital Forensics managing director Matt O'Kane said most employers were already monitoring their staff to some extent for cybersecurity or cyber insurance purposes. "Microsoft Office 365 might be monitoring actions that you might do," he said. "In general, I see them used after an incident, which most Australians would think that's reasonable: if something's happened in the workplace, we need to look into it, no problem."

He said more "intrusive" international tools were being adopted in Australia that could monitor things like on-screen activity and keystrokes. He said there have also been reported incidents through Fair Work where a company has gone "over the line", turning staff laptops into covert listening devices while they worked from home.

He said, "There's a huge trend with technology vendors to introduce AI, and it's easy to see why — we've all tried it, we understand that it can boost our productivity. But when they introduce AI, we run a risk. Would we want this to happen if this was managed by a person, would we want a person to monitor this?" He also said, "So we want to be careful that we're not importing technology from overseas where there's different workplace culture, workplace expectations, and bringing it to Australia … without really thinking through [if it's] a reasonable deployment."

He said, "There's a limit to human monitoring but there's typically no limits to AI monitoring," and, "We need a safe workplace, but we want a friendly workplace, a trustworthy workplace." He also said, "Just because it's AI, it doesn't remove your personal responsibility. You need to test that AI to make sure it operates in your name in a way that you're happy with as an employer."

The report, the ministerial forum and the legal advice all point to the same hierarchy: workers are expected to live under systems of monitoring, while employers, government and consultants debate how to manage the fallout. The only concrete protections named in the source are consultation, oversight, legal review and workplace policies — all framed inside the existing machinery that already gives bosses the power to deploy the tools in the first place.
