The Australian Government has released its National AI Plan, setting out how artificial intelligence will be managed, monitored and supported across the economy. Rather than introducing new, standalone AI legislation, the Government has confirmed it will regulate AI through existing legal frameworks, supported by a new national advisory body.

For security providers — particularly those involved in electronic security, surveillance systems and monitoring services — the Plan provides important context about how AI-enabled technologies are likely to be governed in the years ahead.

This article outlines what the National AI Plan says, what it does not say, and why it matters to Australia’s private security industry.

What is the National AI Plan?

The National AI Plan establishes the Australian Government’s approach to supporting innovation in artificial intelligence while managing potential risks. Central to the Plan is the creation of an AI Safety Institute, which will:

  • Monitor developments in AI technologies
  • Advise government and regulators on emerging risks
  • Identify gaps or pressures in existing laws
  • Develop guidance for industry and regulators

Importantly, the Government has rejected the introduction of mandatory, sector-wide AI “guardrails” at this stage. Instead, it has confirmed that existing laws — including privacy, discrimination, consumer protection and workplace laws — will continue to apply to AI systems.

This signals an approach focused on interpretation and enforcement, rather than wholesale regulatory reform.

Why this matters to the security industry

AI-enabled technologies are already widely used across the security sector, often embedded within systems that may not be explicitly labelled as “AI”. Examples include:

  • Video analytics used in CCTV systems
  • Automated alerting and anomaly detection
  • Facial recognition or behavioural analysis tools
  • Predictive or pattern-based monitoring software

As adoption increases, so does scrutiny — not just of the technology itself, but of how and where it is deployed, and who is responsible when issues arise.

The National AI Plan confirms that AI in security is not operating in a regulatory vacuum. Instead, providers must continue to comply with existing obligations, even as the technology evolves.

Regulation through existing laws: what does it mean?

Rather than creating new AI-specific legislation, the Government has confirmed that current laws will be applied to AI use, including:

  • Privacy and data protection laws
  • Surveillance devices legislation (state-based)
  • Anti-discrimination and equal opportunity laws
  • Workplace and employment regulations
  • Consumer and contract law principles

For security providers, this reinforces an important principle:

Using AI does not remove or reduce legal responsibility – it may increase it.

How AI systems collect data, generate alerts, or influence decision-making will be assessed using the same legal standards that already apply to security operations.

The role of the AI Safety Institute

The new AI Safety Institute will not be a regulator. Instead, it will operate as an expert advisory body, supporting government agencies and helping interpret how AI interacts with existing legal frameworks.

For industry, this suggests that:

  • Guidance and expectations may evolve over time
  • Regulatory focus may sharpen on higher-risk use cases
  • Certain applications – such as surveillance and biometric systems – may attract closer attention

While the Institute will not issue penalties, its work is likely to influence how regulators approach compliance and enforcement.

What the Plan does not do

It is equally important to understand what the National AI Plan does not introduce:

  • No immediate licensing or registration requirements for AI systems
  • No blanket ban on AI surveillance technologies
  • No mandatory national code for security providers
  • No override of state-based surveillance or licensing laws

For security providers, this means there is no sudden compliance cliff, but there is a clear expectation of responsible, transparent and well-governed use.

A practical takeaway for security providers

At this early stage, the National AI Plan is best understood as a directional signal, not a rulebook. It indicates that:

  • AI use will increasingly be examined through a risk lens
  • Documentation, oversight and governance will matter
  • Providers should understand how AI functions within their systems
  • Responsibility will not be outsourced to technology vendors

For many providers, this may begin with simple but important questions:

  • Where is AI being used in our operations?
  • What decisions does it influence?
  • Who oversees its use?
  • How would its operation be explained to a client or regulator if required?

Sources and further information

This article is based on publicly available information from:

  • Australian Government – Department of Industry, Science and Resources (DISR), National AI Plan and related announcements.
  • ABC News reporting on the release of Australia’s National AI Plan and the establishment of the AI Safety Institute.

Members wishing to review the Government’s material directly can access it here:

https://www.industry.gov.au/publications/australias-national-ai-plan

SPAAL will be publishing four more articles on this topic over the coming weeks.

Please note: This article is general information only and does not constitute legal advice. Members should seek professional advice specific to their circumstances.