Australia is changing the way it regulates the internet – and no one is paying attention

When we’re scrolling online, most of us don’t give much thought to what’s going on behind the scenes – who makes the decisions about what content we can or can’t see.

That decision is often in the hands of companies: Facebook, TikTok, and most major social media platforms have rules about what material they accept, but their application can be inconsistent and less than transparent.

In recent years, the federal government has also passed a series of often controversial laws giving it more control over what’s online.

There’s the new Online Safety Act, for example, which was quickly passed in the middle of last year.

Among other powers, this forces the technology industry – which includes not only social media, but also messaging services like SMS, internet service providers and even the company behind your modem – to develop new codes that will regulate “harmful online content”.

Written by industry groups, these codes will have a lot to say about how our technology is governed, but some worry they could have unintended consequences, not least because they borrow from an outdated classification system.

What are the codes?

After the Online Safety Act came into effect, the eSafety Commissioner called on the industry to develop draft codes to regulate “harmful online content”.

As determined by the eSafety Commissioner, this “harmful” material is dubbed “Class 1” or “Class 2”.

These are borrowed from the National Classification Scheme, which is best known for the ratings you see on movies and computer games. More on that in a moment.

In general, you can think of Class 1 as material that would be refused classification, while Class 2 could be classified X18+ or R18+.

Ultimately, the industry came up with draft codes outlining how they will put in place protections against accessing or distributing this material.

eSafety Commissioner Julie Inman Grant is overseeing the new Online Safety Act. (ABC News: Adam Kennedy)

They vary according to the sector and the size of the company. For example, a code may require a company to report offensive social media content to law enforcement, have systems in place to take action against users who violate policies, and use technology to automatically detect known child sexual exploitation material.

What type of content will be affected?

For now, the draft codes only address what has been dubbed Class 1A and 1B material.

According to eSafety, Class 1A may include child sexual exploitation material, as well as content advocating terrorism or depicting crimes or extreme violence.

Class 1B, meanwhile, could include material that shows “cases of crime, cruelty or violence without justification”, as well as drug-related content, including detailed instructions on the use of prohibited drugs. (Classes 1C and 2 largely deal with online pornography.)

Obviously, there is content in these categories that the community would find objectionable.

The problem, critics say, is that Australia’s approach to classification is confusing and often out of step with public attitudes. The national classification scheme dates back to 1995.

“The classification system has long been criticized for capturing a whole bunch of material that is perfectly legal to create, access and distribute,” said Nicolas Suzor, who studies internet governance at the Queensland University of Technology.

And rating a movie for theatrical release is one thing. Classifying content at internet scale is another.

Consider potential Class 1B material – crime instructions or information on the use of prohibited drugs.

There are scenarios where we might hypothetically want this information to be available, Dr. Suzor suggested, such as the ability to provide safe medical abortion information to people in certain states in the United States.

“These are really difficult categories to apply to any sort of ‘internet scale’ because you’re very clearly hitting all the gray areas,” he said.

There was a recent review of Australia’s classification regulations and a report was delivered in May 2020, but it remains unclear how this might affect proposed industry codes designed to regulate “harmful online content”.

Will companies need to monitor my messages now?

The codes are meant to affect nearly every industry that touches the internet, and there are concerns about how privacy could be affected when applied to personal messages, files, and other content.

Some major social media platforms already use digital “fingerprinting” technology that tries to proactively detect known child sexual exploitation or pro-terrorist material before it is uploaded.
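In broad terms, these systems compare a “fingerprint” of each upload against a database of fingerprints of material that has already been identified. The short Python sketch below is purely illustrative and does not reflect any platform’s actual pipeline: the database and function names are invented, and real tools such as PhotoDNA use perceptual hashes that tolerate resizing and re-encoding, unlike the exact hash used here to keep the example self-contained.

import hashlib

# Hypothetical set of fingerprints of already-identified material.
# In practice this would come from a vetted industry database, and the
# fingerprints would be perceptual hashes rather than exact SHA-256 digests.
KNOWN_HARMFUL_FINGERPRINTS: set[str] = set()

def fingerprint(file_bytes: bytes) -> str:
    # Compute a fingerprint for an uploaded file (exact hash in this sketch).
    return hashlib.sha256(file_bytes).hexdigest()

def should_block_upload(file_bytes: bytes) -> bool:
    # True if the upload matches known material and should be intercepted
    # before it is published.
    return fingerprint(file_bytes) in KNOWN_HARMFUL_FINGERPRINTS

Because matching happens against known material only, such systems cannot by themselves identify new harmful content – one reason the debate extends to broader forms of proactive scanning.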

The eSafety Commissioner’s office has expressed interest in the codes requiring a level of proactive monitoring – intercepting “harmful” content before it is published.

In the draft codes, however, industry groups have said that when it comes to private file storage or communications, extending proactive detection could have a serious impact on privacy.

There are also concerns that the codes will reinforce an approach to content moderation that is really only available to big players. Scanning tools are not necessarily cheap or readily available.

“A lot of these proposed solutions require big tech to stay big to meet these compliance requirements,” said Samantha Floreani, program manager at Digital Rights Watch.

A spokesperson for eSafety said it would not expect industry codes to impose the same level of commitments on small businesses as on large companies.

Then there is the question of whether the proactive detection systems are accurate and whether there are avenues of redress.

Gala Vanting, head of national programs at the Scarlet Alliance, said the use of this technology is of particular concern for people working in the sex industry.

“It is very likely to over-capture content. It is very poor at reading the context [around] sexual content,” she said.

Another complicating factor is that there is also a Privacy Act review underway, which could affect how these codes operate – by, for example, introducing requirements that might limit such scanning.

A spokesman for Attorney-General Mark Dreyfus said the department would produce a final report later this year recommending reforms to Australia’s privacy law.

What happens next?

The draft industry codes are now open for public comment. Next, the eSafety Commissioner’s office will assess whether it considers the codes adequate.

But by some accounts the consultation has been rushed, and many civil society groups believe the consultation window is too short to be realistic.

There is also some frustration that the codes are being developed ahead of the Privacy Act review, among other potential online regulatory changes that are on the table, which could lead to a rather confusing regulatory system for online content.

Then there is the debate over whether Australia is taking the right approach to these issues.

The Online Safety Act itself was controversial, not least because of the discretionary power it gave to the Minister for Communications and the eSafety Commissioner.

“While there are some obvious elements that wouldn’t make it through … it’s tremendous power in the hands of one person that actually determines what the expectations of the community are,” said Greg Barns of the Australian Lawyers Alliance.

“The broader questions of what constitutes harm then begin to coalesce into questions of free speech, but also transparency and accountability.”

Dr Suzor said that in general he was “totally on board” with the idea that governments want to have more of a say in the standards set for acceptable online content.

But in practice, he suggested there wasn’t much clarity about what the codes were designed to do.

“The codes are agreements to essentially do what the industry – at least the biggest part of it – is already doing,” he said.

“Actually, I don’t know what they’re supposed to accomplish, to be honest.”
