Australian Filtering Announcement Raises Questions and Ire
The Australian Telecommunications Minister, Stephen Conroy, announced on December 31, 2007 that mandatory filtering of the Internet would be instituted in Australia. The announcement follows through on the Rudd government’s plan to provide a “clean feed internet service for all homes,” unveiled prior to the November 2007 elections. Many of the reactions to the proposal have been scathing.
If implemented, the policy would mark an important change in Australian filtering practices by shifting the operation of filtering technology away from individual users and computers to Internet Service Providers. The previous plan, to provide all interested families with software to filter the Internet, was designed for individual or household use and rolled out last fall. Soon after the launch of the AU$84 million program, a young man described how he easily dismantled the controls.
This is neither new nor surprising for those who have been following the debate over the use, and hacking, of filtering technology. In placing the controls in the hands of ISPs, who so far seem understandably reluctant to take on this role, the calculus of filtering and circumvention changes somewhat, but the vulnerability to circumvention remains.
The other major change inherent in this new plan is that users who wish to see adult content would be required to opt out of the filtering regime, rather than opt in, as in the prior configuration. From a privacy and civil liberties standpoint, the notion of compiling a registry of people who would like to see adult material is troubling.
A new China?
When making the announcement, the Australian government chose to compare its plans with the ISP-level filtering used in the UK and in Scandinavian countries to block child pornography. The UK system blocks a relatively small number of sites, and it is widely recognized that it falls far short of putting an end to the problem. Fair enough; it need not be perfect, though it should at least do more good than harm.
Compared to filtering in the UK, however, the scale and scope of material that would be filtered under the new Australian proposal represent a dramatic expansion. It would require ISPs to block all ‘inappropriate’ material, presumably a substantially larger set of sites. Blocking child pornography is one thing; blocking all ‘inappropriate’ material is another.
China is not the best comparison. No one expects Australia to start filtering political opposition and religious groups, with browsers shutting down if a search string includes a sensitive keyword. A more apt comparison is with Saudi Arabia and other Gulf states that filter pornography and other socially sensitive material at the national level by requiring ISPs to block offending websites. These states do so by routing all Internet traffic through proxies.
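To make the architecture concrete, here is a minimal sketch, in Python, of the kind of blocklist check such a filtering proxy performs on every request. The blocklist entries and function names are hypothetical assumptions for illustration; real deployments sit inline at the ISP or international gateway and consult centrally maintained lists that are far larger.

```python
# Minimal sketch of the blocklist check a filtering proxy performs.
# The blocklist contents are hypothetical and purely illustrative.

from urllib.parse import urlparse

BLOCKLIST = {  # hypothetical entries
    "blocked-example.com",
    "another-blocked-example.net",
}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host, or a parent domain, is on the blocklist."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the host and every parent domain, so a listed domain
    # also blocks its subdomains.
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

def handle_request(url: str) -> str:
    # An ISP-level proxy would run this check on every outbound request,
    # which is where the performance concerns discussed below come from.
    if is_blocked(url):
        return "403 Forbidden: blocked by national filter"
    return f"200 OK: fetch and relay {url}"

if __name__ == "__main__":
    print(handle_request("http://news.example.org/story"))
    print(handle_request("http://videos.blocked-example.com/clip"))
```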
The problems with instituting a nation-wide filtering system are numerous. Performance and speed are one question. The impact on broadband speeds is complex and may be hard to predict, but it is disingenuous to say that such a plan can be put into place without having an impact on the performance of the Internet. Prior studies from Australia have suggested that the performance costs would be steep. Over-blocking and under-blocking are also well-known problems with filtering: filters inevitably block legitimate content while missing much of the material they are meant to catch.
Transparency and accountability are also important considerations. Who will decide what is appropriate and what is not? The proposal stipulates that an Australian government agency, the Australian Communications and Media Authority (ACMA), will make these determinations. The tremendous level of activity on the Internet greatly complicates this approach and, more generally, stymies legal remedies to the problem of inappropriate speech online. The ACMA, which is responsible for regulating Internet speech, is most likely unprepared to sift through millions of web pages to determine what is pornography and what is not. This is only practically feasible if done by software designed for the purpose, which naturally implies that software providers will make the vast majority of the decisions as to what is or is not appropriate for children. Perhaps this is what Australia wants.
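As a rough illustration of why software ends up making these calls, and how crudely it does so, consider a minimal keyword-based classifier of the sort filtering products lean on at scale. The term list and the sample pages below are hypothetical; the point is how easily such rules block legitimate material while missing material that simply avoids the listed terms.

```python
# Minimal sketch of keyword-based content classification.
# The flagged terms and sample pages are hypothetical, for illustration only.

FLAGGED_TERMS = {"xxx", "explicit", "porn"}

def classify(page_text: str) -> str:
    """Label a page 'blocked' if any flagged term appears, else 'allowed'."""
    words = {w.strip(".,!?").lower() for w in page_text.split()}
    return "blocked" if words & FLAGGED_TERMS else "allowed"

if __name__ == "__main__":
    # Over-blocking: a health information page tripped up by a single word.
    print(classify("Clinic guidance on explicit consent forms for patients"))
    # Under-blocking: objectionable content that avoids the listed terms.
    print(classify("Adult imagery gallery, members only"))
```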
Perhaps a better comparison for the Australian filtering proposal is with prior US attempts to legislate a better Internet. The US response to this might be: "been there, done (doing) that". For more than a decade, the US Congress has been formulating legislative strategies to better protect children in the United States from the uglier sides of the Internet. The US courts have been correspondingly busy overturning these laws. For example, the Communications Decency Act of 1996 and the Child Online Protection Act of 1998 have both been struck down.
Central to the decisions of the US courts are the potentially deleterious impacts of centralized filtering on free speech. Another key factor is that the blocking lists used to filter Internet content are as easily applied in home computer filtering products as they are at the ISP level. These factors have led the US courts to conclude that filtering is best applied at the household level, not at the national level.
When he struck down COPA in March 2007, US District Court Judge Reed wrote:
“I agree with Congress that its goal of protecting children from sexually explicit materials on the Web deemed harmful to them is especially crucial. This court, along with a broad spectrum of the population across the country yearn for a solution which would protect children from such material with 100 percent effectiveness. However, I am acutely aware of my charge under the law to uphold the principles found in our nation’s Constitution and their enforcement throughout the years by the Supreme Court. I may not turn a blind eye to the law in order to attempt to satisfy my urge to protect this nation’s youth by upholding a flawed statute, especially when a more effective and less restrictive alternative is readily available (although I do recognize that filters are neither a panacea nor necessarily found to be the ultimate solution to the problem at hand).”
It is difficult to see how the circumstances in Australia and the United States could be different enough to lead to radically different policy responses in the end.
A second piece of the puzzle
Also at issue in Australia are regulations released on December 20th that specify the implementation of a restricted access system, which would require Australian providers of adult and mature content to put age verification systems in place. The shortcomings of this approach are many. The first problem is that on the Internet one does not have to show up in person, which facilitates the trading and spoofing of identities. The quip that comes to mind is from a New Yorker cartoon, one dog to another: “On the Internet, nobody knows you’re a dog.”
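The weakness is easy to see in a minimal sketch of a declaration-style age gate, the kind of check a restricted access system can realistically impose at a distance. The field names and cutoff age below are assumptions for illustration; the structural problem is that the server has no way to confirm that the submitted date of birth belongs to the person typing it.

```python
# Minimal sketch of a declaration-style age gate (hypothetical field names and
# cutoff). The server's only evidence of age is whatever the visitor types in,
# so the check is defeated simply by entering a different date of birth.

from datetime import date

def old_enough(dob: date, cutoff_years: int = 18) -> bool:
    """Return True if the supplied date of birth implies the visitor is of age."""
    today = date.today()
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= cutoff_years

def handle_age_gate(submitted_dob: str) -> str:
    # Nothing ties this string to a real identity document or a real person.
    dob = date.fromisoformat(submitted_dob)
    return "access granted" if old_enough(dob) else "access denied"

if __name__ == "__main__":
    print(handle_age_gate("2015-06-15"))  # a minor answering honestly: denied
    print(handle_age_gate("1980-06-15"))  # the same visitor, a different claim: granted
```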
South Korea is another country that is implementing such a system, where users searching for adult content must enter their name and national resident registration number. Google’s take on the system: “Google wants to increase its local presence in Korea to compete with search engines like Naver that dominate Korea's market. But they can't do that without respecting the law, even if it's absurd.”
Quoting Judge Reed of the US District Court again: “From the weight of the evidence, I find that there is no evidence of age verification services or products available on the market to owners of Web sites that actually reliably establish or verify the age of Internet users. Nor is there evidence of such services or products that can effectively prevent access to Web pages by a minor.”
The ultimate problem with the restricted access system is that these provisions are only enforceable on content hosted in Australia. If implemented as planned, adult content hosted in Australia will sit behind an age verification screen in the adult section of the Australian-hosted Internet. But what about websites hosted in other countries? The regulations do not touch any of them. Neither do they account for the Australian content providers who, concerned about the impact of age verification mechanisms on their web traffic, will quickly scurry to hosting providers overseas. This could explain the December 31st announcement describing plans for mandatory filtering of the Internet: why regulate domestic Internet content with such stringency if it only penalizes local businesses while having little impact on what is available on the Internet?
Will it stick in the end?
Decisions such as these must take into account the technical, social, economic, and legal aspects in addition to the political. So far, we have heard an articulation of the political side of the equation in Australia. The other sides are equally complex, if not more so.
For many, the sum of these two policies unnecessarily interjects the government into decisions that can and should be made at the household level. The Australian government has yet to make a convincing case that this policy does more good than harm. This is a tough case to make.