This site is an archived version of Indymedia prior to 5th November 2012. The current site is at www.indymedia.org.nz.

MSD Debacle: Free Code for Government IT


Danyl Strype of Disintermedia.net.nz comments on the recent exposure by Keith Ng of Public Address (after a tip-off from Ira Bailey) of the complete lack of network security in the Ministry of Social Development (MSD), and the implications for the way government departments protect people's private information.

Much has been written already about the implications of the total lack of privacy control uncovered recently in the MSD network. A recent blog post on the NZ Pirate Party website points out that security needs to be designed in from the ground up, not tacked on as an afterthought, which I thoroughly agree with. The post also seems to blame the decentralised nature of government IT teams for the lack of security, calling for a whole of government approach to ICT. To be honest, I'm sceptical on this point.

I'm all for encouraging collaboration and peer review between government IT teams (a regular contest to see which team can compromise another department's server the quickest...), and there are certainly benefits to evolving best practice standards, and common protocols, which prevent private data leaking out of government departments. A good start might be a privacy-protection equivalent of the NZGOAL framework, which gives departments advice on making publicly-funded data freely available, including specifying when they shouldn't (eg when it's private, personally identifiable data).

However, I don't think we'd be well served by putting all the decision-making about government IT in the hands of a central committee. There are a number of benefits to having small, independent IT teams, each looking after well-defined areas. One such benefit can be found in the classic book on software development entitled 'The Mythical Man-Month'. Author Fred Brooks describes software project managers attempting to speed progress by adding more programmers to their teams, with exactly the opposite result. The communications overhead added by the need to bring each new programmer up to speed, and to maintain compatibility between the components different team members are working on, grows roughly with the square of the team size. Experienced IT managers have found that smaller teams, with smaller tasks, produce better results, faster.
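Brooks's point is easy to see in numbers. His observation is that a team of n people has n(n-1)/2 possible pairwise communication channels, so doubling a team far more than doubles the coordination load. A quick illustrative calculation (my own sketch, not from the book):

```python
def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people: n*(n-1)/2."""
    return n * (n - 1) // 2

# Doubling the team from 10 to 20 takes the channel count from 45 to 190,
# more than quadrupling the coordination overhead.
for n in (3, 5, 10, 20):
    print(n, channels(n))
```

Which is why two five-person teams with a clean interface between them (10 + 10 = 20 channels) coordinate far more cheaply than one ten-person team (45 channels).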

Another reason, perhaps more important, is that the geeks who admin a particular network potentially have access to all its data, including private files and transactions. It is possible to set up encryption schemes so that admins can't access user data (perhaps the government could ask Kim Dotcom for advice), but when IT staff work in small teams, each with privileged access to only one department's network, the size of the leak when such schemes fail (or are never designed in the first place) is reduced.

The Internet doesn't work by putting all IT security decisions in the hands of a single global committee. It works through a set of protocols that delineate public, private and secure areas, leaving each organisation's ICT team (be they in-house or contracted in) to decide which parts of their organisation's data belong in which area, and implement security measures appropriately. The current approach of each Ministry having an independent IT team follows this model. While having one central IT team servicing the "whole of government" *might* have avoided the MSD debacle, I think it more likely to have spread that same debacle across the "whole of government".

Another reason not to create a central committee for government IT is that the quickest and easiest way to pass the buck, and the responsibility for pants-down moments like the open front door of the MSD network, is to outsource the entire thing to Microsoft, Oracle or IBM (or some hellish combination of the above). In my opinion, the take-home lesson from the MSD experience is that government IT security would improve if more of it was based on proven free code server operating systems like those based on Linux/GNU and the BSDs.

The MSD system is based on Microsoft Windows, a proprietary OS whose security mostly relies on obscurity (eg hoping nobody will open Word, click 'Open file' and poke around). Windows was designed for individual computers, with networking strapped on as an afterthought. An experienced technician can plug most of the holes in proprietary systems like Windows, but the MSD example shows what can go wrong when the technicians lack that experience, or simply aren't given that requirement in their brief.

Linux/GNU and BSD, on the other hand, are children of the internet, designed from the ground up for security in a networked world. They've been used as the OS for most webservers since the invention of the Web, which means they've been tried and tested against the wilderness of the open Internet, and their open source code constantly improved in response. Although it's certainly possible, it takes a great effort of naivety or malicious intent to compromise all the built-in security in these free code operating systems.

Money spent on open source IT is much more likely to go to kiwi programmers and technicians. Much of what's left goes to the not-for-profit entities which act as stewards for projects like GNU, Linux, Apache etc, whose free code products are effectively a public commons anyone can benefit from. By contrast, the lion's share of the public money currently being spent on the creaking edifice of government IT goes to multinational corporations like Microsoft. Security bungles like the one at MSD may harm underprivileged kiwis, but they sure are good for the bottom line of these corporations, whose software is "intellectual property". Ironically, we are effectively subsidising these companies to develop their private software, at the expense of our privacy.

That said, I wouldn't support the government, or a public service committee, dictating to the various departments' IT teams that they must use free code software, produced by open source developer communities. "If it ain't broke, don't fix it", as the old saying goes, and where existing teams are doing their job well using proprietary tools, by all means let them carry on. However, teams must have at least the same freedom to choose free code software to do their work as they have to choose proprietary software, and considering the greater public good created by open source communities, it makes sense for any incentives that do exist to lean towards embracing free code solutions, not the digital slumlords of "intellectual property".

Comments

The security problems at WINZ

The security problems at WINZ were so bad Linux couldn't save them. It was a failure at the physical network/firewall level, it doesn't matter what runs on the client machines.

Permissions

>> It was a failure at the physical network/firewall level, it doesn't matter what runs on the client machines. <<

Um.. no. The firewall protects the network from intruders coming into the network from outside it, through the internet connection. The MSD debacle was a failure of file permissions, ie, every computer on the network had permission to view every file on the network. This is bad.

A Linux OS on the client machines, and a Linux fileserver at the back end, would grant no permissions by default, which means the admins have to explicitly grant permissions to each user (including the kiosk 'users') to access the things they should be allowed to access.
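To make the default-deny point concrete, here is a minimal sketch of how it works on a Unix-style system (the filename is hypothetical, and this is an illustration of the general mechanism, not MSD's actual setup). With a restrictive umask, a newly created file is readable only by its owner, and an admin has to explicitly widen access:

```python
import os
import stat
import tempfile

# Restrictive umask: newly created files get no group/other permissions.
os.umask(0o077)

# Hypothetical file standing in for private case data.
path = os.path.join(tempfile.mkdtemp(), "case_notes.txt")
with open(path, "w") as f:
    f.write("private client data\n")

# The file starts out owner-only: mode 0o600 (rw-------).
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600

# Access must be granted deliberately, e.g. allowing group members to read:
os.chmod(path, 0o640)
```

Nothing is reachable until someone decides it should be, which is the opposite of the situation the kiosks exposed.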

A Windows client/server, as WINZ found out to their embarrassment, grants all sorts of file permissions by default. Arguably this makes it more "user-friendly", but it obviously makes it much less secure by default, as the admins have to think through every possible case where someone might be able to access files inappropriately, and stick their fingers in the dyke to prevent that.

You may be right that the kiosks were physically connected to the internal network servers, instead of only to the internet gateway server, but in a Linux network this wouldn't matter.