☆ Is Windows to blame for viruses?

A historical post, for a change. A comment on a mailing list tonight – that something was “rather like blaming Windows for getting viruses” – sent me exploring my recollections of CPU security on Intel chips from my days at IBM. I went scurrying to find a half-remembered explanation from the past of why, in addition to the larger user base making the target much more tempting, Windows has suffered from virus attacks much more than anyone else to date. I couldn’t find it straight away, so this post is the result.

Before you add a comment, note I am NOT saying that the only explanation for Windows viruses is this technical one; obviously the huge attack surface of the giant user base attracts attackers. I AM saying, however, that leaving the door open for a decade hasn’t helped and is a major reason why the dominant form of malware on Windows is the virus and not the trojan.


All operating systems have bugs, and I suspect (although I haven’t found any data tonight to confirm it) that they occur at approximately the same frequency in all mature released operating systems. All operating systems that respect Shaw’s Law are also vulnerable to malware. Malware depends on identifying exploits – defects of some sort in system security that can be “exploited” to permit infestation.

Not all bugs turn into security exploits, though. In particular, in Unix-like operating systems such as OS X, Linux and Solaris, it’s unusual for bugs to lead directly to security exploits; instead, most malware depends on user error or social engineering. For an exploit to exist, there has to be a way to use knowledge of the bug to gain access to a resource that would otherwise be forbidden. That certainly happens on *ix systems, but the operating system has checks in place to prevent the most common way of turning bugs into exploits.

Unauthorised Pokes

The most common way for this to happen (although there are many others) is for the operating system to fail to differentiate between data and program code. When code and data are treated as the same thing, a path opens for malware to use a bug to push some data into a memory location (a “buffer over-run” or a “stack overflow” are examples of this) and then tell the computer to execute it. Hey presto – exploit. All an attacker has to do is push the code for a virus (or a virus bootstrap) into memory and ask for it to be executed, and your computer is compromised.
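To make that concrete, here’s a minimal sketch in C (my illustration, with hypothetical names – not code from any real virus) of the kind of unchecked copy that creates a buffer over-run:

```c
#include <stdio.h>
#include <string.h>

/* A classic over-run: if input is longer than buf, the extra bytes
 * spill past the end of the array and overwrite the saved return
 * address on the stack. */
void greet(const char *input) {
    char buf[16];
    strcpy(buf, input);   /* no bounds check */
    printf("Hello, %s\n", buf);
}

int main(int argc, char **argv) {
    if (argc > 1)
        greet(argv[1]);   /* the attacker controls this string */
    return 0;
}
```

On a system that treats the stack as executable, the attacker can make that overwritten return address point back into their own input, and the CPU will obligingly run it as code.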

Windows could have prevented this sort of thing from happening by exploiting the ring protection offered by the Intel x86 architecture from the 80286 chip onwards. A feature of Intel’s x86 architecture makes it possible to prohibit execution of data unless the program in question is privileged (“at ring 0”), usually by being part of the operating system. Application code at ring 3 can be forbidden from executing data.
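As an aside, the ring a program runs in is visible to the program itself: on x86 the low two bits of the CS segment register hold the current privilege level. This little sketch (my addition – GCC/Clang inline assembly, x86 only) prints it, and any ordinary application will report ring 3:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t cs;
    /* The low two bits of CS encode the Current Privilege Level:
     * 0 for the kernel ("ring 0"), 3 for applications ("ring 3"). */
    __asm__("mov %%cs, %0" : "=r"(cs));
    printf("Running in ring %d\n", cs & 3);
    return 0;
}
```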

Indeed, Windows did use ring 0/ring 3 differentiation for some jobs (skipping rings 1 and 2 for cross-platform technical reasons). But access to ring 0 – “able to execute anything you want” – was never prohibited. Doing so would have prevented legacy DOS code from running, so, as I remember being told, Microsoft chose not to implement ring 0/ring 3 protection in Windows NT until it was completely sure that deprecating DOS legacy support would no longer be a marketing issue. That was in Windows 8…

Credit Where Due

So actually it’s somewhat appropriate to blame Windows versions prior to Windows 8 for being vulnerable to many viruses which exploited bugs in this way. The existence of the vulnerability was a conscious choice and a marketing decision; in OS/2, which had no legacy to accommodate, the ring 0 separation was enforced.

Yes, Windows also offers a larger attack “surface” because of its wide adoption, and yes, there are other exploit mechanisms. But this tolerated technical vulnerability is the root cause of a large number of exploits. So while it’s true that malware authors are directly to blame for malware, there’s also a culpability for Microsoft that can’t be ignored. Thank goodness Windows 8 has addressed this particular issue.

☝ The End Of The Road For Web Services

The news is out that WS-I will now be absorbed by OASIS, according to their PDF release. They told us back in July that it was going to happen. As far as I can tell, that’s the end of the WS-* family of specifications – there will be no more, and they are destined purely for “maintenance” at OASIS. They will be with enterprise developers for years to come, a kind of architectural COBOL.

In case your computer industry history is lacking, WS-I (the “Web Services Interoperability Organization”) is the industry consortium that got together a decade ago to create specifications for Web Services protocols (note the capitals, so as not to confuse them with actual internet services that use HTTP and REST for loosely-coupled data access). Formed mainly as a competitive action by IBM and Microsoft, it went on to create massively complex layered specifications for conducting transactions across the Internet. Sadly, that was the last thing the Internet really needed.

Read on over at ComputerWorldUK

✍ In A World Without Walls…

I think the time is coming to fill in a few gaps in technology history – including a little trivia. If you have photos, let me know!

A piece of history that many of us older Java geeks remember is the stunt someone pulled at JavaOne in 1997. Java was the hottest new technology in town, the embodiment of the emerging web culture. The JavaOne conference was in its infancy – it started in 1996 – and was sharing the Moscone Center in San Francisco with another conference, Software Development West. The relationship was friendly – indeed, the two conferences clubbed together to close Howard Street and hold a street party for delegates of both events.
