Saturday, September 26, 2009

Go Conficker...

A little less than a year ago, Microsoft announced a critical vulnerability in its Server service (security bulletin MS08-067). The vulnerability involved specially crafted Remote Procedure Call (RPC) requests that the Server service failed to handle correctly (i.e., drop). An attacker could use these requests to execute code of his choice on the server, doing things like creating or deleting users or changing security policies.

I'd call that a problem. And it wasn't merely a potential problem: the exploit for this vulnerability is known worldwide as the Conficker worm, first detected not long after Microsoft's initial security bulletin. Since its inception, Conficker has wreaked havoc on government, business, and home computer systems all over the world, and the investigation to discover its perpetrators is still ongoing.

Our best guess is that the perpetrators are based in Ukraine, since Variant E of the worm downloads software from a server hosted there. However, they have not yet been fingered, and in the meantime they continue to control countless infected computers.

Conficker is easy to miss because nothing splashy happens. The most obvious signs of infection are that user accounts get locked out, security policies change, or automatic updates stop running. In the meantime, Conficker has been active on your network or Internet connection, downloading updates for itself and new malware, modifying your registry, resetting your restore points, and so on.
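
A cheap heuristic follows from that last symptom. Conficker is known to disable Windows services such as Automatic Updates (wuauserv), BITS, Windows Defender, and Security Center, so simply asking Windows whether those services are running can be revealing. The sketch below does exactly that; it's a rough indicator, not a real scan, and a stopped service can have innocent causes.

```python
# Heuristic check for services Conficker is known to disable (Windows only).
# A rough sketch, not a substitute for a real AV scan: "not running" can
# have innocent causes too.
import subprocess

SUSPECT_SERVICES = ["wuauserv", "BITS", "WinDefend", "wscsvc"]

def service_running(name: str) -> bool:
    """Return True if `sc query` reports the service as RUNNING."""
    out = subprocess.run(["sc", "query", name],
                         capture_output=True, text=True).stdout
    return "RUNNING" in out

if __name__ == "__main__":
    stopped = [s for s in SUSPECT_SERVICES if not service_running(s)]
    if stopped:
        print("Warning: these services are not running:", ", ".join(stopped))
        print("That is one symptom of a Conficker infection -- run a full scan.")
    else:
        print("All checked services are running.")
```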

There's a Microsoft patch that you can install, and all the major AV companies are able to detect and remove the worm. Despite this, Conficker continues to propagate, almost a year after its release, and the perpetrators have still not been found. One reason is that Conficker is constantly being updated and can disable some AV solutions before they have a chance to detect and remove it.

There is one very easy way to check whether you are infected with Conficker: go to this page. If you cannot see all six logos, you may very well be infected.
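
That test works because Conficker blocks DNS lookups of security vendors' domains, which is why the logos (hosted on those domains) fail to load. Here's the same idea as a quick Python sketch; the domain list is illustrative, and a failed lookup can obviously have innocent causes.

```python
# The "eye chart" test in script form: Conficker variants block DNS lookups
# of security vendors' sites, so failing to resolve several of these domains
# (while other lookups work) is a red flag. Domain list is illustrative.
import socket

SECURITY_DOMAINS = ["www.microsoft.com", "www.symantec.com",
                    "www.mcafee.com", "www.f-secure.com"]
CONTROL_DOMAIN = "www.example.com"  # should resolve on any healthy connection

def resolves(host: str) -> bool:
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    if not resolves(CONTROL_DOMAIN):
        print("No DNS at all -- test inconclusive.")
    else:
        blocked = [d for d in SECURITY_DOMAINS if not resolves(d)]
        if blocked:
            print("Could not resolve:", ", ".join(blocked))
            print("Selective blocking like this is a Conficker symptom.")
        else:
            print("All security domains resolve -- no sign of DNS blocking.")
```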

Wednesday, September 23, 2009

XSS Is Alive and Well

Not all that long ago, I had an interview where I was asked what cross site scripting was. Now, the thing is, I know very well what it is, and in fact, while I was working for my former employer, I wrote a white paper on the subject that was very widely used by their support and systems engineering personnel.

But have you ever been in the position of knowing something, yet when someone asks you about it, you just go cold? That's what happened to me in the interview. I stammered out a reply that was utterly wrong, and I knew it was utterly wrong, and I've been kicking myself ever since.

In the meantime, on a technical email list, one of my colleagues suggested that cross site scripting -- or XSS -- is no longer that much of an issue (because everyone knows about it and has taken precautions). That position is naive at best. It's true that XSS is no longer the big deal it was a couple of years ago, but it's still very much alive and well as a security vulnerability.

One of the problems in understanding XSS is that the term itself is confusing. Originally it meant that a malicious web site could load another web site into a frame or a window and then use scripting -- usually JavaScript -- to read and/or write data on that site (which is actually close to what I told my interviewer). Later, though, the term broadened to mean injection of script code into a web page.

There are different kinds of XSS vulnerabilities, but the most common is the reflected kind, where a user is enticed to click on or otherwise activate a URL that embeds script. This isn't that hard to pull off, because users, especially less technically sophisticated ones, often don't look at where a URL actually points before they click it.
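
To make that concrete, here is a deliberately vulnerable toy server (my own illustration, not anyone's real code). It echoes a query parameter straight into its HTML, so a crafted URL gets its script executed in the victim's browser; the commented-out line shows the standard one-line fix.

```python
# A deliberately vulnerable toy server that echoes a query parameter into its
# HTML unescaped -- the classic reflected-XSS mistake. Do not deploy this.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
import html  # used by the fixed line below

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        name = params.get("name", ["world"])[0]
        # VULNERABLE: `name` goes into the page verbatim, so a URL like
        #   http://localhost:8000/?name=<script>alert('xss')</script>
        # gets its script executed in the victim's browser.
        body = f"<html><body>Hello, {name}!</body></html>"
        # The fix is one line: escape untrusted input before rendering.
        # body = f"<html><body>Hello, {html.escape(name)}!</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), EchoHandler).serve_forever()
```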

Further, and unbeknownst to many users, it's possible to embed an object on a web page with a malicious URL such that merely viewing the page activates that URL. There's no way for the user to tell what has happened, and the only way to prevent it is to lock down the web browser so that it will not execute any scripts it finds on a page. The problem is that locking down the browser to this degree will cause media-rich web sites to malfunction (from the user's point of view). In other words, as has always been the case, security is often sacrificed for ease of use.

XSS vulnerabilities have been exploited since the advent of the World Wide Web, but XSS became a really hot topic in 2005, which is when my former employer asked me to write the white paper about how one of their products addressed the issue. Vulnerability scanning for XSS was all the rage, and web site developers were scrambling to fix their HTML and scripting so that code injection would no longer work. Gradually, things calmed down to the point where my colleague could declare, on a mailing list full of security geeks, that it was no longer an issue. Too bad he was wrong.
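
At their core, those scanners did something very simple: inject a unique marker wrapped in a tag and check whether the page reflected it back unescaped. Here's a minimal sketch of that probe; the target URL and parameter name are placeholders (pointed at the toy server above, it fires).

```python
# Minimal reflected-XSS probe, the core of the scanners mentioned above:
# inject a unique marker wrapped in a tag and see if the response echoes it
# back unescaped. Target URL and parameter name are placeholders.
from urllib.parse import urlencode
from urllib.request import urlopen

def probe(base_url: str, param: str) -> bool:
    """Return True if `param` is reflected without HTML escaping."""
    marker = "xssprobe12345"
    payload = f"<b>{marker}</b>"
    url = f"{base_url}?{urlencode({param: payload})}"
    page = urlopen(url).read().decode(errors="replace")
    # If the payload comes back with its tags intact, the page did not
    # escape it; a real script payload would execute the same way.
    return payload in page

if __name__ == "__main__":
    if probe("http://localhost:8000/", "name"):
        print("Parameter is reflected unescaped: likely XSS vulnerable.")
    else:
        print("No unescaped reflection found for this parameter.")
```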

The other day, one of my friends commented on a LiveJournal™ post I'd made, to the effect that it appeared my post had been hacked. He directed me to a news article on LJ that I'd missed: http://news.livejournal.com/117957.html . You can read the article for yourself, but basically what happened is that someone had managed to infect a Flash™ file with a malicious URL. Anybody who viewed the file would have their latest (at the time) LJ post altered: the infected file would be inserted, any tags or other extra info would be deleted, and (usually) the post's security level (i.e., public/friends only/private) would be changed.

Sure enough, my post was "infected". My tags and location were removed, a formerly "friends only" post (the default for my journal) was now public, and the infected media had been inserted. However, the site already knew about the problem: it had turned off media embedding so that no further users would be affected, and it had issued a bulletin explaining what had happened. As a result, all I saw were the "boxes" mentioned in the news article, not the infected media. At this point, I have no idea what the media looked like or who I "caught" the infection from, but as it's been contained and fixed (and my post is now "friends only" again), I'm only mildly curious.

The LiveJournal™ problem was caught and mitigated quickly, and while certainly there was a breach of privacy (secure entries becoming public, email addresses being mined), the effects were relatively minor. But it should be pretty obvious that XSS is far from "no longer an issue", given what happened.

Thursday, September 17, 2009

This Just In

So, in case you're not aware of it, there's an online petition to appoint Peiter Zatko to the post of Cybersecurity Chief (also known as the Cybersecurity Czar). As soon as I heard of the petition, I clicked through to sign it, even though I'm not sure what effect, if any, it will have on Mr. Obama's decision. The fact of the matter is that I admire Mr. Zatko so much that I couldn't fail to sign it.

Mr. Zatko, who is currently 38 years old, has been a researcher in the field of network security for about as long as the field has existed. In 1995, he published the seminal white paper on the buffer overflow attack, "How to Write Buffer Overflows", which remains an important tutorial on the topic for hackers of all varieties. Choosing to use his abilities to help rather than harm the government, he was one of several hackers to testify on security weaknesses before a Senate committee in 1998. Two years later, he met with then-President Clinton at a summit on network security. Currently, he is a division scientist at BBN, which knows a good thing when it sees one (they wooed him back after losing him in the '90s to @stake, which was essentially the L0pht gone corporate).

I believe that Mr. Zatko could ably fill the position of Cybersecurity Czar; in fact, his background in grey hat hacking makes him uniquely suited to it. My feeling is that if Mr. Obama is serious about network security, he will appoint Mr. Zatko, or someone very much like him, to this very important post.

Go for it, Mudge!

Wednesday, September 16, 2009

IETF Publishes a Draft on Remediating Bots in ISP Networks

The Internet Engineering Task Force just published a draft on how ISPs can help remediate bots on users' systems and home networks. Having read the draft, I have a couple of thoughts on it, which I will present after a quick summary.

Summary of the Draft


In short, the draft covers the following subjects:
  • Maintaining privacy of the user

  • Non-interference with legitimate traffic

  • Recommendation for types of tools

  • Challenge of "definitive vs. likely" in informing user

  • Dealing with user complaints

  • Sharing of bot information with other ISPs

  • Use of Honeynets

  • Informing users:

    • Email

    • Telephone

    • Postal Mail

    • "Walled Garden"

    • IM

    • SMS

    • Web browser message

  • Remediation

  • Guided Remediation


For me, this brings up a couple of questions. First, who's responsible for a bot on the network? And second, what is actually going to work in a situation like this?

Responsibility


If there is a bot -- or really, any piece of malicious code -- on a user's personal system, who is responsible for discovering and/or remediating it? Unfortunately, the answer isn't obvious. The user owns all of his computers and networking equipment, up to and in some cases including the DSL or cable modem. That said, the ISP owns the actual connectivity. The ISP will also get the black eye if malicious packets come through its networks, for example, if computers on its networks are used in DDoS attacks. The hope is that the ISP would find and remediate the malicious code before that happens, but how far can (and should) the ISP go in the attempt to do so?

What Actually Works


The draft, in discussing options for informing users, talks about a "walled garden". A walled garden places the user's account in some degree of isolation from the rest of the network, cutting off access to some or all services. The presumption is that the user will notice that his access is cut off and will contact the ISP, initiating a dialog that can lead to remediation. The draft mentions that the walled garden can persist until the problem is remediated, or it can be lifted as soon as the user has been informed of the malicious code.
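
The draft doesn't prescribe a mechanism, but one common way to implement the notification half of a walled garden is to intercept the quarantined subscriber's web traffic and redirect it to an explanation page. Below is a toy sketch of that idea; the portal URL and quarantine list are made up, and a real deployment would do this at the DNS or routing layer, not in an application server.

```python
# Toy illustration of the HTTP side of a walled garden: every request from a
# quarantined subscriber gets redirected to the ISP's notification page.
# The portal URL and quarantine list here are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

PORTAL_URL = "http://portal.isp.example/quarantine-notice"  # placeholder
QUARANTINED = {"203.0.113.17"}  # subscriber IPs flagged by the ISP's scanning

class WalledGardenHandler(BaseHTTPRequestHandler):
    """Toy stand-in for an ISP-side intercept proxy."""
    def do_GET(self):
        if self.client_address[0] in QUARANTINED:
            # Quarantined subscriber: send them to the notification portal.
            # (A real garden must whitelist the portal itself, plus things
            # like OS-update and AV-vendor sites needed for remediation.)
            self.send_response(302)
            self.send_header("Location", PORTAL_URL)
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Normal traffic would pass through here.\n")

if __name__ == "__main__":
    HTTPServer(("", 8080), WalledGardenHandler).serve_forever()
```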

In my opinion, the walled garden should serve multiple purposes (sketched as a simple lifecycle after this list):

  • Inform the user of a potential bot or other malware, based on the ISP's scanning (etc.) activity;

  • Remain in place as a safety net while the ISP and the user discuss further malware scanning and remediation, and begin that process;

  • If the malware is confirmed, remain in place as a safety net until both the ISP and the user have done everything possible to remediate the situation.
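
Here's that lifecycle as a small state machine, just to make the sequencing explicit. The state and event names are my own invention, not the draft's; the point is that isolation persists until a clean scan, and re-detection sends the account straight back to the garden.

```python
# The extended walled garden as a small state machine. State names and
# transitions are my own invention, not from the IETF draft.
from enum import Enum, auto

class GardenState(Enum):
    NORMAL = auto()       # no signs of infection
    NOTIFIED = auto()     # ISP detected a likely bot; user informed, isolated
    REMEDIATING = auto()  # user and ISP are scanning/cleaning together
    CLEAN = auto()        # remediation verified; isolation lifted

def next_state(state: GardenState, event: str) -> GardenState:
    transitions = {
        (GardenState.NORMAL, "bot_detected"): GardenState.NOTIFIED,
        (GardenState.NOTIFIED, "user_responded"): GardenState.REMEDIATING,
        (GardenState.REMEDIATING, "scan_dirty"): GardenState.REMEDIATING,
        (GardenState.REMEDIATING, "scan_clean"): GardenState.CLEAN,
        (GardenState.CLEAN, "bot_detected"): GardenState.NOTIFIED,
    }
    return transitions.get((state, event), state)

if __name__ == "__main__":
    s = GardenState.NORMAL
    for event in ["bot_detected", "user_responded", "scan_dirty", "scan_clean"]:
        s = next_state(s, event)
        print(f"after {event!r}: {s.name}")
```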

In other words, keep the user's account isolated to at least some degree until the problem is fixed. Obviously, this is not something that users will necessarily like, especially if they don't understand what's going on. And here is where I think ISPs need to take more responsibility, from the start, when users first sign up for Internet service.

In general, ISPs sign users up for Internet service and then just let them go. For users like me, who know what they're doing on a network and just want to be left alone, this is a pretty good arrangement. But any user can become the victim of malicious code, no matter how sophisticated they are, and I think that ISPs are letting users down when they don't try to educate them about malware and what it can do. Just handing users a CD of "connection" software, which may or may not even contain basic AV and antispyware tools, is not enough to fulfill an ISP's responsibility to keep bots and malware from reinfecting the Internet from its users' machines.

That's why I think the extended walled garden approach is necessary, combined with the ISP stepping in to help the user confirm the presence of the malware and then remove it. But I also think that the ISP has to take more responsibility up front, helping the user understand what malware is, what it can do, and how to mitigate possible threats. In other words, I like the draft as far as it goes, but I don't think it goes far enough.