Thursday, July 18, 2013

a breakout of breaking in (to infosec)

Someone asked me recently how to "break in" to Information Security from another IT field. Thinking my response might be useful to others, I'm sharing it here.

Note: a colleague corrected me on the number of years you need in the industry for the cert...thank you!

*****
Hi Dave,

Thanks for contacting me and I hope you're well. Here's some information on how to break into Infosec. 

I got lucky, I got into Infosec when it was a new thing, but now it's a pretty competitive field. The most important thing is to have a certification, because the US Government has decided that's how to tell if someone knows what they're talking about, and the rest of the country has followed suit. 

There's more than one certification, but the one that has the most prestige (currently and for the last 15 years) is the CISSP, administered by ISC2. You can find out a lot of information on their website (https://isc2.org) but here are the basics:

  • Along with the certification is the expectation that you have worked in the field of Infosec for five years (in two of the "domains", listed below). If you have not worked in the field for five years, you can still be awarded the certification as an "Associate". It's the same as the full certification, and when you've been in the industry for five years, you're a full member without having to do anything else. You can also possibly get a year "off" this requirement if you have (for example) a degree in Infosec or another related certification.
  • The certification period is for three years, after which, if you've fulfilled certain requirements, you will be recertified. If you have not fulfilled the requirements, you will have to retake the certification test. You don't want to have to do this.
    • Each year you have to pay an $85 annual maintenance fee. Currently, they're letting people defer the fee until the recertification period, meaning that you can choose to pay three years' worth of fees all at once (with a small discount). Some companies will allow you to claim the fee as an expense.
    • During the recertification period you also have to earn Continuing Professional Education points, or CPEs. You need to earn 120 every three years and at least 20 per year (i.e. you can't earn them all in the last year). This is to prove you're staying on top of the industry. Earning CPEs is really easy; you typically get one for every hour you spend doing something related to the industry (aside from actually working). So if you attend online classes and webinars, go to conferences, write articles on Infosec, read/review books, and so on, you'll have no problem. I'm usually drowning in CPEs. 
  • It's not necessary, but it's a really good idea to take some sort of training for the CISSP exam. This could consist of buying one of those thick books with the CD in the back, or you could take live or computer-based training. This can vary in price, but as an example, what ISC2 charges for their Live Online Seminar series is $2,495. In comparison, their official textbook is $79.95. 
  • ISC2 also offers a free webinar series to give you information on what you need to know for the exam. You can sign up for this at https://www.isc2.org/cissppreview/Default.aspx.
  • The current price for the CISSP exam is $599, and they give you six hours to take it in (I don't test well, and I needed about four). You can take it online, which wasn't an option when I took it, or you can take it at a test center. Typically if you take a training course, the course offers an opportunity to take the exam at the end, and I would definitely recommend doing this if you took a class. Otherwise it's more convenient just to schedule the test online and take it in the comfort of your home. 
  • The exam is 250 multiple choice questions. 25 of the questions are experimental questions which are not graded - they're always changing the content of the test. A score of 700 out of 1000 is a passing grade; however, you are not told your score when the test is graded, just whether you passed or failed.
  • The content of the exam involves knowledge of ten "domains" of expertise:
    • Access control
    • Telecommunications and network security
    • Information security governance and risk management
    • Software development security
    • Cryptography
    • Security architecture and design
    • Operations security
    • Business continuity and disaster recovery planning
    • Legal, regulations, investigations and compliance
    • Physical (environmental) security
  • Once you pass the exam, you must find another CISSP holder in good standing to endorse you. This should be someone who knows you well in a professional capacity. 
  • There is a "junior" version of the CISSP, the SSCP, but there's no sense in getting it - you're much better off with the CISSP.
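The CPE arithmetic above (120 points per three-year cycle, with at least 20 per year) can be sketched as a quick sanity check. This is just an illustration of the rules as I've described them; the function name is mine:

```python
# Sketch of the CPE rules: 120 CPEs per three-year cycle,
# with a minimum of 20 earned in each individual year.
def cpes_satisfied(yearly_cpes, total_required=120, yearly_min=20):
    """yearly_cpes: a list of three per-year CPE counts."""
    return (sum(yearly_cpes) >= total_required
            and all(year >= yearly_min for year in yearly_cpes))

print(cpes_satisfied([40, 40, 40]))   # True: meets both rules
print(cpes_satisfied([10, 10, 100]))  # False: first two years are under 20
print(cpes_satisfied([20, 20, 20]))   # False: only 60 total
```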
Having a CISSP will automatically open a lot of doors, because without it, most employers won't even talk to you about Infosec jobs. If you can demonstrate that your CISSP plus a knowledge of good coding practices makes you more valuable than someone who (for example) has been working in Infosec longer but doesn't have as diverse a background, you'll be in good shape.

In Infosec, you will never make BAD money. I had issues finding work in the Hampton Roads area, but that was because I didn't have a lot of gov/mil experience, and that's what that particular area demanded. The NY/NJ corridor has a lot of opportunities in the private sector. 

Once you've decided to definitely go for getting your certification, I have some useful employment contacts. My employer is great and I really love working for them. They won't pay any of your certification-related fees, though, which is typical of smaller gov/mil contractors. A lot of private sector companies will pay those fees. 

Please let me know if you have any questions about any of this. I'll be happy to help out however I can.

Wednesday, June 26, 2013

Were You Surprised About The NSA?

Spying on Americans isn’t new. What’s new is that somebody blew the whistle.

The big news in the Federal government for the past month has been the latest leak, i.e. Edward Snowden revealing the NSA’s PRISM and all that entails. There have been a host of denials, both from the NSA and from service providers, and a lot of people are very upset about the possibility of “their data” being spied on by the government.

I’m a little less naive about the situation. For one thing, I’m quite aware that once I’ve posted something somewhere, or used a “cloud” service, that data is no longer “mine” in the way that (say) my jewelry is mine. Perhaps it should be, but it isn’t, and to think that an entity as powerful as the NSA isn’t accessing “your” data in whatever way it wants shows a great deal of credulity. There has never been a time when the US government wasn’t able to spy on civilians, nor any reason to think it hasn’t been doing so. Up until recently, what mitigated the situation was that there was simply too much data to analyze for such spying to be useful without a definitive target. Now there’s reason to think that’s no longer the case, and that’s all that’s really changed recently. Big data and the power to crunch through it has its disadvantages.

The really amazing thing about the leak is that it happened, that somebody had not only the ability to find out hard data about the NSA’s activities but the ability to get away (so far) with it. The government has used the former fact to downplay the importance of the leak and/or try to outright deny that Snowden accomplished anything significant. That strategy having failed, they’ve tried to justify the surveillance. That strategy isn’t really working either, mostly because the Obama administration has more than once stated that it is justified in continuing Bush-era surveillance and defense programs instituted soon after 9/11, and the public is fed up with it. PRISM is seen as one more example of the current administration’s overstepping itself, whether or not that is true.

Snowden has stated that he was inspired by other whistleblowers, such as Bradley Manning, a soldier who has been denied due process for his alleged crimes. Manning, however, simply took advantage of a glaring lapse in security, whereas Snowden’s actions took more expertise (a lot more, I hope, frankly). It’s actually hard to say what motivated either man. The idea that the NSA might actually cease to conduct surveillance on American citizens is laughable. The only thing that’s fairly obvious is that Snowden was very aware of what he was doing, and what might be the result - for him - of his actions (something that was never clear about Manning’s decision to leak data).

Not surprisingly, Snowden’s actions - so soon after Manning’s, and with Julian Assange’s future still in doubt - are being seen by many as heroic, even when Manning’s and Assange’s were not. It’s almost funny that the public is more horrified at being spied on than it was at “Collateral Murder”. Funny for those of us who weren’t surprised by it, that is.

Thursday, March 21, 2013

What the Government Needs vs. How the Government Thinks


The Federal Government is trying to update its approach to security; will it succeed?


At the end of 2010, the then-CIO for the US Government, Vivek Kundra, published a paper outlining 25 points to reform Federal IT Management. I’d heard of it at the time, but not read it. However, with the President having recently signed the sequestration order into law, it’s being passed around again with “where are we now and where are we going with this” notes attached.

There's nothing wrong with the paper, per se. In a nutshell, it says that the government should focus its energies on programs that yield obvious benefits, and on hiring programs that will attract rising IT stars. The problem is that, for the most part, the Federal government has no idea what will attract such people to work for and with it. And the reason for that is that the way most IT professionals - especially the young geniuses the government is hoping to snare - think is completely antithetical to how the government works, and vice versa.

Like any generalization, of course, there are exceptions to what I’m talking about. There are certainly a lot of brilliant people whose way of thinking is not antithetical to the way the government works, and many of those people are in fact working for the government, or working for companies that support the government. And there are some people who, regardless of the fact that they don’t have a meeting of minds with the government, will still choose to work for it in some capacity, for various reasons. But it’s very unlikely that the government will be able to attract the sorts of people that Kundra’s paper is talking about, at least in large quantities. The fact that the government representatives who talk about this hiring concept don’t realize how unrealistic they’re being is rather worrying.

There are several reasons why the government won’t, for the most part, attract the best and the brightest young minds in IT. First of all, as I’ve mentioned, there’s the fact that the government way of thinking and doing things is antithetical to your typical hacker. (Note to the reader: if you feel that the term hacker is pejorative, then you are not one.) This isn’t true in all cases, and certainly a number of us feel that it’s worthwhile to try to work from within the government system, but I’d venture to guess that those of us who chafe less in the government are a bit older and more staid than the “cyberninjas” the government is trying to attract (most of whom ridicule that term). The government already has a lot of people in its employ who might be able to fit their idea of “cyberninjas”, but the government is not willing to spend the money to train them.

And this lack of desire to spend money where it would do good leads to the second problem: the government works by spending the least amount of money possible to achieve the best result it can. If a particular company or entity is the lowest bidder on a project while promising the same or a better result, that entity will be awarded the job. Because the citizenry are the ones paying for the project through their taxes, that’s the way it has to be. However, it doesn’t give the government a lot to work with in terms of attracting “cyberninjas”, who can make a lot more in the private sector. In general, a government job is comfortable and secure, but it can’t match the perks and thrills that come with being a famous name in IT culture. In fact, the government would frown on a lot of those perks and thrills (which brings us back to the difference in mindsets), and it wouldn’t want to or be able to fund even those it would not frown on (such as extensive traveling to security-related conferences and training).

A third issue is something I touched on in my previous article about Continuous Monitoring. The Federal government is so focused on compliance that it’s seemingly forgotten that there’s a lot more to security. I see a lot of people saying “what is the right thing to do” and other people saying “it says right here that...”. Adherence to policy is necessary and I’m not trying to knock it, but it’s not the be-all and end-all of securing information. The government as an entity may know that - although I’m not taking any bets - but I would estimate that 99.9% of the people who are actually in the trenches doing security don’t have the first clue how to proceed, other than finding out what the policy is so they can tell other people how to adhere to it. And that is a problem.

Vivek Kundra’s paper was written only a couple of years ago, and it’s still very germane. However, I’m not sure how realistic it is. Kundra has himself returned to the private sector, but I wonder what he thinks now, especially about methods to attract young and brilliant IT professionals. The government really needs fresh blood, but I’m not sure it will know what to do if it can get what it needs.

Wednesday, January 23, 2013

Continuous Monitoring: You’re Doing It Wrong

The Federal Government is not really sure what it’s talking about. Are you?

[This article also appears at Great Lakes Computer.]


One of the surprising challenges of being an Information Security professional is keeping up with the current buzzwords and jargon in the industry, and it’s made more difficult by the fact that not everybody uses those terms in the same way. For instance, given the posture of an organization and whether or not it’s sales-based or based on something else (research, defense, etc.), the term Data Loss Prevention (DLP) can mean different things. However, when I went back to work for the US Federal Government, I thought that Continuous Monitoring was one term that was clear as crystal.

It turns out, however, that the government, which of necessity places a huge emphasis on regulatory compliance, is using the term in an entirely different way from the commercial sector. Continuous Monitoring, as used by the private sector, is actually a relatively new concept, and that’s because it’s only really been in the last few years that we could spin up machines with enough speed and power to do the job in real time.

What Continuous Monitoring should mean is a way to watch what’s happening on your network in real time, integrated with logging so that you can go back and follow patterns, graph anomalies, and so forth. For instance, a good CM setup should tell you when most of your users are using social media, or if there’s a spike in activity to or from a certain address, and so on. The information has always been there, but it’s been hard to find, and now we have the means to automate grabbing those details and presenting them in such a way that network and security admins can form a much better mental picture of what’s going on.
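The "spike in activity" case above can be sketched very roughly in a few lines. This is a toy illustration of the idea, not any particular product's approach; the traffic counts, field names, and threshold factor are all made-up assumptions:

```python
# Toy spike detector: count events per destination over fixed time
# windows, and flag any destination whose latest window is far above
# its own running average. Real CM tooling would do this continuously
# against live logs; here the counts are hard-coded for illustration.
from statistics import mean

def flag_spikes(window_counts, factor=3.0):
    """window_counts: {dest: [events_per_window, ...]}.
    Returns destinations whose latest window exceeds factor * their
    average over all earlier windows."""
    flagged = []
    for dest, counts in window_counts.items():
        if len(counts) > 1 and counts[-1] > factor * mean(counts[:-1]):
            flagged.append(dest)
    return flagged

traffic = {
    "10.0.0.5": [12, 9, 11, 10, 95],   # sudden spike in the last window
    "10.0.0.7": [30, 28, 31, 29, 33],  # steady traffic
}
print(flag_spikes(traffic))  # ['10.0.0.5']
```

The point isn't the threshold math (a real deployment would use something smarter); it's that the raw data to do this has always been in the logs, and automation is what makes it usable.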

However, NIST and the Federal Government have made this term their own, and they mean something completely different - and far less useful, security-wise - by it. I think that the government’s use of the term completely misses the point and buries CM under a pile of compliance paperwork, and the result is that nobody is really watching the network the way that they should be.

What the government means by Continuous Monitoring is looking at the NIST-defined security controls - policy statements dealing with regulatory compliance - for a given network and deciding if they are applied correctly, given the threats list that the government subscribes to and the vulnerabilities that routine scans discover.

NIST does define CM as “maintaining ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions”, which sounds as if it would fit in with the automated approach that I described briefly above, but the problem is that most of the people who are actually responsible for performing this monitoring are using completely manual methods to demonstrate compliance. For instance, I have been told that presenting auditors with documented evidence of testing security controls in small batches over time constitutes Continuous Monitoring because it shows “continuous” action promoting the security of the network I’m responsible for. I suppose it doesn’t hurt, but that’s not what CM should be.

Change is slow in the government, but I’m hopeful, as many agencies (including mine) are installing new vulnerability and monitoring solutions that I think will really be an eye-opener on the subject of CM. Compliance is important, but it doesn’t define Information Security; it’s just one part of the whole.

Friday, May 18, 2012

...got Klout?


Today I told a recruiter that I didn't want to relocate to a certain metro area for the job he was peddling, and he demanded to know why. Uh...?

But that's not what I want to talk about. Yesterday and today I was involved in a spirited discussion about a fairly new (3 years old) social media platform called Klout. Klout, which purports to measure your "influence" in social media, has been getting a lot of attention recently, mostly due to an article in the current paper edition of Wired.

Like any fairly new meme, Klout is on the receiving end of a lot of bashing, mostly from "old guard" IT wags who term it invasive, meaningless, part of the social media "popularity contest", and so on. Even xkcd's Randall Munroe had something negative to say about it. And I say they're missing the point.

In the last decade, Social Media, as a concept, has become entrenched in First World culture. People who, ten years ago, used the Internet rarely or not at all, use it frequently now because of Social Media and its influence. But a lot of people like me, whose interaction with computer technology predates Social Media (or even the Internet), have a very "get off my lawn" reaction to a lot of Social Media platforms. Having existed before it, and having not thought it up themselves, they think they're above it. (A good counterexample to this type of thinking is Marc Andreessen, who is also featured in this month's Wired.)

I don't love all of it, myself. For instance, although I have an account on Facebook, I have never liked the site and rarely read posts there (I seem to post a lot, but most of it is auto-posted from other sites). I wasn't an early adopter of Klout; I knew it existed but I wanted to see where it would go before jumping on the bandwagon. Wired's article told me what I wanted to know and I jumped in, thereby raising my "score" - which already existed since I tweet publicly - from 22 to 49 (out of 100) in two days.

Does it mean anything? Is it just all a big popularity contest? Of course it is. Go look up who has a score of 100 and you'll see that that's the case. But that is the point. The "Net" - which started out as a government defense research project - has become a world-wide party, and the popular kids - who in many cases are also technological and artistic innovators - have taken over. You don't have to play if you don't want to. Your priorities may lie elsewhere, and there's nothing wrong with that. But the fact that a technological innovation doesn't fit your worldview is no reason to disparage it. It didn't work for Jacquard's detractors either, a fact for which modern IT curmudgeons should be grateful.

Wednesday, May 9, 2012

the importance of being anonymous...?

I haven't posted here in a while. I've never been more than kind of intermittent, but currently I have a good reason; I'm writing weekly articles for Crain's Cleveland Business. They're a little non-technical (basic security for businesspeople), so the content is probably not what I'd choose to share here anyway, but with 500-1000 words per week going there, I'm not as motivated to post here.

That said...last night I went to our local ISSA chapter meeting. They weren't charging admission to the meeting, so it was basically free CPEs (and free pizza). I was a little late for the meeting due to mixing it up with another meeting I have next week, so I was a bit flustered as I entered the venue.

As I approached the room where the meeting was being held, I saw a sign on the wall about the meeting saying to text to a certain phone number for a door prize. I had my phone in my hand anyway, so I paused and sent the SMS, then proceeded into the meeting.

Realizing I was late, I quickly sat down. The lecturer, Branson Matheson, was talking about social engineering, which is a subject that's very interesting to me. During the lecture he mentioned the "hack" he'd perpetrated and wondered aloud how many phone numbers he'd captured that way.

Yup, okay, he got me. Turned out, too, that I was the only person in the room who fell for it, although that fact is mitigated to some extent by the fact that not everybody in the room actually saw the sign. But really...did he in fact "get me"? What exactly happened here?

It's no secret, I'm looking for a job. Because of this, I give people my contact details several times a day. I WANT people to have my phone number. In fact, had I arrived at the meeting early, as I'd intended to do, I would have been passing out my virtual business card (via Cardcloud) to anybody who would take it. People very often do, in fact, exchange business cards at such meetings, which will usually include a mobile number among others. So, practically speaking, Mr. Matheson didn't actually gain any knowledge that I wasn't willing for him to have, and in fact, he didn't have a name attached to that number. 

Mr. Matheson's point was that there's nothing stopping a hacker from putting up signs randomly saying "text to [phone number] for free offers" and actually collecting phone numbers that way, and it's a very good point. I was fairly sure, when I sent my text, that I was sending it to an officer of the local board (and he is, in fact, the VP of the local chapter), so while I "fell for" his trick, I'm actually just as happy that he has my phone number. Maybe he'll refer me to a new job. ;)


Tuesday, January 10, 2012

interview questions

So over here you can find the "top 25 oddball interview questions of 2011". Supposedly these questions were really asked in job interviews.

So I figure, what the hell, I'm currently interviewing...so here's MY take on THEIR take on these wacky questions. I'll split them up into a few entries.


1. “How many people are using Facebook in San Francisco at 2:30pm on a Friday?” Asked at Google

This is one of those questions that there's not going to be a simple, correct answer for. Given that, it seems to me obvious that the object of the question is to ascertain your problem-solving skills. In other words, how would you find out the answer?

It may or may not help to know that the population of The City is roughly 800,000. You have to consider that anywhere from a quarter to a third of these people do not use Facebook at all (at a guess). On the other hand, people who do not actually live in San Francisco proper are commuting into it to work, and most of these people probably do use Facebook. Ergo, it's not out of the question to just assume that there are around 800K people in town on a weekday who could be using Facebook.

Further, you have to consider the time, i.e. "siesta time" on a Friday. 2:30 is a little too early to skip out for the weekend (unless Monday's a holiday, in which case, why did you even come in?), but it's a great time to check in with your friends. Therefore, I'd say that for any given non-long-weekend Friday, there could be a whole 800K people using Facebook within the San Francisco City Limits.

However, that's just a very rough guess, so I would suggest that if the interviewer really needed to know the answer, searching his network logs (or using a SIEM to search) would tell him how many people on his network, out of the total number of employees, were using Facebook at that time. From that one could extrapolate what percentage of the rest of all white-collar workers were using it, which could further lead to extrapolation for the rest of the population. Don't forget to include pretty much any student with a smartphone.
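For what it's worth, the back-of-the-envelope estimate above can be written out as a few lines of arithmetic. Every number here is a guess lifted from my reasoning, not real data:

```python
# Fermi estimate: potential Facebook users in SF on a Friday afternoon.
# All inputs are rough guesses, as in the text above.
city_population = 800_000    # approximate SF population
non_user_share = 0.30        # guess: a quarter to a third don't use Facebook
commuter_share = 0.30        # guess: commuters roughly replace the non-users

potential_users = city_population * (1 - non_user_share + commuter_share)
print(f"~{potential_users:,.0f} potential Facebook users")  # ~800,000
```

Which is exactly why the estimate lands right back at 800K: the two fudge factors are assumed to cancel out.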


2. “Just entertain me for five minutes, I’m not going to talk.” Asked at Acosta.

My consulting rate is $250 an hour, with an hour minimum billed. If you want entertainment, you're going to have to pay for it, and I would prefer that up front, please. Next question.


3. “If Germans were the tallest people in the world, how would you prove it?” Asked at Hewlett-Packard

Prove what? Oh, that they are the tallest people in the world? First of all, measure the height of about 100 German people. Next, measure 100 people (or, if 100 people aren't available, as many as possible) from every other country and culture in the world. Tabulate the results. By the way, Germans aren't the tallest people in the world.
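The measure-and-tabulate approach could be sketched like this; the sample heights (and the countries sampled) are invented purely for illustration:

```python
# Tabulate sampled heights per country and pick the tallest by mean.
# In reality you'd want much larger, properly randomized samples.
from statistics import mean

samples_cm = {
    "Germany":     [178, 182, 175, 180],
    "Netherlands": [183, 185, 181, 184],
    "Japan":       [171, 173, 169, 172],
}

tallest = max(samples_cm, key=lambda country: mean(samples_cm[country]))
print(tallest)  # Netherlands
```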


4. “What do you think of garden gnomes?” Asked at Trader Joe’s

I prefer flower fairies. Gnomes don't really do anything for me.


5. “Is your college GPA reflective of your potential?” Asked at the Advisory Board. 

God no. I wasn't really ready for college, mostly because I went to a public high school and it was way too easy for me. I got away with murder. In college, I actually had to work, and my GPA ended up not being very good. I learned from that experience, though -- and I've never stopped learning since.


6. “Would Mahatma Gandhi have made a good software engineer?” Asked at Deloitte. 

Gandhi was a lawyer, and a good one, before he decided to effect India's liberation. He was intelligent, innovative, and daring, all of which are good traits for an engineer. He didn't live simply because he was some kind of Luddite; he was doing it to show solidarity with the poor. This shows that he was adaptable and disciplined. Yeah, I'd hire him.


7. “If you could be #1 employee but have all your coworkers dislike you or you could be #15 employee and have all your coworkers like you, which would you choose?” Asked at ADP. 

This isn't high school. If I am the #1 employee, and I got there because of my talent, hard work, and willingness to be a team player, then my co-workers will respect me, and that's what is important. Are you really asking me this?


8. “How would you cure world hunger?” Asked at Amazon.com.

First I would find every single corrupt person in the world and shoot him. Then...oops.

Seriously, there is no way to end world hunger. It's not that we -- as a species -- don't have the resources to do so. If it were as simple as just growing enough food to feed every single person on this planet, we could do that. But you can't just grow food. You have to distribute that food. You have to pay the people who are growing it, and the people who are distributing it. That's where the problem lies: the food isn't being paid for or distributed, and the reason is that people in power are keeping that from happening so as to control the people who need the food.

I guess if I wanted to end world hunger, I would force everyone into a cooperative hive mind. But while we're all individuals, there really isn't any way, in my opinion, because the more individuals you have, the less cooperation you can achieve.


9. “Room, desk and car – which do you clean first?” Asked at Pinkberry.

Whichever one needs it the most, balanced by how much I need that thing to be clean. If I'm not using my car for a given period of time, I'm not going to care if it's clean, for instance.


More in another blog post...this is kind of fun!

Saturday, December 31, 2011

what my cissp means to me

I've had my CISSP for six and a half years. I would have had it before that, but I couldn't really afford study materials and the test. When the company I was working for in 2005 offered to pay for it, I got it pretty much immediately.

While on the one hand, it's great that my company was willing to pay for it, it was also a bad thing because it signalled a trend. When a certification is necessary to obtain or retain a job, the idea is supposed to be that the people who hold that job are the best and brightest, but what it really means is the opposite, and the certification becomes devalued. When I got my MCP in networking back in 2000, it was the furthest I wanted to go, because Microsoft certification had become a joke. Now the same is happening with the CISSP. A lot of people say it's already happened.

When I took the CISSP exam in April of 2005, I had just finished a week-long bootcamp, also paid for by my company. I don't want to say that the bootcamp didn't teach me anything I didn't already know, but I will say that there was nothing that I didn't know at least something about. For instance, I learned things about encryption that I hadn't known before, but I'd certainly known the base concepts and wasn't "lost" like a lot of the other people in my class. I was pretty sure that I would pass the exam because I had the requisite 10 years experience in Infosec already.

That said, walking out of the exam, I wasn't sure I had passed. I wasn't the first person to leave but I left a lot of other people in there. I was very hopeful, but I honestly had no idea how I had done. I hear this happens a lot. I'd been sitting in an uncomfortable chair all week, and I don't test well (which is part of the reason why I don't have a whole STRING of certs), but I was hopeful.

I was elated to find I had passed. A lot of other people who took the boot camp with me -- including another person from my company -- did not pass at their first sitting. Some of them subsequently went on to get their CISSPs later. Some of them left Infosec. I was glad that part was over, but harder than taking the test was finding a sponsor. It's not that nobody would sponsor me; it's that I took the certification seriously and wanted someone who actually knew my work to endorse my certification. I eventually asked a representative from one of my company's customers, a person with whom I'd worked extensively. Neither of us has ever had cause to regret that decision.

As I say, I take my certification very seriously. As someone who is largely self-taught -- meaning not that other people didn't help to teach or mentor me but that I was not spoon-fed my knowledge, choosing to actively pursue my IT education through nontraditional means -- I am deeply grateful to have that certification and spend a good deal of my time continuing to educate myself. Unfortunately, the more I know, the more I realize I don't know. But that also gives me hope, because I've never believed that there's such a thing as an expert in any field. In fact, if one of my esteemed colleagues -- most of whom are men -- calls himself an "expert", that's a pretty good indication he's not. (It's okay if someone else says it. Just, really...sooo tacky to say it about yourself. Just sayin'.)

I take it seriously, but I've watched it become devalued. For instance, the DoD mandates that IT personnel of a certain level have or obtain a CISSP, including military personnel who are assigned to IT jobs. Well, uh, great, except that I can personally state that some of those personnel have no idea what they're talking about when it comes to IT in general and Infosec in particular. Since they can't exactly be fired (or, not easily) from their jobs for not passing the exam, it follows that the exam must be rendered passable for them, i.e. through extra coaching and the like. Basically, in the end, there are a lot of people who have CISSPs but no real interest in, or aptitude for, Infosec -- never mind the passion that I and a lot of other "older" CISSPs bring to the mix. It's become just another checkbox, like old technology. I see people disparage the cert every day now.

A side effect of all this is the frequent assumption that if you have a CISSP but no other certs, or if nobody personally knows your work, then you're a newbie and nothing you say is meaningful or relevant. I've been treated this way several times by some of my esteemed colleagues, who assume that I'm one of these newly-minted CISSPs who got handed the cert instead of earning it. And really, what can I say? I'm not one of the "old salts" in this business, as some of them are. But neither am I one of the newly-minted CISSPs currently rolling off the assembly line. I've been "doing Infosec" since before it became trendy, and I'm certainly passionate about it. I have paid -- and continue to pay -- my dues.

My certification continues, and will continue, to mean a lot to me, regardless of what other people think. I may be the manic pixie dream girl of the Infosec community, but I'm here to stay...and so is my certification.

Thursday, March 31, 2011

why am i selling avon?

A little while ago I started doing something that I've toyed with doing off and on: I started an Avon business. Yup. I'm a girl.

I've been a little embarrassed about it, I guess, probably because I'm underemployed and I don't want to come off as desperate. I'm not desperate, and if I were, I probably wouldn't be starting an Avon business, because you really need to put money into it (as with any business) before you can see any kind of profit.

I also don't want to appear to prey on people. "You're my friennnnnnd...buy from meeeeeeee" just isn't my style. I've mentioned it to people, and I did sort of bully my mom into switching to me as her representative, but my feeling is that if and when people buy cosmetics and fragrances and the other stuff Avon sells, it should be because they want to, not because they feel sorry for their friend. I don't mind being (gently) pushy about asking people to buy from me as opposed to someone else, but to buy it in the first place...no.

That said, one of the reasons why I started this when I had some free time but wasn't yet strapped for cash (both courtesy of being between infosec positions) is that I wanted to buy some of the products myself and take advantage of special representative-only offers and samples, so that I could say with authority, "Hey, I like this product and I think you would too," or just respond knowledgeably when people asked me about the product line. For instance, Avon heavily touts their Lotus Shield anti-frizz product, so I ganked a sample packet and it's currently sitting in my hair. I have a very sensitive scalp and very fine, wavy hair, so I should soon know if the product irritates my scalp or makes my hair feel icky, and I'll know how I want to approach selling it. This is something Avon encourages, i.e. getting to know their products, and since I've always liked Avon, they don't need to ask twice.

The question remains: Why am I, a network security analyst, selling Avon? The reasons are threefold:

1. I love makeup. I know, I know, as a feminist I am supposed to feel that wearing makeup is succumbing to oppression or something like that. But I don't. I don't spend a lot of money on makeup or hair or clothing, but I do enjoy buying and using it, a lot.

2. I love showing other people how to do stuff. Showing people how to "do infosec", in whatever way, is one of my favorite things about being in the field, and a big part of why I'm good at whatever job I may hold. I'm hoping it will be a big part of my Avon business too: showing people how to do stuff and helping them find the best products for their particular needs, just as I do in infosec.

3. I want to know if I can do it, and by "do it" I mean sell something. I've always felt like a total failure at sales of any kind. Believe it or not, I'm painfully shy and I have a hard time approaching people in a selling context. So I'm hoping that selling something that I happen to love myself, when it's not a make-it-or-break-it for me, will make me more confident about the concept of selling in general. I'm not expecting this to be my livelihood; I just want to put myself outside my comfort zone and see what happens.

So ding dong! Avon calling!

Tuesday, March 29, 2011

data loss redux: thinking organically

A little while ago I wrote about DLP, or Data Loss Prevention, and how the term is something of a red herring because, in reality, everything we do is about preventing data loss; ergo, the concept can't be neatly productized. I still feel that way.

However, a few days after I posted it, I was contacted by a fellow named Pablo Osinaga, who has co-founded a startup called Kormox. He wanted me to see his company's DLP solution, profiled by SC Magazine.

After reading SC's blurb on the subject, I was quite intrigued, and arranged a web/phone meeting with Mr. Osinaga. For a little over an hour, we discussed Kormox and the concept of DLP.

As I said, DLP is a very difficult concept to productize. Everyone needs to prevent the loss or leakage of data, but everyone -- every enterprise, every business, every organization, even every person -- has different data and different types of data to protect. Some organizations are concerned with mobile data; some with file shares; some with PII; and so on. No one vendor -- no one product -- has a fully comprehensive DLP solution, because what DLP means depends so heavily on each organization's mission and needs, which not only differ among organizations but can change within an organization over time.

One of the first things that Mr. Osinaga mentioned, in presenting his company's solution, was that enterprises have become more organic and less structured. I could not agree more. I have worked for many different security solutions vendors, and I hear over and over about "special snowflake syndrome": the notion that every organization thinks it's "different" in some way when really, the vendors say, they're all the same. The trend, with every security vendor I've worked with, is to pigeonhole potential and existing customers -- to tell them, in essence, that they can't have what they say they want, and to fit them to the solution that the vendor has, in its infinite wisdom, envisioned and created. Yet as time goes on, and as Mr. Osinaga noted, enterprise structure is becoming more fluid, less definable, and less able to be pigeonholed.

Kormox's solution starts with data classification. It's so simple, and so logical. Of course you have to classify your data. But it's not enough to say "I have to protect medical records" or "I have to protect credit card numbers". In the DLP-productization game, vendors talk about what kind of data you want to protect, and then they talk about how they're going to protect it, but they don't really cover the territory of what, exactly, your data means to the people who are using it. That's your problem.

And that's how Kormox differentiates itself from the crowd: data classification is a major step, and it involves finding out not only what the data is (as opposed to merely what kind of data), but the flow of the data: where it is, who is using it, how they use it, where it's going, where it's been, and so on. All this is part of the classification, and it brings DLP back to the true "asset management" model of Information Security, where the asset is the data itself, not the (often fungible) hardware on which it rests.
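
As a thumbnail of the idea (my own illustration in Python, not Kormox's actual data model or schema), a classification record that treats the data itself as the asset and captures flow as well as type might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """Classification record for a data asset: what the data is, plus
    its flow -- where it rests, who uses it, and where it travels.
    All field names and examples here are illustrative."""
    name: str                                          # e.g. "patient intake records"
    kind: str                                          # e.g. "medical record", "PII"
    locations: list = field(default_factory=list)      # where the data is at rest
    users: list = field(default_factory=list)          # who uses it, and how
    destinations: list = field(default_factory=list)   # where it's going / has been
```

The point is that the classification is richer than a label like "medical record": the flow fields are what let controls follow the data instead of the hardware it happens to sit on.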

After the data has been classified, the product allows the asset owners to implement controls in a similarly organic fashion. In essence, it takes the organization from the situation of "I know I need to protect our data" to "I know where and what all our data is, how it's used, and what controls are on it" -- something that no other DLP solution does.

I'm not laboring under an illusion that this product is perfect; no product could be. But I do think that Kormox is going in a necessary direction with their concept of data flow as a part of classification. At the moment it's a bit clunky looking, but from what I saw in our meeting, it is definitely worth a look.

I'd like to note that I am in no way compensated for writing about Kormox; I'm writing about it because Mr. Osinaga contacted me as a result of my last DLP article, and so I thought it was only fair to talk about what I found out in our meeting.

Friday, March 18, 2011

data loss prevention: a red herring

A few years ago, the acronym DLP, which stands for Data Loss (or Leakage) Prevention, hit the security market. Every enterprise was crazy for it, every vendor touted it, and everybody had a different idea of what, exactly, it was.

Half a decade has passed and we still don't know. The problem is that DLP is a misleading term, because preventing data loss is the key reason for information security in the first place. If you think about it, every component of your enterprise's security solution, from policy to compliance reports, is in place to prevent your data from being lost or leaked.

There is no panacea for the problem of potential data loss, no matter what your vendor of choice might tell you. The smartest vendors don't even try to claim such a thing. Because nobody can agree on what, exactly, DLP is, nobody has a complete solution. However, the industry in general does agree on a few key concepts:

- A product that can recognize credit card numbers, SSNs, and other identifying data both at rest and in motion and (better yet) control the transmission of such data is a necessary part of your security solution, if you deal with such data

- A product that can tag certain types of files and control the transmission (in whole or in part, encrypted or not) of those files is key

- A product that can recognize certain types of removable storage devices being attached and/or written to and IMMEDIATELY control this activity is important

- If your business employs "mobile warriors" and you do not implement some sort of whole disk and file encryption, your data is at risk

- If your employees use mobile phones for business purposes then you should have some control over what type of data they can access on those devices

These are just a few of the concepts behind DLP, and those concepts keep changing as new risks are discovered. Adding to the complexity is the fact that some issues will apply to some enterprises while being unimportant to others. For instance, the DoD never, ever transmits SSNs in the clear, while many private sector businesses do so as a matter of course (although they shouldn't). So when the DoD talks about data leakage, it is most often concerned with SSNs and other personally identifying information (PII), and protecting credit card information is not so much a concern. The private sector, on the other hand, is much more occupied with protecting credit card information and less with (say) SSNs, driver license numbers, and other types of PII. Ergo, the part of a DLP solution that identifies certain types of data at rest and in motion needs to be flexible and customizable to be useful for the environment it's being used in.
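
To make "flexible and customizable" concrete, here's a minimal sketch, in Python, of how pattern-based identification of data like SSNs and credit card numbers might work. The pattern names and regexes are my own illustration, not any vendor's actual rules; the Luhn checksum is the standard trick for weeding out random digit runs that merely look like card numbers:

```python
import re

# The pattern set is configuration, not code: each organization tunes
# it to the data types it actually needs to protect.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def luhn_valid(number: str) -> bool:
    """Luhn checksum: true for digit strings that could be real card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def findings(text: str):
    """Scan a chunk of data (at rest or in motion) for configured patterns."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            if label == "credit_card" and not luhn_valid(match.group()):
                continue  # matched the shape of a card number, but fails the checksum
            hits.append((label, match.group()))
    return hits
```

A real DLP engine applies this kind of matching at scale against traffic and storage rather than strings, but the core idea is the same, and it's exactly why the pattern set has to be customizable rather than baked in.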

Whole disk and file encryption is probably the easiest piece of the DLP pie to choose and to implement. In fact, you can get your whole disk encryption from one source and your file encryption from another, and as long as they don't fight with each other, you're fine (remembering, of course, that nothing is 100% foolproof). But after that, it gets more complex, and vendors only make it worse when they try to convince you that their solution does everything you need for DLP. Well, no, it doesn't.

A smart executive will realize that DLP is not a single concept, and certainly not a single product; rather, it's a method. The first thing to do is to revisit your security policy. If you do not have a section detailing the specific types of data that you need to protect from loss/leakage and some (probably non-vendor-specific) methods for doing so, then it is time for a rewrite. [Note: you should be revisiting and perhaps editing your security policy on at least a quarterly basis anyway.] Sit down with your fellow executives and brainstorm your data pitfalls, and then do the courtship dance with vendors who claim to have solutions to these pitfalls. Again, do not fall into the trap of the One True Solution. It doesn't exist.

As you work on your DLP method, you will see that many of your current solutions and/or their vendors already work towards securing your data...of course, because that, as I said, is the entire point of infosec. For instance, your vulnerability scanner already scans for removable storage devices (both currently inserted and having been inserted at any time). That's great, but it's asynchronous. Does the vendor have a real-time solution (agent or sniffer based) that does the same thing? You already have auditing in place to determine if a file's been touched. How about if it's been excerpted and transmitted without being edited? And so on. If your current vendors have add-ons that can fit your newly perceived needs, then that can perhaps save you money and implementation time.
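
The asynchronous-versus-real-time gap is easy to see in a toy sketch. This is illustrative Python, not any scanner's actual API; the device sets stand in for however your platform enumerates attached removable storage:

```python
def poll_for_changes(snapshots):
    """Asynchronous detection: diff scheduled snapshots of attached
    removable devices. `snapshots` is a list of sets, one per poll.
    Anything attached AND removed between two polls never shows up --
    the blind spot a real-time (agent- or sniffer-based) solution
    closes by reacting to attach events as they happen."""
    events = []
    previous = snapshots[0]
    for current in snapshots[1:]:
        events.extend(("attached", dev) for dev in sorted(current - previous))
        events.extend(("removed", dev) for dev in sorted(previous - current))
        previous = current
    return events
```

Poll snapshots of [nothing, a thumb drive, nothing] yield an attach and a remove; but if the drive comes and goes entirely between two polls, the scanner reports nothing at all, which is the whole argument for asking your vendor about a real-time option.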

One big problem with potential data leakage is that many businesses, to save money, don't issue their employees mobile phones but rather reimburse employees who use their existing phones for business purposes. However, in many cases, "business purposes" doesn't mean just calls; if an employee is using a smartphone, he or she is probably also downloading and responding to email, and possibly also VPN'ing into the network and accessing corporate resources. If you're not virus scanning and otherwise protecting that phone against theft and other compromise, then all the time and expense you've gone to in implementing disk and file encryption on the same employee's laptop is pretty much useless.

All of this is a lot to think about. The good news, especially if you are a smaller business, is that you don't have to think about it and implement it all at once. This is why you should always be spiraling back to your security policy to revisit your business's current needs. Each time, you can tighten up your data security a little more.

Saturday, March 12, 2011

the most important infosec component

Also posted in my Securiteam blog.

When I first started working in Information Security, the big "thing" was firewalls. It's probably hard to believe now, but back then, it wasn't simply a question of which firewall to install but rather whether to install one at all. I spoke to a lot of former sysadmins who had been repurposed, willy-nilly, as security engineers. They didn't know much about network security, but they did know that they probably needed to keep "bad stuff" out: hence, the firewall.

These days, if you are in charge of security for an enterprise, you don't ask yourself whether you should install a firewall; instead you're trying to figure out how many different kinds of intrusion prevention you can get away with on your budget, along with asset and vulnerability scanners, SIEMs, and on and on. Information security has been productized to the point where it's easy to forget the single most important infosec component in any business, and by that I mean the people who work for and with it.

Smart CEOs these days will say that their most important assets are their employees. That's very warm and fuzzy, but anybody who has been let go from a company for any reason not directly related to job performance will tell you that upper level management cares much more about the bottom line than about the inner workings of their employees' minds. I'm not crazy: a business has to make money, because that is the reason it exists. But businesses also have to realize that employees are, in fact, both assets and liabilities when it comes to that bottom line.

Consider this: every single one of your employees has a life outside his or her job. Mary is a devout Catholic who sings in her church's award-winning choir. Bill plays in a poker league on Thursday nights and weekends. George and his wife Tess, who work for different departments, sell Amway together. Jeannette saves up her paid time off to travel all over the world. And Jack? That kind of gothy-looking guy with the tattoos that you have working in the infosec department, the one who begs you to send him to SANS and Black Hat every year? Well, when you don't, he splits the difference and goes to LayerOne and ShmooCon and DefCon.

You can't control what your employees do in their spare time, nor should you. But if you think that they are not thinking about what they do in their spare time while they are at work, you are wrong, and that is what so many executives don't take into account when they are thinking about their company's security posture. The "rank and file" care about the company's bottom line insofar as it provides them with a paycheck, and most of the time, that is where their caring stops. They do not realize, because it is not part of their job to do so, that what they are thinking or doing at any given point could affect your business. You don't realize it either, and that's a problem, because it IS part of your job to know that.

If your business is subject to government or industry regulation(s), you very likely have a security policy. This policy defines physical and network assets, who has access to them, and some kind of vulnerability management and compliance schedule, at a minimum. You probably think that the "access" part takes care of intentional or unintentional abuse of your non-human assets by your human assets: they can't use the red stapler; they can't access the HR file server; they can't post to Facebook from the company network. Even if you can't stop them, they know from reading the policy that if they are caught doing any of those things, they could be punished, including losing their jobs.

Your employees are smart and innovative: that is why you hired them. They can, or think they can, outwit your automated security components to do what they want to do, and as long as they are also getting their jobs done, no harm no foul, right? Wrong: every minute a human asset spends doing something at work that is against your security policy is a minute of their salary, and, should it end up causing problems that need to be corrected, the salaries of other human assets. This leads in turn to the company's bottom line being adversely affected over time.

You might think that the obvious solution to this problem is to employ tighter controls and install more automated security components in order to force your human assets to adhere to your security policy. However, I am going to go out on a limb and say that your first step, when faced with employee non-adherence, is to revisit the security policy and determine how it can be brought in line with the reality of your employees' lives while still remaining in compliance with government and industry regulations.

Your employees fail to comply with your security policy, for the most part, not because they don't care but because they don't understand how it affects them. Given how smart they are (right?), if they don't understand this, it is because they've never had it explained to them in a way they can relate to. As an executive of the company, you have a responsibility here: to show your employees how they directly affect the amount of money in their paychecks, and to work with them so that the company, and they themselves, earn more rather than stealing from the bottom line.

Alice likes to post to Facebook on company time? Create a company Facebook page and put Alice and her posty friends in charge of it. Mary is spending too much time on choir-related activities at work? See if you can work her choir or a subset thereof into company events, to everybody's benefit. You're worried about Jack's possible hackerish activities? Send him as an official company rep to the conferences he already attends, plus the ones he wants to attend, and encourage him to share his own ideas for strengthening the security posture of your enterprise. All these things will cost money up front, but you will find that when your employees feel that they are being listened to and valued for who they are, those upfront costs will bring in more revenue for the company. Ask Google.

There is absolutely no way to completely automate security, because you can't control what is going on in the heads of your employees. But when you truly treat your employees as the assets you say they are, your security posture WILL improve.

Friday, March 11, 2011

recruiters not to love

Okay, I promise this blog is really about Infosec, not about social issues. But something happened today, and I think I really need to talk about it. I'm going to post it here but also crosspost and link to it elsewhere, because I think it's really important.

I'm looking for a new position at the moment. Looking for a new position in Infosec is not a slam-dunk at the best of times, but the field is so awesome that in my opinion, it really pays off. That said, I will be feeling much happier when I start a new job, because then I can feed my family and we can have insurance and all those other things that people like to have.

Today a recruiter in Connecticut contacted me about doing some contracting in Hartford. I live nowhere near there, and I'm not about to move, but I do have family around that area, so I could imagine making this work.

He told me that he'd offer me X amount. X would have been great, except that it came with no bennies, which made it a good deal less than my most recent position. So I told him that for Y amount where Y was 30K more than X, I'd talk, and I added "a girl's gotta eat."

So far, so good, right? I'm looking at a job that I would really enjoy, at a nice rate, in a geographical area that I wouldn't hate. I am all ready to talk to this guy so that I can begin making my Infosec magic again. Whee! A job.

He sent me back an email agreeing to the rate. I was just about to be really happy when I scrolled down a little and saw the really horrendous picture he'd attached to his email. And because I use Gmail, the picture was displayed inline, where I could not avoid seeing it.

The picture was of a girl in her teens or maybe her 20s...well, really, it's hard to tell, because the only part of her face you can see is her mouth. She's half naked; she's wearing a tank top, but it's pulled way up. Her lower body is...pretty much all bones. I mean, basically, she's a rib cage and hip bones and...organs...I guess. Her navel is pierced.

I saw this picture in my email and I stopped breathing. I really did. I started to gasp like you do when you're having an asthma attack, a phenomenon with which I am not unacquainted. Everything left my mind except the horror of this picture, which I will not visit upon you because of what it did to me. I will tell you, though, that he apparently found it on mywits.com, which is a site for funny pictures.

This picture wasn't funny. I only skimmed the site but I didn't actually see any pictures on it that I thought were funny, only pictures that appeared to be objectifying.

When I was able to breathe, I went looking for his recruiting company online. I tried to call the general number but nobody answered. So I sent him email asking him to have his supervisor call me. Instead, he called me. I asked to speak to his supervisor. He asked me what was wrong, and I told him that I was very offended by that picture and I wanted to speak to his supervisor or someone in HR.

He started to freak out and promise me that he just thought it was funny, that he sends "funny" pictures all the time, that I said "a girl's gotta eat" so he was sending me a picture of a girl who apparently didn't, and on and on and on. He was obviously so upset that I finally just said "I'm letting it go at this." That didn't stop him from apologizing though, and he went on and on and on some more.

Finally I just said, "Thank you very much. I am no longer interested in the position," and I hung up. Gradually, I stopped shaking. I took a long breath, the first in over 30 minutes, and I started working on this article.

Do I feel that I shot myself in the foot, by refusing to play nice and agree that it was funny? Hell to the no. Two years ago, I was doing contract work for another company for the DOE. I sat in an office, staffed with government workers and contractors, and I listened to them spew hate speech about a woman they worked with, and about a foreign national, and I didn't say a word. I am talking about really offensive language here, folks, not just "she's a bitch" or "he has brown skin". I didn't say a word because I needed the job. You know what? No job is worth putting up with that, even if it's not directed at me.

This stuff is not funny, no matter where you get it from. Anorexia is a disease that kills people, not something to make fun of. Emailing pictures of half-naked women to business associates is not appropriate no matter what your reasoning. I can't apply for a position under these circumstances because I will not WORK under these circumstances, and I'm ashamed that I ever did.

I am not going to reveal the name of this recruiter or his company. My guess is that he is young and inexperienced, and he really thought it was funny. In my experience, when I've told a man that his behavior offends me and he apologizes almost to the point of tears like that, he really means it (and I hate to say this, but my usual experience is that instead, the man tells me that there's something wrong with me for reacting negatively). Perhaps I'm just naive, but I think I'd rather stay that way than believe that the guy might have gotten off the phone and laughed his head off with his co-workers about it.

I didn't insist on speaking to his supervisor (and by the way, he didn't refuse, he just wanted to apologize first), because of that feeling that he was really sincere. But I did tell him that I couldn't imagine anyone thinking that the picture was funny, and I told him never, ever to do that again.

And now I think I'm gonna have a drink because...DAMN.

Tuesday, March 8, 2011

don't tell me i can't go there

I'd like to note that this is an opinion piece. That said, it HAS to be an opinion piece because there are no "hard" statistics on the subject that prove a point in any way. There is ONLY anecdotal evidence and conjecture. Ergo, this is MY evidence and conjecture.

I read an article a few days ago that really upset me. The reason that it upset me is that it was written by a woman who, like me, has children and loves them but also has a love and a need to work, to have a career, to not be a stay at home mom. Let me be clear: I don't have anything against stay at home moms. I do have something against people who don't want to work and who use their children as an excuse for staying home, but that's neither here nor there. My point is that I am not the kind of person who can find fulfillment in staying home with her kids, and so I don't choose to do so.

I discovered my career around the time I hit 30. I was very ill prepared for any sort of career, frankly, but I was in the right place at the right time and got hired to do technical support. This wasn't technical support of the "click on My Computer" variety, where you follow a script -- you had to be smart, you had to learn, and you had to think. And I did all these things. I was good at it.

I will bet I'd have been even better at it had I realized how good I was at math in school and pursued it. That said, I'd found my niche and I stayed in tech. I went from that company to an infosec vendor, and I realized I had a passion for Information Security. I learned everything I could.

There's one thing that really got in the way of my career, and it was the fact that I am a girl. Not only am I a girl, but I am a really girly girl. I do not look or act particularly smart. I'm cute and I'm sexy, and neither of those attributes come across well in the world of Information Security. I'm a dancing bear.

At any rate, I recently read this article by a woman who seems a lot like me personality-wise, and what she said is that women are, in general, not good at certain things, such as competition or higher math, and so they shouldn't pursue careers where they would have to be highly competitive or use higher math. And you know, she may be right...but there are plenty of women out there who ARE good at competing and with higher math, but who already have to fight harder than men with similar skills to be hired and to be taken seriously, and this woman's article DOES NOT HELP.

I am one of those girls who, early on, bought into the idea that because I was a girl and because I had big boobs, I was not smart. Seriously, for years I thought I was not very smart, which is hilarious because the evidence that I was, in fact, smart was all around me and I just ignored it because it must have been a fluke. I was good at math, so good at math that I should have been shot into a special class, but they didn't have those when I was a kid, and I actually thought I was BAD at math and stopped taking it the moment I could get away with it. The only thing that I did pursue was languages, and the fact that I was good at languages -- not just speaking or reading, but the technical, formal aspects of languages -- should have alerted somebody, but it didn't. Later on, in college, I astounded some of my professors with how good I was at formal systems, but then I got sick and I had to drop out, and there went any possibility that I'd figure out how good I was.

Five years before I got into tech, I picked up a book on formal systems and idly leafed through it, and then burst into hysterical tears because I FINALLY GOT IT: I realized that I was really, really good at math. I was also pregnant with my second child and had no ability, no opportunity to DO anything about it. Since that time, I have wanted, desperately, to go back to school and do something about it, but for various reasons, and I am not going to recount them all here, I haven't been able to.

Still, I am very good at what I do, much better than I should be considering my lack of formal training. Because I came at tech "sideways", as I like to put it, I very much think outside the box. Most of the time I don't even know there's a box. I'm not saying that I am the best in my field or anything like that. But what I am saying is that I am a really good bet to hire, because I love to learn, I love challenges, and I hate being unbusy.

I've never had anyone complain about my technical ability. But I have seen a lot of doors slam in my face nonetheless. I know my personality is way out there and sometimes hard to take, but I will submit that if my personality did not come with big tits and a high voice, it would be much easier to "take".

Because here's the thing: I like being a girl. I love dressing up in outfits that accentuate how cute I am, and wearing makeup. I love to party, particularly if karaoke is involved. I have a lot of interests outside tech, most of which could be considered girly. I am not going to hide or deny that stuff, even though apparently it means that people -- i.e. many men in tech -- can't take me seriously because they can't fit me into a well-defined pigeonhole.

But why should I make it easy for them? Why should I fit into one of their boxes? Why should I accept someone else's definition for what I, as a woman, should be? The answer is that I shouldn't and I won't. If you give me a job to do, I will do that job well, and THAT IS WHAT COUNTS in my field or in any other.

And the last thing I need is another woman telling me that I can't, or shouldn't, try to do what I want to do and am good at because I am a woman.

Monday, June 28, 2010

google's godlike power

I like Google. I use Gmail, Docs, Apps, Reader, Calendar, Chrome...I'm a GooGrrl through and through. I even have an Android phone. I generally think it's funny when people complain about Google doing this or that "evil" thing, such as the flap a few months back about the Aurora hack and Google's role in it. Corporations exist to make money, and Google is no different. The fact that they are as un-evil as they are is pretty impressive. Also, I love my phone.

A few days ago, Google removed two apps from its Android Market and also, more intrusively, removed the installed apps from the phones of any users who had installed them. This move is somewhat similar to when Amazon removed copies of two Orwell works from Kindle e-readers, which caused a HUGE flap among Kindle owners. Never mind that the works had been pirated; it was a privacy violation! Interestingly enough, I haven't heard the same outcry over Google's actions with the two apps, both of which were proof-of-concept apps from a security researcher. This may simply be because Google's Terms of Service are easier to understand than Amazon's; I don't know, because I don't own a Kindle (and my CLIQ can't run the new Kindle Android app yet).

Or rather, I haven't heard much of an outcry. The Register ran an article in which writer Cade Metz compared the Google pull to the one from Amazon. As the article points out, Apple has the same ability to pull installed apps from the iPhone, but if that ability has ever been used, nobody has said so. Of course, Apple also has more of an application vetting process than Google does.

So was Google evil for pulling the apps or not? On the one hand, I'd like to think that my phone and all its apps and data are sacrosanct. After all, I would be mightily pissed if Microsoft or my ISP started removing apps that they didn't approve of from my desktop or notebook...uh...not that I have any such apps installed! Right. On the other hand, mobile phones, no matter how cool, are not exactly analogous to computers in form, function, or, apparently, terms of service.

Much more potentially sinister, says the security researcher whose apps were pulled, one Jon Oberheide, is the fact that Google can install apps at will. Not because Google might do so -- after all, anything Google might install on your phone in their infinite wisdom could only be for the greater good -- but because the INSTALL_ASSET message contains no source authentication. But don't take my word for it; read his article yourself.

So, in other words, the most evil thing about this latest Google issue is not the power that Google wields nor the fear that they might use it for evil: it's the fact that they are wielding it ineptly. Clean up your act, Google, so I can once more feel confident about the virgins I sacrifice to you. (If anyone has any virgins they're not using, please send them my way: my supply is running low.)

Sunday, June 27, 2010

gee it's been a while

I know, right? Basically, right after my last post, I started a new job. I was working on a DoD contract, and even though I didn't really have access to anything all that exciting, I just felt constrained not to talk about anything Infosec related. However, about six weeks ago I was hired by the coolest company on earth, and while I must provide the disclaimer that in this blog I in no way speak for my employer, I do feel that I can talk about my profession once more.

So what the hell, I'll say something controversial. As probably everyone knows by this point, a hacker named Andrew Auernheimer, also known as Weev, was arrested after he and his security group, Goatse Security, revealed some flaws in AT&T's website.

Now, Goatse is not exactly the most...dignified group of people, which you can tell just from the name: it refers to a widely distributed pornographic image, a stylized version of which is the group's logo. On the other hand, they contend that they informed AT&T of the security flaw back in March, were ignored, and only then published the exploit and the data.

Now Weev has been charged not only with the exploit, but with possession of pretty much all the drugs in the world. And I have to echo BoingBoing in wondering if this is just spite on AT&T's part. It's also possible that the police and AT&T know they can't really make the hacking charge stick -- especially if there's proof that Goatse contacted AT&T about the issue well before publishing -- so they are finding anything they can to nail Weev with. This isn't necessarily spite, but it's definitely dirty pool.

Honestly, AT&T has no excuse for all the negative press, especially security-related negativity, that they are generating lately. Anybody who's been in the Infosec business for any length of time once viewed AT&T as one of the authorities on the subject. Because AT&T is the exclusive carrier for Apple's mobile devices -- in itself not the best idea, in my opinion -- they need to be much more serious about patching exploits in a timely fashion. Given how popular the iPad is and that all the cool kids rushed out to buy one (I'm not a cool kid), it is in fact reprehensible on their part not to patch such flaws promptly.

AT&T has an easy target in Weev: he's a self-proclaimed drug user, and the picture of the old-time "scruffy hacker". He's not an attractive champion in any sense. But at the same time, AT&T has totally lost its own white knight status with their attitude. Weev's in trouble now no matter what with this new drug charge, but here's hoping that AT&T is found more to blame in the case of the exploit than Goatse.

Saturday, September 26, 2009

Go Conficker...

A little less than a year ago, Microsoft announced a critical vulnerability (MS08-067) in its Server service. The vulnerability involved specially crafted Remote Procedure Call (RPC) requests, which the Server service would not handle correctly (i.e., drop). The crafter of these requests could use them to execute arbitrary code on the server, such as creating or deleting users or changing security policies.

I'd call that a problem. And it wasn't merely a potential problem: the exploit for this vulnerability is known worldwide as the Conficker worm, first detected not long after Microsoft's initial security bulletin. Since its inception, Conficker has wreaked havoc on government, business, and home computer systems all over the world, and the investigation to discover its perpetrators is still ongoing.

Our best guess is that the perpetrators are based in Ukraine, since Variant E of the worm downloads software from a server hosted there. However, they have not yet been fingered, and in the meantime, they continue to control countless infected computers.

Conficker is easy to miss because nothing splashy happens. The most obvious signs of infection are user accounts being locked out, security policies changing, or automatic updates no longer running. In the meantime, Conficker has been busy on your network or Internet connection: downloading updates for itself and new malware, modifying your registry, resetting restore points, and so on.

There's a Microsoft patch that you can install, and all the major AV companies are able to detect and remove the worm. But despite these facts, Conficker continues to propagate, almost a year after its release, and the perpetrators have still not been found. One reason is that Conficker is constantly being updated and can disable some AV solutions before they have a chance to detect and remove it.

There is one very easy way to check to see if you are infected with Conficker: go to this page. If you cannot see all six logos, you may very well have been infected.
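The eye-chart trick works because Conficker blocks DNS lookups for well-known security vendors, so a page that pulls images from those domains shows gaps on an infected machine. Here's a minimal Python sketch of the same idea; the hostnames are illustrative examples of vendors the worm targets, not the chart's exact set:

```python
import socket

# Conficker is known to block DNS lookups for major security vendors.
# These hostnames are illustrative examples, not an authoritative list.
SECURITY_HOSTS = [
    "www.microsoft.com",
    "www.symantec.com",
    "www.f-secure.com",
]

def resolvable(host):
    """Return True if the hostname resolves via the system resolver."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

def dns_looks_filtered(results):
    """Given a {host: resolved-ok?} map, flag likely filtering when
    any well-known security host fails to resolve."""
    return any(not ok for ok in results.values())
```

On a clean machine, `dns_looks_filtered({h: resolvable(h) for h in SECURITY_HOSTS})` should come back False; True means some security sites didn't resolve, which is worth investigating (though an ordinary network outage can produce the same result).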

Wednesday, September 23, 2009

XSS Is Alive and Well

Not all that long ago, I had an interview where I was asked what cross site scripting was. Now, the thing is, I know very well what it is, and in fact, while I was working for my former employer, I wrote a white paper on the subject that was very widely used by their support and systems engineering personnel.

But have you ever been in the position of knowing something, yet when someone asks you about it, you just go cold? That's what happened to me in the interview. I stammered out a reply that was utterly wrong, and I knew it was utterly wrong, and I've been kicking myself ever since.

In the meantime, on a technical email list, one of my colleagues suggested that cross site scripting -- or XSS -- is no longer that much of an issue (because everyone knows about it and has taken precautions). That position is naive at best, though it's true that XSS is no longer the big deal it was a couple of years ago. However, it's still very much alive and well as a security vulnerability.

One of the problems in understanding XSS, or cross site scripting, is that the term itself is confusing. Originally it meant that a malicious web site could load another web site into a frame or a window, and then use scripting -- usually JavaScript -- to read and/or write data on that site (which is actually close to what I told my interviewer). However, later on, the term changed to mean "code injection" of scripting into a web page.

There are different kinds of XSS vulnerabilities, but the most typical type is where a user is enticed to click on or otherwise activate a URL that includes scripting language. This isn't that hard to do, because users, especially those who are less technically sophisticated, often don't look at where a URL actually points before they click on it.

Further, and unbeknownst to many users, it's possible to encode an object on a web page with a malicious URL such that just by viewing the page, the user is activating that URL. There's no way for the user to tell what's happened, and the only way to prevent it is to lock down the user's web browser such that it will not execute any scripts it finds on a web page. The problem with this is that locking down the browser to this degree will cause media-rich web sites to malfunction (from the user's point of view). In other words, as has always been the case, security is often sacrificed for ease of use.
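To make the injection concrete, here's a minimal Python sketch of the classic server-side mistake and its fix: echoing user input straight into a page versus escaping it first. The function names and payload are my own, purely for illustration:

```python
import html

def render_greeting_unsafe(name):
    # Vulnerable: user input is pasted straight into the page, so a
    # name like "<script>...</script>" becomes running code.
    return "<p>Hello, " + name + "!</p>"

def render_greeting_safe(name):
    # Escaping turns markup characters into entities, so the browser
    # displays the payload as text instead of executing it.
    return "<p>Hello, " + html.escape(name) + "!</p>"

payload = "<script>alert('xss')</script>"
```

With the unsafe version, the payload survives intact and the victim's browser runs it; the safe version renders it as harmless text (`&lt;script&gt;...`). Escaping output is exactly the precaution that web developers were (and still are) skipping.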

XSS vulnerabilities have been exploited since the advent of the World Wide Web, but XSS became a really hot topic in 2005, which is when my former employer asked me to write the white paper about how one of their products addressed the issue. Vulnerability scanning for XSS was all the rage, and web site developers were scrambling to fix their HTML and scripting so that code injection could no longer work. Gradually, things calmed down to the point where my colleague could declare, on a mailing list full of security geeks, that it was no longer an issue. Too bad he was wrong.

The other day, one of my friends commented on a LiveJournal™ post I'd made, to the effect that it appeared my post had been hacked. He directed me to a news article on LJ that I'd missed: http://news.livejournal.com/117957.html . You can read the article for yourself, but basically what happened is that someone had managed to infect a Flash™ file with a malicious URL. Anybody who viewed the file would have their latest (at the time) LJ post altered: the infected file would be inserted, any tags or other extra info would be deleted, and (usually) the post's security level (i.e. public/friends only/private, etc.) would be altered.

Sure enough, my post was "infected". My tags and location were removed, a formerly "friends only" post (the default for my journal) was now public, and the infected media had been inserted. However, the site already knew about the problem, and so it had turned off media embedding so that no further users would be affected, and issued a bulletin explaining what had happened, so all I saw were the "boxes" mentioned in the news article, not the infected media. At this point, I have no idea what the media looked like or who I "caught" the infection from, but as it's been contained and fixed (and my post is now "friends only" again), I'm only mildly curious.

The LiveJournal™ problem was caught and mitigated quickly, and while certainly there was a breach of privacy (secure entries becoming public, email addresses being mined), the effects were relatively minor. But it should be pretty obvious that XSS is far from "no longer an issue", given what happened.

Thursday, September 17, 2009

this just in

So, in case you're not aware of it, there's an online petition to appoint Peiter Zatko to the post of Cybersecurity Chief (also known as the Cybersecurity Czar). As soon as I heard of the petition, I clicked through to sign it, even though I'm not sure what, if any, effect the petition will have on Mr. Obama's decision. The fact of the matter is that I admire Mr. Zatko so much that I couldn't fail to sign it.

Mr. Zatko, who is currently 38 years old, has been a researcher in the field of network security for as long as the field has been extant. In 1995, he published the seminal white paper on the buffer overflow attack, "How to Write Buffer Overflows", which remains an important tutorial on the topic today for hackers of all varieties. Choosing to use his abilities to help rather than harm the government, he was one of several hackers to testify on security weaknesses before a Senate committee in 1998. Two years later, he met with then-President Clinton at a summit on network security. Currently, he is a division scientist at BBN, who knows a good thing when they see it (they wooed him back again after losing him in the '90s to @stake, which was essentially the L0pht gone corporate).

I believe that Mr. Zatko could ably fill the position of Cybersecurity Czar; in fact, he is uniquely suited to it precisely because of his background in grey hat hacking. My feeling is that if Mr. Obama is serious about network security, he will appoint Mr. Zatko, or someone very much like him, to this very important post.

Go for it, Mudge!

Wednesday, September 16, 2009

IETF Publishes a Draft on Remediating Bots in ISP Networks

The Internet Engineering Task Force just published a draft on how ISPs can help to remediate bots on users' systems or home networks. Having read the draft, I have a couple of thoughts on it, which I will present after a quick summary of the draft.

Summary of Draft


In short, the draft covers the following subjects:
  • Maintaining privacy of the user

  • Non-interference with legitimate traffic

  • Recommendation for types of tools

  • Challenge of "definitive vs. likely" in informing user

  • Dealing with user complaints

  • Sharing of bot information with other ISPs

  • Use of Honeynets

  • Informing users:

    • Email

    • Telephone

    • Postal Mail

    • "Walled Garden"

    • IM

    • SMS

    • Web browser message

  • Remediation

  • Guided Remediation


So for me this actually brings up a couple of questions. First of all, who's responsible for a bot on the network? And second, what is actually going to work in a situation like this?

Responsibility


If there is a bot -- or really, any malicious piece of code -- on a user's personal system, who is responsible for discovering and/or remediating it? Unfortunately, the answer isn't clear-cut. The user owns all of his computers and networking equipment, up to and in some cases including the DSL or cable modem. That said, the ISP owns the actual connectivity. The ISP will also get the black eye if malicious packets come through its networks, for example, if computers on the ISP's networks are used in DDoS attacks. The hope is that the ISP would find and remediate the malicious code before that happens, but how far can (and should) the ISP go in the attempt to do so?

What Actually Works


The draft, in discussing options for informing users, talks about a "walled garden". What a "walled garden" does is place the user's account in some degree of isolation from the rest of the network, cutting off access to some or all services. The presumption is that the user will notice that his access is cut off and will contact the ISP, initiating a dialog that can lead to remediation. The draft mentions that the walled garden can persist until the problem is remediated, or it can be lifted as soon as the user has been informed of the malicious code.

In my opinion, the walled garden should actually serve the following multiple purposes:

  • Inform the user of a potential bot or malware based on the ISP's scanning (etc.) activity;

  • Remain as a safety net while the ISP and the user dialog about further malware scanning and remediation, and begin that process;

  • If the malware is found to actually exist, remain as a continued safety net until everything has been done, by both the ISP and the user, to remediate the situation.

In other words, keep the user's account isolated to at least some degree until the problem is fixed. Obviously, this is not going to be something that users will necessarily like, especially if they don't understand what's going on. And here is where I think ISPs need to take more responsibility, from the start, when users first sign up for Internet service.
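To make the lifecycle I have in mind concrete, here's a toy Python sketch of the extended walled garden; the state names and methods are my own invention for illustration, not anything specified in the draft:

```python
from enum import Enum, auto

class AccountState(Enum):
    NORMAL = auto()
    WALLED = auto()  # isolated; only remediation resources reachable

class WalledGarden:
    """Toy model of the extended walled-garden lifecycle: isolate on
    detection, stay isolated through remediation, and release only
    once both the ISP and the user agree the malware is gone."""

    def __init__(self):
        self.state = AccountState.NORMAL
        self.user_notified = False

    def bot_detected(self):
        # Step 1: cut off general access and inform the user via the
        # garden's landing page.
        self.state = AccountState.WALLED
        self.user_notified = True

    def remediation_confirmed(self):
        # Final step: release only from the walled state, once cleanup
        # has been verified by both parties.
        if self.state is AccountState.WALLED:
            self.state = AccountState.NORMAL
```

The point of the sketch is the ordering: there is no path back to NORMAL except through a confirmed remediation, which is exactly the "continued safety net" I'm arguing for above.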

In general, ISPs sign users up for Internet service, and then they just let them go. For users like me, who know what they're doing on a network and just want to be left alone, this is a pretty good option. But any user can become the victim of malicious code, no matter how sophisticated they are, and I think that ISPs are letting users down when they don't try to educate them about malware and what it can do. Just providing users with a CD of "connection" software, which may or may not contain AV and antispyware tools at a minimum, is not enough to fulfill an ISP's responsibility to keep bots and malware from reinfecting the Internet from its users' machines.

That's why I think the extended walled garden approach is necessary, combined with the ISP stepping in and helping the user confirm the presence of the malware and then help them remove it. But I also think that the ISP has to take more responsibility up front, to help the user understand what malware is, what it can do, and what to do to mitigate possible threats. In other words, I like the draft as far as it goes, but I think that it doesn't go far enough.