Tips for an Information Security Analyst/Pentester career - Ep. 59: Blue team action
What is being a blue teamer all about?
I don't like theoretical blah blah, so in this post I'm trying to give you a realistic flavor of what a blue teamer's work is all about.
I'm going to use the AlienVault USM Anywhere Online Demo to show you what being a blue teamer feels like, at least in part.
So, what does a blue teamer do?
They check server logs for interesting patterns and, when a single event is found to occur multiple times, the system generates an alarm.
In other words, a failed login per se doesn't represent a relevant threat but, if we have 300 such events in, say, 90 minutes, we might be dealing with something we want to look at much more thoroughly.
Before getting into alarms it's important to talk about events, directives and correlation.
A single event per se isn't very important but, when you start seeing a significant number of similar events, you might be looking at a specific attack pattern. Directives are rules defining what to do when a certain event is detected.
A correlation engine analyzes a series of logs and, if they match a certain rule within a specific time frame, it creates an alarm with a specific priority.
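Conceptually, you can picture it with a minimal Python sketch like the one below. This is just my own illustration of the idea, not how AlienVault actually implements it: a directive defines an event type, a threshold and a time window, and the engine raises an alarm when enough matching events fall inside that window (reusing the "300 failed logins in 90 minutes" example from above).

```python
from datetime import datetime, timedelta

# Hypothetical directive: 300 failed logins within 90 minutes -> high-priority alarm
DIRECTIVE = {
    "event_type": "failed_login",
    "threshold": 300,
    "window": timedelta(minutes=90),
    "priority": "high",
}

def correlate(events, directive):
    """Return an alarm dict if enough matching events fall inside the time window."""
    matching = sorted(e["timestamp"] for e in events
                      if e["type"] == directive["event_type"])
    # Slide a window over the timestamps and count how many events fit inside it
    for i, start in enumerate(matching):
        in_window = [t for t in matching[i:] if t - start <= directive["window"]]
        if len(in_window) >= directive["threshold"]:
            return {"priority": directive["priority"],
                    "count": len(in_window),
                    "first_seen": start}
    return None  # no alarm: just background noise

# Simulated stream: one failed login every 15 seconds
events = [{"type": "failed_login",
           "timestamp": datetime(2019, 1, 1, 8, 0) + i * timedelta(seconds=15)}
          for i in range(400)]
print(correlate(events, DIRECTIVE))
```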
Let's go to Activity/Alarms from the dashboard in AlienVault to see what's going on.
You might notice all the alarms shown in this demo are fake and are related to virtualized systems (you see VMware, Hyper-V and AWS environments).
You'll never normally see such macroscopic situations in a real-life environment, but there are nonetheless some more interesting alarms we might want to investigate.
Lots of alarms might be triggered by automated vulnerability scans or by scheduled tasks.
One pretty realistic example we might want to look at is related to repeated login failures.
I'm not normally very worried about those, but there are certain cases where you want to take a closer look.
Let's now analyze a specific example from the demo.
I analyze the details by clicking the alarm.
You notice this alarm was generated by a series of logs related to the same source.
The first thing we want to do is analyze the source and destination IP addresses.
I wrote a bash script to do that, called ipchecker.bash, but you can also use OTX (Open Threat Exchange), a threat intelligence database developed by AlienVault, if you have a subscription for it.
Clicking Look up in OTX doesn't tell us a lot.
Therefore, I run my script.
In its latest version, I added VirusTotal to the sites I normally use to check IPs, as it's a reference I use pretty much every day.
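My actual script is written in bash, but the core idea is easy to sketch in Python. The snippet below is an illustration, not ipchecker.bash itself: the endpoints are the public VirusTotal v3 and OTX APIs as I understand them (field names may differ slightly), and VT_API_KEY / OTX_API_KEY are placeholders for your own keys.

```python
import os
import requests

# Hypothetical stand-in for ipchecker.bash: query VirusTotal and OTX for an IP's reputation.
# VT_API_KEY and OTX_API_KEY are assumed to be set in your environment.
VT_API_KEY = os.environ["VT_API_KEY"]
OTX_API_KEY = os.environ["OTX_API_KEY"]

def check_ip(ip):
    # VirusTotal v3: how many engines flag the address as malicious/suspicious
    vt = requests.get(f"https://www.virustotal.com/api/v3/ip_addresses/{ip}",
                      headers={"x-apikey": VT_API_KEY}).json()
    stats = vt["data"]["attributes"]["last_analysis_stats"]

    # OTX: general indicator info, including country and related pulses
    otx = requests.get(f"https://otx.alienvault.com/api/v1/indicators/IPv4/{ip}/general",
                       headers={"X-OTX-API-KEY": OTX_API_KEY}).json()

    print(f"{ip}: VT malicious={stats['malicious']} suspicious={stats['suspicious']}, "
          f"country={otx.get('country_name')}, OTX pulses={otx.get('pulse_info', {}).get('count')}")

check_ip("198.51.100.23")   # placeholder source IP taken from the alarm
```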
The source IP address turns out to be located in Botswana.
Talos tells us its reputation is poor, but that doesn't necessarily mean an attack is going on.
As for the destination IP, it looks like the legitimate Office 365 portal, even though VirusTotal detects some typosquatters.
At the end of the day, we can safely assume it's a false positive.
Someone got locked out of his/her Office 365 account.
We might additionally contact the client to let them know about the issue and add their insights to our analysis, but I'm not hugely concerned about this type of event.
I start being very concerned when, analyzing logs, I find that the Administrator account, or the SID S-1-5-18 (a service account used by the system itself), is involved.
Another thing that raises a red flag for me is finding a code such as 0xC0000072 in the description (user logon to account disabled by administrator).
In fact, this isn't very frequent and might mean someone got hold of an old account and tried to use it to log in (a former disgruntled employee, maybe?).
The codes you find in raw logs are cryptic and not very human-friendly.
They're just a bunch of hex values, but each of them has a very specific meaning.
I use several references to interpret them, and one of my favorites is this.
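To give you an idea, here's a small lookup table for some of the sub-status codes you'll run into in Windows failed-logon events (event ID 4625). The code-to-meaning mapping comes from Microsoft's documentation; the snippet is just a quick way of decoding them.

```python
# Common NTSTATUS sub-status codes seen in Windows 4625 (failed logon) events
LOGON_FAILURE_CODES = {
    0xC0000064: "User name does not exist",
    0xC000006A: "User name is correct but the password is wrong",
    0xC000006F: "User tried to log on outside authorized hours",
    0xC0000070: "Workstation restriction violation",
    0xC0000071: "Password has expired",
    0xC0000072: "User logon to account disabled by administrator",
    0xC0000193: "Account has expired",
    0xC0000234: "Account is locked out",
}

def decode(sub_status):
    """Translate a raw hex sub-status (e.g. '0xC0000072') into something readable."""
    return LOGON_FAILURE_CODES.get(int(sub_status, 16), "Unknown code: check a reference")

print(decode("0xC0000072"))  # -> User logon to account disabled by administrator
```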
Of course, I analyzed a very macroscopic and clear-cut alarm, but real-life scenarios can be much blurrier, and that's why being an analyst is hard.
If you don't have an eye for detail, you can easily overlook things in the massive amount of information you have to sift through every day.
When that happens, a breach is just around the corner.
Another problem with blue team work is the "Chicken Little-ish" effect (quoting John Svazic).
What I mean is that sometimes you're afraid of wreaking havoc and upsetting the client, for fear of getting in trouble.
Essentially, you hold off on raising an alarm, even in the presence of potential red flags, until you gain additional information; but sometimes that turns out to be too little, too late.
That's a real problem.
I'd rather talk to the client and ask them for any insights they might have than scare them off altogether.
Sometimes you might find that a bunch of alarms were generated because a specific server was down, for example.
This is something that, if your company provides managed services, only the client might know, as it's their network in the end.
The biggest problem for blue teamers is telling real alarms from noise.
For example, when I started my current job monitoring AlienVault, one day I noticed a bunch of juicy alarms for a customer: XSS, SQL injection, brute-force attacks... action, finally!
I was all excited thinking: "This is the time".
Then my boss told me it was the result of an automated vulnerability scan.
So now I know: it's not a real thing.
The problem is: you can't trust that 100%.
Maybe one day there'll be something real behind that noise and that's when you're compromised.
What does a shooter do to try to escape?
He/she blends into the crowd of people running away.
You don't notice a shooter if he/she's amidst a crowd.
That's what a real threat might do.