What is a Red Team?
Finding the cyber security vulnerabilities before the bad guys do.
‘Red team’ activities are concerned with offensive security exercises, e.g. trying to gain access to an organisation, a network or a system. These are carried out through a number of means; those you have probably heard of include:
All of these will almost certainly make use, to some extent, of Open Source Intelligence (OSINT), which concerns gathering information about the target from free resources, such as social media accounts, news reports and public records (take a look at our article on social engineering for more information). This may be achieved by physical reconnaissance or by looking online for data about the target, which can then be used to identify vulnerabilities in order to try to gain access.
Let’s delve a bit deeper into what these methods involve, so you can see how they could benefit your organisation:
A vulnerability scan (vulnerability assessment) is a very high-level test which doesn’t go into as much detail as a penetration test. It’s the equivalent of a burglar trying the doors and windows on a house to see if they’re open – and then not going into the house (which would be a penetration test).
This type of scan identifies where an application, website or other system is vulnerable, but it doesn’t tell you what an attacker could do by exploiting those vulnerabilities.
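The burglar analogy can be sketched in a few lines of code. This is a minimal illustration of the “trying the doors and windows” idea: probe TCP ports and report which ones accept a connection, without attempting to go any further. The host and listener used here are illustrative assumptions for a local demonstration, not part of any real engagement, and a real vulnerability scanner does far more (service fingerprinting, known-CVE matching and so on).

```python
# Minimal sketch of the first step of a vulnerability scan: check
# whether TCP "doors" are open, without exploiting anything behind them.
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising
        return s.connect_ex((host, port)) == 0

# Demonstrate against a throwaway listener on this machine.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # OS picks a free ephemeral port
listener.listen(1)
open_port = listener.getsockname()[1]

print(port_is_open("127.0.0.1", open_port))  # True: this "window" is open
listener.close()
```

A penetration test, by contrast, would go through the open door: connect to the service, identify it, and attempt to exploit it to show the real-world impact.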
A common way of testing websites and web applications is to run a penetration test. This is where ethical testers—i.e. people with prior written permission from an organisation—run tests to see if they can find vulnerabilities, and find out what would happen if those vulnerabilities were exploited.
Penetration tests can take a number of forms:
Typically, the testers will provide a report documenting their findings, and the organisation being tested will then fix any issues found by the testers.
Much like taking your car in for an MOT and service, penetration tests should be run on a regular basis, because new vulnerabilities, including zero day threats, are constantly being discovered.
The ‘attacking’ team will make use of social engineering (in the context of information security, this refers to psychological manipulation of people into performing actions or divulging confidential information) as part of their efforts to gain access to a building or premises.
Physical testing is typically engaged by senior management to assess processes—such as visitor registration, tailgating, signing in, staff challenging non-wearers of passes etc.—to see how far a potential intruder could get into a building.
These tests may have a specific objective, e.g. to access a specific server in a data centre, to place a keylogger on a desktop PC to try to capture passwords, or to install a rogue Wi-Fi access point to capture network traffic.
The intention of these tests is to identify weaknesses in policies, processes, procedures and training, so they can be addressed, and improvements made.
Red-teaming is often considered the highest standard of threat emulation and is suited to organisations that have an active security programme and are looking to validate the effectiveness of their approach and the alertness of their defensive capabilities.
Essentially, a team of offensive security professionals is engaged to perform a specific task, be it compromising a network, accessing a specific file and taking a copy, or gaining access to an individual’s business emails. Typically, an objective is specified and the team’s creativity is unleashed (within limits, of course). This more closely simulates what a genuine attacker would do: explore and search for the easiest way into a target, using their skills to create opportunities where none currently exist.
Regardless of whether the team meets the objective, PGI consultants will explain what they did and how it was achieved. This can be compared with what the organisation’s defences actually detected, to ensure that logging and monitoring levels are sufficient to identify an attack in progress and to prevent a genuine intruder from taking a similar approach.
One red-teaming technique involves the delivery of a special parcel to the building—marked private and confidential—addressed to a fictitious employee. Within the package is a Wi-Fi hotspot and 4G modem along with batteries, allowing the red team to attack the wireless network without entering the target building. They can then leverage this capability for further attacks, such as gaining access to the human resources system and adding a team member to the database as a new employee, who is then able to enter the site after being checked by security. This member of the red team poses as a member of the IT team and approaches users, asking them to print out a file that he needs. Once he has obtained a hard copy of the document, he can walk out of the building with the file, having completed his assignment.
Red team activities can identify weaknesses and vulnerabilities in your physical processes and defences, and in the controls you have in place to protect your online systems. It is better to find these weaknesses yourself than for an attacker to find them, because you are then in a position to put better defences in place, enhancing your controls and protecting your data.
The threat landscape changes regularly and new methods of attack are constantly being developed, so it is important that you test your systems regularly.
For more information on how PGI can help you find your vulnerabilities, please contact us.