Cyber criminals have many tricks up their sleeves when it comes to compromising sensitive data. They don’t always rely on system vulnerabilities and sophisticated hacks; they’re just as likely to target an organisation’s employees.
The attack methods they use to do this are known as social engineering.
In this blog, we explain how social engineering works, look at common techniques and show you how to avoid social engineering scams.
What is social engineering?
Social engineering is a collective term for ways in which fraudsters manipulate people into performing certain actions.
It’s generally used in an information security context to refer to the tactics crooks use to trick people into handing over sensitive information or exposing their devices to malware.
This often comes in the form of phishing scams – messages supposedly from a legitimate sender that ask the recipient to download an attachment or follow a link that directs them to a bogus website.
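A common giveaway is that the link's visible text and the domain it actually points to don't match. As a minimal illustration (the trusted-domain list and sample URLs below are hypothetical), Python's standard library can extract a link's real hostname and flag anything outside a set of expected domains:

```python
from urllib.parse import urlparse

# Hypothetical list of domains an organisation actually uses
TRUSTED_DOMAINS = {"paypal.com", "netflix.com"}

def is_suspicious(url: str) -> bool:
    """Return True if the URL's real hostname is not a trusted
    domain (or a subdomain of one)."""
    host = (urlparse(url).hostname or "").lower()
    return not any(
        host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS
    )

# A link that displays 'paypal.com' can still point elsewhere:
print(is_suspicious("https://paypal.com.example-login.net/verify"))  # True
print(is_suspicious("https://www.paypal.com/signin"))                # False
```

Note that the check keys on the registered domain at the end of the hostname, which is why `paypal.com.example-login.net` is flagged even though it starts with a trusted name.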
However, social engineering isn’t always malicious. For example, say you need someone to do you a favour, but you’re unsure that they’ll agree if you ask them apropos of nothing.
You might grease the wheels by offering to do something for them first, making them feel obliged to say yes when you ask them to return the favour.
That’s a form of social engineering. You’re performing an action that will compel the person to do something that will benefit you.
Understanding social engineering in this context helps you see that social engineering isn’t simply an IT problem. It’s a vulnerability in the way we make decisions and perceive others – something we delve into more in the next section.
Why social engineering works
Think of the human brain as a security network and its susceptibility to being fooled as a system vulnerability. That makes social engineering the exploit that fraudsters use to take advantage of that vulnerability.
But instead of malware injection or credential stuffing, criminals use rhetorical devices – ways of speaking that persuade us to follow their direction.
For an idea of how they do this, let’s take a look at Robert Cialdini’s six principles of persuasion:
Reciprocity
This is the notion that, when you do something for someone, they feel obliged to return the favour.

Cialdini uses the example of a waiter or waitress in a restaurant giving you a small gift with your bill, such as a mint or a fortune cookie.
This gesture has been proven to increase the tip customers leave by as much as 14% – and when the item is implied to be a special reward (“For you nice people, here’s an extra mint”), the tip increases by 23%.
Reciprocity can be particularly dangerous from a cyber security perspective because it shows how rarely we think about the motives behind supposedly generous acts – or, if we are aware of them, how we stick to our social obligations anyway.
For example, say that a colleague in your workplace offers to take a task off your hands.
When you receive an email from them a few days later asking for something in return – access to a file they’re not supposed to have – you may feel tempted to comply even if you know it’s against company policy.
This can be even more insidious if, due to your gratitude for their initial help, it never crosses your mind that they shouldn’t access this information.
Such requests are especially hard to guard against, because the victim never consciously registers that they’re illegitimate.
Scarcity
This principle states that people are likely to want something if they know there’s a finite supply.
It works particularly well when the person or organisation providing the service announces a reduction, emphasising how scarce the service is.
For example, when British Airways announced that it would be cutting back on its London–New York Concorde service due to a lack of customers, ticket sales jumped.
Nothing about the service had changed, nor had the price dropped. British Airways hadn’t reinforced the benefits of flying by Concorde or announced that it would be stopping the service altogether.
But what it did was imply that the service might not be available in the future.
This technique can also be seen when organisations market their product as “while stocks last”. The aim is to create a sense of urgency, forcing people to act now for fear of missing out.
You’ll also see it in scams that offer similarly great deals. Victims rush to make their purchase assuming the deal won’t be there if they wait too long.
Authority
This is the concept that people trust experts in their fields – particularly when they can back up their knowledge with evidence.
Cialdini notes, for example, that we’re more likely to follow a medical professional’s advice if we’re aware of their credentials.
By highlighting their expertise – whether that’s by displaying their qualifications on the wall, referring to themselves as ‘Doctor’ or listing their professional experience – they assure the patient that they are trustworthy.
This technique is used in almost every successful phishing campaign, with scammers claiming to be a trusted figure – whether that’s, for example, the government or a popular online service such as Netflix or PayPal.
Commitment and consistency
This principle exploits people’s unwillingness to be hypocritical. The social engineer nudges the victim into a seemingly harmless opinion or act, then uses that logic to push them into a larger, more consequential position.
Cialdini cites the example of homeowners who had agreed to place a small postcard in the front windows of their homes that supported a Drive Safely campaign.
A few weeks later, those people were far more likely to agree to erect a large, unsightly billboard in their gardens displaying the same message when compared to a neighbourhood that hadn’t first been asked to display postcards.
In another example, a health centre found that patients were 18% less likely to miss an appointment if they wrote down the appointment details themselves instead of having a receptionist do it for them.
The simple act of writing down the appointment details reinforced the fact that it was the patient’s obligation to turn up.
This technique can also be seen in the likes of the sunk cost fallacy – in which someone continues to spend time, money or effort on something because they don’t want to accept that they made a mistake.
Liking
The fifth principle – that people are more likely to agree to something when asked by someone they like – occurs as often accidentally as it does deliberately.
After all, some people are simply likeable, and through no conscious effort on their part, they find that others are more willing to do them favours.
But what makes a person likeable? Cialdini says that there are three important factors: we like people who are similar to us, who pay us compliments and who cooperate towards mutual goals.
Cialdini refers to a study in which a group of business students were almost twice as successful in a sales negotiation when they shared some personal information with the prospective investor and found something the two parties had in common before getting down to business.
However, there’s another factor at play in this example. It’s not just that the students asked the right questions; it’s the way those questions were asked.
Perhaps the most important thing that makes someone likeable is whether they appear genuine. People are generally good at spotting when someone is disingenuous, so it can be very hard to feign likeability in face-to-face interactions.
But via email, we have the time to curate what we say – something that’s particularly true for scammers, who often spend a great deal of time crafting templates for their emails.
Consensus
The final principle is consensus, which states that when people are unsure what to do, they follow the actions and behaviours of others.
Cialdini uses the example of a study in which hotels tried to get guests to reuse towels and linens.
It found that the most effective way of doing that wasn’t to highlight the benefits of reusing towels (such as it being environmentally friendly) but to simply state that the majority of guests already do this.
At first, it seems incomprehensible that we’re more effectively persuaded by an argument that’s essentially ‘everyone else is doing it’ rather than being presented with evidence, but it aligns with a lot of our other actions.
Consider the last time you were in an unfamiliar environment; did you not look at how others were acting and follow their lead?
The principle of consensus demonstrates that people don’t need to be given a reason to comply with a request; rather, they can be influenced by pointing to the actions of those around them.
Common social engineering attack techniques
Pretexting
This refers to the creation of a false scenario – or pretext – to contact victims.
In a typical social engineering scam, the pretext might be that there has been suspicious activity on your bank account or that you need to confirm your payment card details for an Amazon order.
Baiting
This is a specific type of phishing scam in which the scammers claim they have something beneficial for victims if they follow their instructions.
Whereas the examples we listed above use fear as a motivator – ‘someone is trying to break into your account’, ‘your package won’t arrive’ – baiting relies on curiosity and desire.
For example, a scam might direct the victim towards a website where they can supposedly download music, TV series or films. However, that website is designed to capture personal information or trick people into downloading infected files.
Baiting has also been used in physical attacks, with scammers leaving infected USB drives lying around conspicuously, waiting for someone to pick them up thinking that there might be something interesting on them.
Quid pro quo
Similar to baiting, quid pro quo attacks claim to help the victim – usually by offering a service – in exchange for information. The difference is that these types of attacks are supposedly mutually beneficial.
The prototypical quid pro quo attack was the Nigerian prince scam: the attacker has vast sums of money they need help transferring, and if you advance them the money to cover the transfer costs, you’ll be recompensed.
Attacks have become more credible since then. For example, an attacker might phone up employees claiming to be from technical support.
Eventually they’ll find someone who was genuinely waiting for assistance and who will let the scammer do whatever they want to their computer, assuming they’re a colleague solving the issue.
Scareware
This attack is designed to trick people into buying unnecessary software. It begins with a pop-up ad – generally imitating a Windows error message or antivirus program – claiming that the victim’s computer has been infected with malware.
Alongside the message, the ad will claim that you need to purchase or upgrade your software to fix the issue.
Those who comply end up installing bogus software that appears to scan their system but in fact either does nothing or installs malware.
Angler phishing
This is a specific type of phishing in which scammers pose as customer service representatives on social media.
They create accounts that imitate an official brand and wait for someone to post a complaint about that organisation on Facebook or Twitter.
The scammer will respond in one of two ways. They might link to what appears to be an official complaint channel or offer the victim something by way of an apology, such as a discount on their next purchase.
Both these approaches are designed to direct the victim to a site or email address controlled by the attacker, where they’ll attempt to steal the victim’s personal information.
How to protect yourself from social engineering
There are many ways you can protect yourself from social engineering attacks. For example, you should:
- Learn the most common techniques that criminals use in phishing attacks;
- Implement two-factor authentication to secure your accounts; and
- Ensure that your antivirus software is regularly updated.
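Two-factor authentication typically works via time-based one-time passwords (TOTP, RFC 6238): your authenticator app and the service derive the same short-lived code from a shared secret and the current time. A minimal sketch of that derivation, using only Python’s standard library (the secret below is the RFC test value, not a real credential):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step=30, digits=6) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step                       # 30-second window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: SHA-1, secret '12345678901234567890', T=59
print(totp(b"12345678901234567890", timestamp=59, digits=8))  # 94287082
```

Because the code changes every 30 seconds, a phished password alone isn’t enough for an attacker to log in – though be aware that sophisticated scams now ask victims to hand over the one-time code too.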
Organisations that want to address the threat of social engineering should test employees’ susceptibility with a social engineering penetration test.
With this service, one of our experts will try to trick your employees into handing over sensitive information and monitor how they respond.
Do they fall into the trap right away? Do they recognise that it’s a scam and ignore it? Do they contact a senior colleague to warn them?
With this information, along with our detailed report containing our findings and guidance, you can pinpoint your security weaknesses and fix them before you’re targeted for real.
A version of this blog was originally published on 19 June 2020.
Read more: itgovernance.co.uk