Learn Social Engineering, page 38
There are still loads of hygiene things that all organizations should do (and most aren't yet doing all of them): patch your software, protect your privileged credentials from credential theft, and modernize your operating systems, middleware, and applications (and turn the security features on!). As you continue to implement and evolve your strategy, keep thinking about the economics of both your organization and the adversary in order to make the best possible defensive decisions. Think about human frailty, both within your organization and among your adversaries, and develop your strategy appropriately. Good luck, cyber warriors!

special case. It is rather challenging to covertly investigate a member of the IT team. We had to think out of the box about how we would accomplish our task, and we eventually came up with a plan we felt would work. The CSO was a retired FBI agent and still had ties to the bureau. We told him that it would not be unusual for him to get a call warning him about the potential compromise of the network or of the laptops of the executive staff (who had traveled overseas recently). We would pretend to be private contractors working for a certain government agency investigating an undisclosed cyber threat, and because of who we were supposed to be and what we did, we would claim that we could not share the details of our investigation with him. According to the plan, we would show up at the location while our suspect was working and had his laptop with him. Following this strategy, we went into the CSO's office, called the CIO, and delivered an Oscar-worthy performance convincing him of who we were and what we were there to do. The CSO confirmed our story and told the suspect that he had received the call and knew we were coming.
The suspect was initially upset that the CSO had not shared any of this with him prior to our arrival, but we convinced him that the CSO had been instructed not to share the information with anyone pending our arrival. We asked the CIO and his staff to help gather all laptops belonging to executives who had traveled overseas during the last three months, knowing, of course, that he was one of them. We forensically imaged a total of 12 laptops as part of the scenario. Carved internet artifacts showed that the suspect was in fact using a web-based interface to log into the CEO's and other executives' email accounts, among many other things they were not aware of. Once we had recovered the proof from his laptop, we returned to the client's location and confronted the CIO with the evidence. When he saw the printouts, he admitted to every instance of misconduct we had discovered.
Daniel Weis
Dan has been in the IT industry for over 20 years (since 1995). He has worked in government, the charitable sector, system integrators, business, and industry/infrastructure, in both technical and solution capacities, through to providing security services and security consulting. Currently, Dan heads up the security team at Kiandra IT. As the red team leader, he performs penetration testing and security services, provides guidance to testers, and tests some of the most secure environments in Australia.
Dan is also a regular on the speaker circuit, presenting at various conferences and events on cyber security, cybercrime, hacking, and the darknet, and he has a number of published articles and media appearances on security to his name.
Social engineering is defined as any act that influences a person to take an action that may or may not be in their best interest, such as revealing confidential information, clicking on a link, or opening an attachment.
Social engineering can be very easy and often yields great results.
Steve Riley has one of the oldest and best presentations out there on defending layer 8, and I highly recommend it. Steve identifies the following types of exploits, which I can confirm work great for us on engagements all the time; we have incorporated his presentation into our training for testers.
Diffusion of responsibility
If targets can be made to believe that they are not solely responsible for their actions, they are more likely to grant the social engineer's request. The social engineer may drop names of other employees involved in the decision-making process or claim another employee of higher status has authorized the action.
The very important person says you won't bear any responsibility.
Chance for ingratiation
If targets believe that compliance with the request enhances their chances of receiving benefits in return, the chances of success are greater. This includes gaining an advantage over a competitor, getting in good with management, or giving assistance to an unknown, yet sultry-sounding, female (although it is often a computer-modulated male voice) over the phone.
Look at what you might get out of this!
Trust relationships
Often, the social engineer expends time developing a trust relationship with the intended victim, then exploits that trust. Following a series of small interactions with the target that were positive in nature, the social engineer moves in for the big strike. Chances are the request will be granted.
He's a good guy, I think I can trust him.
Moral duty
Encouraging the target to act out of a sense of moral duty or moral outrage enhances the chances for success. This exploit requires the social engineer to gather information on the target, and the organization. If the target believes that there is a wrong that compliance will mitigate, and can be made to believe that detection is unlikely, chances of success are increased.
You must help me! Aren't you mad about this?
Guilt
Most individuals attempt to avoid feeling guilty if possible. Social engineers are often masters of psychodrama, creating situations and scenarios designed to tug at heartstrings, manipulate empathy, and create sympathy. If granting the request will lead to avoidance of guilty feelings, or not granting the requested information will lead to significant problems for the requestor, these are often enough to weigh the balance in favor of compliance with the request.
What, you don't want to help me?
Identification
The more the target is able to identify with the social engineer, the more likely the request is to be granted. The social engineer will attempt to build a connection with the target based on intelligence gathered prior to, or during, the contact. Glibness is another trait social engineers excel at and use to enhance compliance.
You and I are really two of a kind, huh?
Desire to be helpful
Social engineers rely on people's desire to be helpful to others. Exploits include asking someone to hold a door open or to help with logging on to an account. Social engineers are also aware that many individuals have poor refusal skills, and they rely on a lack of assertiveness to gather information.
Would you help me here, please?
Cooperation
The less conflict with the target the better. The social engineer usually acts as the voice of reason, logic, and patience. Pulling rank, barking orders, getting angry, and being annoying rarely work to gain compliance. That is not to say that these ploys aren't resorted to as a last-ditch attempt to break unyielding resistance.
Let's work together. We can do so much.
Fear
This is normally the final stand. A social engineer will use fear to try to coerce the target. This can involve outright threats, and it usually happens when the mark fails to cooperate, or out of the social engineer's inexperience or frustration at a lack of success with the mark.
Don't you know who I am? If you don't help me I'm going to make sure you get fired!
These exploits get leveraged in all social engineering attacks, such as vishing, phishing, and smishing.
The success of an attack depends upon a number of factors including:
Type of person and position: Are they customer-facing, such as a service desk person or a receptionist? If so, they are more likely to help.
How busy they are: Similar to the previous point, is their objective to move on to the next call or to the next task?
Male or female: On average, I find that we have a 40% better success rate using female social engineers than male ones; targets tend to perceive women as more trustworthy.
How social they are: It is typical to find that people who have a large social media presence and are very public are more likely to respond to social media requests and to emails containing pictures, for example. Much of the time, these individuals crave that attention.
Education: How tech-savvy is the user, how aware are they of social engineering attacks, and do they have a heightened level of suspicion?
As with all of the previous points, do your homework and know your mark.
Phishing
Phishing falls under the category of social engineering, and it always has been, and will continue to be, the easiest way into most organizations today. Phishing is dangerous because it usually bypasses all the defenses in place and has a low likelihood of detection.
Everyone knows the common indicators as follows:
The sender is unknown, or you are not expecting an email from the person
Similar-sounding domain names, such as eBay-secure.com, paypol.com, and so on
Incentive-based surveys, prizes
Missing logos, or spelling and grammatical mistakes
Generic greetings
Links with alternate URLs, such as shorteners (tinyurl, bit.ly, and so on)
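The lookalike-domain indicator above can even be screened for automatically. Below is a minimal sketch of one common approach, flagging domains within a small edit distance of a protected brand; the brand list and threshold are illustrative assumptions, not taken from the text:

```python
# Hypothetical lookalike-domain check: flags typosquats such as
# "paypol.com" that sit close to a known brand domain.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Illustrative brand list -- in practice this would be your own domains.
KNOWN_BRANDS = ["paypal.com", "ebay.com", "adp.com"]

def lookalike(domain: str, max_dist: int = 2) -> bool:
    """True if the domain is close to, but not exactly, a known brand."""
    d = domain.lower()
    return any(0 < edit_distance(d, b) <= max_dist for b in KNOWN_BRANDS)
```

Note that this simple check catches character-swap typosquats (`paypol.com`, `adpp.com`) but not composite names like `eBay-secure.com`, which need an additional substring or keyword check.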
There are a number of reasons why they continue to work:
The human element: sometimes the user knows it looks dodgy but continues anyway out of curiosity or confusion.
People have a natural desire to be helpful (and curious).
The user is distracted or tired, and it only takes one slip of concentration (exhaustion from a newborn baby, for example).
The user is lacking in cyber security awareness.
The user is expecting a package or similar and mistakes the phish for a real email.
Fear. A classic social engineering tactic is to use fear to invoke an immediate response without thinking, such as a speed camera fine notification, an email from the CEO, and so on.
Each day, phishing emails get more sophisticated and harder to spot, which is why it is important for you to stay abreast of the latest techniques and campaigns.
Recent campaigns leverage utility-bill and Office 365 scenarios.
Classic scenarios that work well for us include new-system scenarios, such as new email archiving, a new AV product, or a new cloud service, and merchandising and free stuff (people go crazy over the word free). Fake notifications from the likes of Dropbox and SharePoint have also yielded success in the past. It is important that the campaign looks and feels as real as possible. During recon, if we identify through metadata or other public information that an organization is using ADP for its payroll, we will create a new site and domain called adpp.com or similar and use that.
If we identify an accounts-payable person through LinkedIn, they are the perfect person to send a fake, payloaded outstanding invoice to. It should be addressed to them, of course; no generic names!
Again, the targeted email or global campaign should be specific and seem legitimate to the target.
Along with the previous points, I can tell you that my team and I get into a large number of heavily secured environments using social engineering. This includes phishing, physical access, USB drops, and fake/evil Wi-Fi access points.
Our phishing assessments yield, on average, a 20% click rate, with 25% of people happily providing us their passwords. We also have one or two repeat offenders on every single engagement: a repeat offender is someone who comes back to the phishing site two or more times and gives us their password each time, just in case it didn't work the first time.
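Those rates compound. Reading the 25% as a fraction of clickers (an assumption; the text is ambiguous on this), the overall credential yield works out to roughly 5% of all recipients. A quick back-of-envelope sketch, with the 200-recipient campaign size chosen purely for illustration:

```python
# Back-of-envelope phishing campaign metrics using the rates quoted above.
# The 200-recipient campaign size is an illustrative assumption.

def campaign_yield(recipients: int,
                   click_rate: float = 0.20,
                   submit_rate: float = 0.25) -> tuple[int, int]:
    """Return (clickers, credential submitters) for a campaign.

    submit_rate is read as the fraction of *clickers* who also enter a
    password, so the overall credential yield is click_rate * submit_rate,
    i.e. about 5% of recipients.
    """
    clicks = round(recipients * click_rate)
    creds = round(clicks * submit_rate)
    return clicks, creds

clicks, creds = campaign_yield(200)  # → (40, 10)
```

Even at these modest percentages, a single credential is usually enough of a foothold, which is why the click rate alone understates the risk.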
We have breached physical environments by arriving on-site dressed as an air-conditioning/service guy ("We have been alerted to an issue with the HVAC system in your server room and we're here to investigate", or "We are here to test the fire alarm"), and through imitation of legitimate users; when imitating a legitimate employee, we leverage classic tactics such as tailgating.
On one of my earliest engagements, I remember performing an assessment for a large internet marketing company. This company had two wireless networks, a guest and a corporate, like most organizations today. Obviously I was after the passwords for those networks. From the outside, they were quite secure, firewalls, IPS, MFA, and so on. So, I called the receptionist—Hi, this is Dan from XYZ, I'm working with Bill in sales (of course I didn't know Bill from a bar of soap, I just got his details off LinkedIn). Bill told me I should contact you to get hold of the wireless password, so I can set up for a presentation I'm doing for you guys on Friday. She responds to me, Oh sure, which password were you after? The guest or the corporate? I'm playing stupid—I think Bill told me I need the corporate one. She then replies, I tell you what, why don't I email both of the passwords to you, and you can work out which one you want to use? I'm like, That sounds great. So, she sent them through and I finished early that day.
We also have success using USB drops. In the old days, with earlier versions of Windows, we could get away with our own agents and autorun payloads, but these days we leverage tools such as the USB Rubber Ducky (https://hakshop.com/products/usb-rubber-ducky-deluxe) to generate our own shellcode and bypass restrictions.
Please refer to the link https://www.packtpub.com/sites/default/files/downloads/LearnSocialEngineering_ColorImages.pdf for the images of this chapter.
Ask the Experts – Part 3
Raymond P.L. Comvalius
Raymond Comvalius is an independent IT architect and trainer from the Netherlands. He has been active in the IT industry for more than 30 years, of which 20 years were focused on Microsoft infrastructure products for both government and financial institutes. Raymond is the author of multiple books on Windows and security. As an architect, he supports organizations in IT strategy and realization of their next-generation workplace infrastructures. Raymond is actively involved with the national and international IT communities and has been a speaker at multiple international Microsoft events. Raymond runs a blog at www.nextxpert.com.
Raymond on the future of pretexting
We have seen that pretexting has become easier with mail and telephony, as they traditionally have low levels of security built in. This allows pretexters to set the scene using genuine-looking mails or well-practiced telephone calls. As an organization, it is hard to prevent pretexting without taking very user-unfriendly measures. As a consultant, I have seen companies that require personnel to be present onsite for a password reset, or that do not allow any form of remote access to the office. It is debatable whether these kinds of measures really prevent bad things from happening.
As every organization defines where to draw the line, outcomes will differ markedly depending on the type of business, culture, and the risk appetite that comes with it. A bank, for instance, will handle things quite differently to an educational institute.
While these are very different institutions, the same issues arise when assets are at stake. Both types of organizations require strict processes and procedures to prevent pretexters from successfully setting the scene and obtaining high-profile assets. Still, practice proves that, as an organization, you are always one step behind when it comes to the use of technology. History shows how the industry was surprised by the abuse of telephony and mail, and how improvements were then made to better check their authenticity.
In the future, we will see the options for pretexters move to new ways of setting up scams. I once had a very interesting conversation with fellow MVP and security guru Andy Malone, in which we brainstormed about what modern technology is about to offer in the way of new features for social engineering. We had both seen how new video-altering techniques may provide a new path for pretexters. For instance, there is the project by Prof. Matthias Niessner, who surprised the world in 2016 with real-time video manipulation technology: with a normal webcam and some existing video footage of a subject, an attacker can make the subject's face mimic the attacker's own expressions in a video. What if someone uses manipulated video for pretexting? If you ever thought that video was trustworthy, those times are over.
More recently, we have seen the Deepfakes project on Reddit, which uses AI to swap faces in video. In the beginning, the technology was often used for the manipulation of porn videos; those videos were quickly banned from the internet after the creation of numerous clips featuring all sorts of celebrities. But the technology that came with the application FakeApp also allows for other purposes, both good and bad. A nice example of a good purpose is using the technology in video interviews to remove gender or racial bias when hiring. The technology is evolving fast, and it allows faces to be swapped so seamlessly that we can no longer see that a video has been manipulated. Right now, we have a fantastic new way to set up the next pretexting scam.
