Recently, I had a conversation with the leader of a large government agency about Artificial Intelligence (AI) and how to prevent Social Engineering attacks. It is refreshing to see how much care these leaders put into keeping public information safe and preventing its misuse.
Oxford Languages defines Social Engineering in the context of information security as,
“The use of deception to manipulate individuals into divulging confidential or personal information that may be used for fraudulent purposes.” (1)
According to Kevin Mitnick, a well-known social engineer, it is much easier to trick someone into revealing a password for a system than to exert the effort of hacking into the system. (2)
Here is a list of common Social Engineering attacks and their definitions: (3)
- Phishing: Using spoofed email addresses and links to trick people into providing personal information.
- Whaling: A variation of phishing that targets high-level business executives and government agency heads.
- Vishing: Using voice communication, like phone calls, to trick individuals into providing personal information.
- Baiting: Luring victims into providing sensitive information by promising something of value for free.
- Honey Trap: Pretending to be romantically interested to lure victims into revealing information or paying money.
- Pretexting: Creating a false pretext or scenario to manipulate individuals into providing information or taking action.
- Tailgating: Gaining unauthorized access to a restricted area by following an authorized individual.
- Quid Pro Quo: Offering a benefit in exchange for information, like offering IT support in exchange for login credentials.
- Watering Hole: Compromising websites frequented by the target to infect them with malware.
- Impersonation: Pretending to be someone else to gain trust and access to information or systems.
How do you build a system (possibly using AI) designed to help users determine the status of a process like a driver license renewal without exposing them to possible Social Engineering attacks?
Well, you ask the AI itself for help.
Here is what I have done:
- I created an AI Assistant.
- I trained it with content related to the mobile driver's license (mDL).
- I asked the AI Assistant to analyze its own vulnerabilities in relation to Social Engineering attacks.
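The three steps above can be sketched in code. This is only a minimal illustration, not the actual system: the function `ask_model` is a hypothetical placeholder for whatever LLM backend powers the assistant, and the content string is invented for the example.

```python
# Sketch of the three-step setup: create an assistant, ground it in mDL
# content, then ask it to audit that content for Social Engineering risk.
# `ask_model` is a HYPOTHETICAL placeholder; a real system would call an
# actual LLM API here instead of returning a canned echo.

def ask_model(system_prompt: str, user_prompt: str) -> str:
    # Placeholder so the sketch runs end to end without a live model.
    return f"[model reply to: {user_prompt[:40]}...]"

# Step 2: the content the assistant is trained/grounded on (invented sample).
MDL_CONTENT = "How to enroll in the mDL program, common error messages, ..."

def build_assistant(content: str):
    # Step 1: an assistant that answers only from the provided content.
    system = (
        "You are a help assistant. Answer only from the content below.\n"
        f"CONTENT:\n{content}"
    )
    return lambda question: ask_model(system, question)

def self_audit(assistant) -> str:
    # Step 3: ask the assistant to rate its own content's risk.
    return assistant(
        "On a scale of 1 through 10 (10 = highest risk), what are the "
        "chances the provided content could be used for a Social "
        "Engineering attack? Explain your rating."
    )

assistant = build_assistant(MDL_CONTENT)
print(self_audit(assistant))
```

Swapping the placeholder for a real API call is all that separates this sketch from a working prototype; the structure of the prompts is what matters.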
Here is the exchange:
AI Assistant: Hello, how may I help you today?
User: What is the document about?
AI Assistant: The document provides information and guidance on the mobile driver’s license (mDL) program. It covers various aspects such as how to enroll, common error messages, what to do if your physical DL/ID card is expired or lost, where the mDL is stored, how to use the mDL for identification and verification, and more.
User: List the Social Engineering Attacks mentioned in the document.
AI Assistant: The document mentions several types of social engineering attacks, including Phishing, Whaling, Vishing, Baiting, Honey Trap, Pretexting, Tailgating, Quid Pro Quo, Watering Hole, and Impersonation.
User: On a scale 1 through 10 (with 10 being the highest risk) what are the chances that the current content could be used for the purpose of a Social Engineering attack?
AI Assistant: The content provided about the mDL program itself does not contain specific information that could easily be exploited for a social engineering attack. The details shared are primarily about how to manage your mDL, its security features, and what to do in case of certain scenarios. Therefore, the risk of this content being used for a social engineering attack is relatively low. I would rate it a 2 out of 10 on the scale of risk. (4)
The approach above represents a first step toward using the AI itself to discover the system's vulnerabilities. The idea can be enhanced in several ways. For example, separate AI systems (Agents) could check on each other, avoiding the bias of a system checking on itself. We could also have probed further by asking the AI Assistant to list the steps needed to perpetrate a Social Engineering attack based on the provided content, to see whether any were feasible. Finally, the AI Assistant could be used to improve its own security through questions such as: "Are there any changes I could make to this specific content to make it even less vulnerable to Social Engineering attacks?"
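The separate-Agents idea above can also be sketched: instead of the assistant rating itself, an independent auditor model reviews the assistant's content. Here `call_auditor_model` is a hypothetical placeholder for a second, independent LLM backend, and its canned reply is invented for the example.

```python
# Sketch of cross-checking Agents: an auditor model, independent of the
# assistant, rates the assistant's content for Social Engineering risk.
# `call_auditor_model` is HYPOTHETICAL; in practice it would call a
# different LLM than the one serving the content, to reduce self-bias.

def call_auditor_model(prompt: str) -> str:
    # Placeholder reply so the sketch runs without a live model.
    return "Risk rating: 2/10. No credentials or verification shortcuts found."

def audit_content(content: str) -> str:
    prompt = (
        "You are a security auditor independent of the assistant that "
        "serves this content. Rate, on a scale of 1 through 10, how "
        "usable this content is for a Social Engineering attack, and "
        "list any changes that would make it less vulnerable:\n" + content
    )
    return call_auditor_model(prompt)

print(audit_content("mDL enrollment steps and common error messages"))
```

The key design point is that the auditor sees only the content, not the assistant's own self-assessment, so its rating is formed independently.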
What other ideas do you think I should try?
Resources
1) Oxford Languages – https://languages.oup.com/google-dictionary-en/
2) Social Engineering by Christopher Hadnagy – https://www.amazon.com/Social-Engineering-Art-Human-Hacking/dp/0470639539
3) Copado Team – https://www.copado.com/resources/blog/12-types-of-social-engineering-attacks-to-look-out-for
4) SICURA® AI Platform – https://www.nohold.com/