The Plebeian Dev



AI and Cybersecurity for 2025: Trends and predictions for the new quarter century

Jan 5th, 2025

First and foremost, I want to thank everyone who has taken the time to read through these blogs. This has been a great experience, not only for getting back on the horse of regularly writing but also for validating my own professional knowledge and building on my skills. Writing out the things that you learn really helps to solidify those concepts in my brain. I also want to remind everyone to take the time to give thanks and be grateful. I know there is always a lot of doom and gloom in cyber, but I try to see things as opportunities rather than just another messed-up weekend because some new vulnerability made it so. I hope 2025 brings a whole new meaning to your life and productivity.

With all the thanks and praise out of the way, I wanted to go into what I think 2025 will bring to the table as it relates to up-and-coming cybersecurity trends, and the things I think we as security practitioners, engineers, and administrators need to take into consideration, and even try to learn more about, to get ready for when the next great tech boom happens. The more I look, the more that boom seems to be coming from artificial intelligence. I will be honest, I am not that versed in AI, and as such, I want to use 2025 to learn more about it, how organizations manage these types of services, and what risks it brings along.

Prediction 1: AI and Data compliance will be talked about much more

If not talked about more, then enforced by organizations and governments alike. In 2024, we started to see more US states such as California, Colorado, and Virginia write laws around AI and consumer data privacy. Most notably, the California Privacy Rights Act, which amended the original CCPA back in 2020, paved the way for how states approach writing and enforcing such laws. In 2024, new regulations were voted into law that refine the procedures for data brokers and increase public awareness around how user data is managed.

With all this talk about big data sets and how to manage them, this sounds like something AI would love to deal with. That is where new platforms for Regulatory Technologies (RegTech for short) may fill in some gaps for organizations. Not only will these new technologies directly integrate with AI, but they will help to report risks and automate workflows to potentially mitigate new ones. Really, no one knows how AI will affect the technology landscape in 2025, but I would be willing to bet that this one comes true.

Prediction 2: Shadow AI

Riding the wave and buzz of AI, naturally everyone from your interns to your senior managers will want to leverage it to show that they are increasing productivity. This will cause security teams and data managers much more concern, since there is no good way to account for all the organizational data that may inadvertently leak when users are allowed to access and utilize these services and machine models. Shadow AI is much like shadow IT: things IT either can't account for or directly manage due to bad or overly permissive policies. 2025 will likely be the start of the conversation.

Prediction 3: More threat intelligence

Again, we know AI loves big data sets. Some of the biggest data sets that I worked with through 2024 were threat feeds and application logs. We will see more AI applied not only to the analysis of logs and feeds but possibly even more so in the response. Although AI itself isn't known for taking direct action per se, AI is good at summarizing, and I could see things like an AI assistant that can summarize a particular incident or investigation. AI has also shown itself to be good at correlating data and making it easier for users to prompt for complex queries. In 2025 we may see AI bridge the gap between the big data side of threat intel and the operational side, synthesizing feeds into intelligence that threat teams can use to better hunt and catch new and emerging threats.

Prediction 4: Deep Fakes will become a bigger problem

Another take for 2025 involves deep fakes and video or audio impersonations of famous or important people. We already saw such a thing in 2024 during the election cycle, when people received automated phone calls that appeared to sound like President Joe Biden telling folks NOT to vote in the primary. I believe these types of things will proliferate due to how much more access there is to AI models that can take a relatively small amount of data (such as a handful of YouTube videos), recreate someone's exact speech patterns, and have them say anything. There just aren't a lot of ways to combat this at the moment, which makes me believe that we will see a lot more of it in 2025.

Prediction 5: Accepting the Dead Internet

Lastly, I think the one thing people will start to realize and possibly talk about more is the Dead Internet theory and whether we are living in it. I mention this because we saw a huge spike in sock account usage and social media bot profiles throughout 2024. Many of these accounts are even powered by AI and will even start conversations amongst themselves to appear as legitimate users. Meta has announced that in 2025 they will be utilizing MORE AI accounts that they generate, saying that doing so will improve engagement on their platform. Once Meta starts, all of the other social media platforms will do the same for the sake of seeing platform numbers grow and keeping investors happy watching numbers on PowerPoints go up.

So that is it, those are some of my predictions for 2025. All of these have come from recent events and articles that I have come across in my social feeds and in the intel-sharing spaces that I work through in my day job. I really hope that everyone has a great new year, that we learn a lot and grow, and that we hopefully don't have to deal with any major incidents that keep us up over the weekends. Have a great year, and thank you again for supporting and reading these blogs.

Kerberos and NTLM: Authentication protocols and their history

Dec 30th, 2024

When we talk about authentication here, what I really mean is authentication in Microsoft networked systems. The main protocols for such infrastructure are Kerberos and NTLM. Both have interesting histories and distinct mechanisms, making them suited to certain use cases. Nowadays, though, NTLM is seen as deprecated due to its weaker cryptography.

Understanding these protocols' evolution and operation is key to grasping how modern authentication systems work and to being a better IT systems engineer. So, let's take some time to go over the timeline in which these authentication protocols came into being and how they will look moving forward.

The Legacy Protocol that started it

NTLM (NT LAN Manager) is a suite of Microsoft security protocols introduced in the early 1990s with Windows NT 3.1. At the time, it was a significant step forward, offering challenge-response authentication to ensure that user passwords weren't transmitted directly over the network. Before NTLM, Microsoft, alongside IBM, used LAN Manager as a network operating system to provide system authentication, and it fell short on many security features. NTLM relies on hashed password values and uses these hashes in its authentication process, which was not a new concept but an improvement upon its predecessor.

The protocol's simplicity made it widely adopted, but that simplicity also introduced vulnerabilities. NTLM's reliance on outdated cryptographic methods like MD4 and its susceptibility to relay attacks and brute-force password guessing became increasingly problematic as cyber threats evolved. Despite these flaws, NTLM remains in use today for backward compatibility, especially in legacy systems. As of late 2024, Microsoft has deprecated NTLM (including LANMAN, NTLMv1, and NTLMv2), and Windows 11 24H2 removes NTLMv1 entirely; Negotiate is the default authentication method, using Kerberos and only falling back to NTLM when necessary or configured to do so. Now, speaking of Kerberos.
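To make that MD4 reliance concrete: the NT hash at the heart of NTLM is just an MD4 digest of the UTF-16LE encoded password. Here is a minimal sketch using common command-line tools with a made-up password (note that OpenSSL 3.x moved MD4 into the legacy provider, so you may need to add -provider legacy):

# Compute the NT hash (MD4 over the UTF-16LE password) of a hypothetical password
# On OpenSSL 3.x this may need: openssl dgst -md4 -provider legacy
printf '%s' 'P@ssw0rd' | iconv -f UTF-8 -t UTF-16LE | openssl dgst -md4

No salt, no iterations; that is exactly why brute-forcing and pass-the-hash style abuse of NTLM are so practical.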

The Rise of Kerberos

With Windows 2000, Microsoft introduced Kerberos as its default authentication protocol. Originally developed by MIT in the 1980s for the Project Athena network, Kerberos was designed to address many of the security shortcomings of earlier protocols like NTLM. In brief, Project Athena was an MIT project to create a campus computing environment where students could learn about computer science, since those types of mainframes were not generally used by students but rather by faculty. Kerberos derives its name from the mythical three-headed dog, reflecting the protocol's three core components: the client, the server, and the trusted Key Distribution Center (KDC).

Interestingly, the first publicly available version was subject to export bans, given that the United States government had controls over the export of technologies such as DES encryption. The development team then went on to create a new version without the encryption and named it "Bones", released under the BSD licensing model.

Kerberos relies on a "ticket-based" authentication system. This means that when a user attempts to log into Windows or another system using Kerberos, the KDC first issues a ticket-granting ticket (TGT) encrypted with the user's secret key. This TGT is then used to request service tickets, which grant access to specific network resources. The protocol also uses strong cryptographic methods, both symmetric and even asymmetric encryption, ensuring that credentials are not easily compromised or cracked.
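You can watch this flow yourself on a Linux client joined to a Kerberos realm using the standard MIT Kerberos tools; the realm and principal below are placeholders:

# Request a TGT from the KDC for a hypothetical principal
kinit alice@EXAMPLE.COM
# List cached tickets: you should see the krbtgt/EXAMPLE.COM TGT,
# plus service tickets appearing as you access resources
klist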

So, what are the key differences between the two?

NTLM uses a challenge-response authentication mechanism wherein a server challenges the client to prove its identity without transmitting a password. In contrast, Kerberos utilizes a ticket-based system, reducing the need for frequent password exchanges over the wire.

Kerberos is inherently more secure. It uses stronger types of encryption, and its ticketing minimizes vulnerabilities like the replay attacks that NTLM is notoriously susceptible to. NTLM, by comparison, is much less secure, especially given its reliance on the older hashing algorithms mentioned throughout this blog. Hopefully you learned a bit about the difference between NTLM and Kerberos.

Token Binding and Session Protection: Mitigating Session Replay with Phishing Resistant MFA

Dec 10th, 2024

If you have been reading these blogs, you will see a consistent mention of one particular subject: phishing. It is by far the biggest attack vector and the one I deal with on a day-to-day basis at my job. Something we are also working on is improving MFA as a whole, including having controls for things such as session hijacking.

As more organizations and individuals adopt Multi-Factor Authentication (MFA) to mitigate risks, attackers adapt, employing sophisticated techniques to bypass your security. Token binding emerges as a critical technology for bolstering authentication mechanisms, offering a more robust defense against phishing attacks that bypass MFA by tricking your users into handing over that precious session information. Before we get ahead of ourselves though, let's learn a bit about what session tokens are and how binding them protects your users.

Understanding Sessions and How Token Binding Happens

Token binding is a cryptographic process that ties authentication tokens to a specific client or device. When a client (e.g., a web browser or an application) interacts with a server, it generates a cryptographic key pair. The private key remains securely stored on the client device, while the public key is shared with the server during the authentication process. The server then binds the authentication token, such as a session cookie or OAuth token, to the public key. If you are not familiar, session cookies typically manifest on your user's computer as small files stored by the web browser, while OAuth tokens may be stored in application memory or local storage, depending on the particular implementation. You can make this easy on your users by using Microsoft's Authenticator app as the binding device.
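As a rough sketch of the key material involved (purely illustrative, not any particular product's implementation), a client could generate an elliptic-curve key pair like this and register only the public half with the server:

# Generate a private key that never leaves the client device
openssl ecparam -name prime256v1 -genkey -noout -out client_binding.key
# Derive the public key that gets shared with the server during authentication
openssl ec -in client_binding.key -pubout -out client_binding.pub

In a real implementation the private key would live in a hardware-backed store (TPM, Secure Enclave) rather than a file on disk.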

Token binding makes sure that the authentication token cannot be used by any other device or client, even if intercepted in an Attacker-in-the-Middle (AitM) scenario. We will talk more about that later. At this point, the token becomes useless without the private key stored on the original client device, significantly mitigating the risks of stolen credentials or token replay attacks, where an attacker takes that session information and replays it in a different browsing session from a different IP and/or device. These are pretty interesting to see in the wild but should be addressed as soon as your IT team can handle them.

What are most attackers after? Phishing and the MFA vulnerabilities they exploit

Attackers are after those juicy user credentials, always and forever. MFA added an additional layer of security into the mix, often starting out with just a one-time password (OTP), push notification, or biometric authentication. However, attackers have developed advanced techniques like adversary-in-the-middle (AiTM) attacks, where they intercept and relay authentication tokens in real time, effectively bypassing MFA protections.

Here's an example: in an AiTM phishing attack, the attacker tricks your user into entering their credentials and MFA code on a fake website. The attacker then forwards those details to the legitimate website in real time, successfully gaining access to the victim's session. This type of attack highlights the limitations of traditional MFA solutions in protecting against sophisticated phishing methods.

How we prevent MFA-resistant phishing

Token binding addresses the vulnerabilities mentioned above by ensuring that authentication tokens cannot be reused or hijacked by attackers. The binding process ties the token to a specific client's cryptographic keys, a technique known as cryptographic anchoring. Even if an attacker intercepts the token, they cannot use it without the private key stored on the legitimate device.

In an AiTM scenario, the attacker may relay authentication tokens, but without the private key corresponding to the token binding, the server will reject any unauthorized requests. This makes it virtually impossible for the attacker to impersonate the user. Token binding ensures that sessions remain tied to the integrity of the original client. Any attempt to hijack or reuse a session token from a different device or browser will fail.

How can your organization adopt this?

Token binding as a feature is supported by modern web protocols like TLS and HTTP/2. It’s also integrated into certain identity frameworks and platforms such as Microsoft 365 and Conditional Access, enhancing their ability to defend against advanced threats. Organizations should be looking to adopt token binding and should ensure their systems and clients are compatible and configured to leverage this feature effectively moving into 2025.

As attackers constantly refine their techniques to bypass traditional cyber defenses, the need for more granular security measures like token binding becomes very apparent. When we cryptographically bind user session tokens to specific clients, we eliminate the risk of token reuse, rendering phishing attacks that rely on the interception or relay of a stolen token ineffective. Coupled with good overall MFA practices, token binding represents a big step forward in safeguarding our organizations' digital identities and maintaining the integrity of online authentication systems.

The Technicals on Multifactor Authentication in Microsoft Entra ID

Dec 7th, 2024

Multi-factor Authentication, better known as "MFA", is one of the most important security measures that you can have for your Microsoft 365 environment. When it comes to 365, MFA is managed through Microsoft Entra ID. MFA enforces additional verification layers, making sure only those who are authorized can access your organization's systems and resources, even if a user's primary credentials (their password) are compromised. In this blog post I will lay out some of the technical details of Entra ID MFA.

First, Authentication Protocols

Microsoft Entra ID supports modern authentication protocols such as OAuth 2.0, OpenID Connect, and SAML. These protocols enable secure token-based authentication for MFA workflows. When you or a user signs in, Entra ID evaluates the authentication context based on the requesting application and the policies tied to the user or group.

Primary Authentication

A user first enters their credentials (credentials in this context being a username and password). For hybrid identities, these credentials are validated against Entra ID's Kerberos or NTLM integration with on-premises AD; hybrid meaning the organization has a mix of on-premises domain controllers running Active Directory and a directory in the cloud. Cloud-only identities in Entra ID can be secured with password hash synchronization or pass-through authentication. This primary authentication is often called the first factor in the authentication chain.

Secondary Authentication (the MFA part)

After the primary authentication, Entra ID invokes the MFA challenge based on predefined policies. Admin-enforced rules requiring MFA for all users or certain groups are highly encouraged and are utilized in big organizations. There are also Conditional Access (CA) policies: dynamic rules requiring MFA under specific risk conditions, such as unfamiliar locations or IP anomalies. We will talk about these in more detail later in this post.

So how does the MFA workflow operate?

The MFA workflow involves real-time decisions and token generation by Microsoft Entra ID. A user attempts to log into a Microsoft 365 service, such as Teams or Outlook. Entra ID then intercepts the access request and determines whether the user must undergo MFA, considering CA policies and session context. Some environments let sessions persist, meaning users do not have to go through MFA every time they log in to a new service, but this is a configuration that needs to be set up additionally and does not work like this out of the box.

Entra ID calculates a real-time risk score using inputs from the Identity Protection system. Several factors are evaluated, such as unusual IP addresses or geolocation. Entra aggregates this information, and if it deems the IP or location is not normally seen for that user, that login can get flagged and heighten the user's risk profile. Entra also checks policies for device compliance, such as whether the device is enrolled in Intune or is corporate owned. If you have policies dictating that only certain devices can authenticate, you may see logon failures due to that specific policy violation.

Once the previous steps are completed, MFA is triggered, and the user is prompted to complete the secondary authentication factor. Supported methods include Microsoft Authenticator push notifications and time-based one-time passwords (TOTP). This either looks like a prompt asking you to enter a 2-digit number shown where you are attempting to log in, or a login form asking for a 6-digit code. Generally, when you are onboarding new users, you will walk them through setting up the MS Authenticator app and test to make sure they are receiving the prompts correctly.

SMS or voice calls can deliver a one-time passcode to the user's phone. There are also hardware tokens such as FIDO2 keys or OATH tokens; these take the form of a physical piece of hardware, such as a USB-C dongle, that you must have on hand at the time of your login attempt. These MFA methods are generally handled by RESTful APIs that integrate with Entra ID, which then interact with the Microsoft Authentication Broker for that secure verification.

Token issuance happens upon a successful MFA verification: Entra ID generates a short-lived access token and a refresh token. These tokens are encoded as JSON Web Tokens (JWT) and are signed with Microsoft's public/private key infrastructure (PKI) to prevent any sort of tampering. The access token then allows the user to interact with Microsoft 365 services until it expires, while the refresh token enables silent reauthentication.
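Since these are JWTs, you can peek at the claims yourself. A JWT is three base64url-encoded segments joined by dots; here is a rough bash sketch that decodes the payload of a token held in a shell variable (illustrative only, and never paste production tokens into random online tools):

# $token holds a JWT, e.g. from a debugging session (placeholder below)
token="<paste a JWT here>"
# Grab the second segment and convert base64url to regular base64
payload=$(printf '%s' "$token" | cut -d '.' -f2 | tr '_-' '/+')
# Pad to a multiple of 4 characters before decoding
case $(( ${#payload} % 4 )) in
  2) payload="${payload}==" ;;
  3) payload="${payload}=" ;;
esac
printf '%s' "$payload" | base64 -d   # prints the JSON claims (aud, exp, amr, ...)

The signature in the third segment is what prevents tampering; decoding the payload does not verify it.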

Improving things with Conditional Access and Adaptive MFA

Microsoft Entra ID augments MFA with advanced Conditional Access capabilities, providing even more granular control over authentication triggers. Conditional Access policies use Microsoft backend machine learning to dynamically assess risks, such as detecting impossible travel via simultaneous logins from different geolocations, and can enforce controls such as sign-in frequency, making sure MFA is re-enforced after predefined intervals or certain types of end-user behavior.

Session Context helps to evaluate factors like device health and application types when authenticating. For organizations leveraging Identity Protection, Microsoft’s adaptive authentication capabilities adjust MFA requirements in real time based on evolving threats. This includes actions like blocking access entirely or requiring additional factors for high-risk users.

Note that you must have all your end users licensed with either an Azure AD Premium P1, Azure AD Premium P2, or Microsoft 365 Business Premium license.

But how does Microsoft enforce security through their MFA?

Microsoft relies on a mix of their secure cloud-based infrastructure and client-side integrations. In the cloud back end, Entra ID stores MFA configurations and risk policies in its distributed global infrastructure. Authentication logs and other telemetry are then processed in real time, with data fed into Microsoft's Security Graph for broader threat intelligence.

On the client side, the Microsoft Authentication Library (MSAL) is used by applications to handle MFA challenges programmatically. Browser-based applications rely on OpenID Connect flows, where Entra ID redirects users to complete MFA before issuing tokens.

Microsoft's implementation of MFA through Entra ID (Azure AD) acts as a sophisticated, layered system designed to protect identities against account compromise. By combining real-time risk analysis with AI, robust authentication protocols, and seamless user experiences, Microsoft does a pretty good job of ensuring organizations using Microsoft 365 can be reasonably safeguarded without compromising productivity, which in my experience is the number one complaint about MFA in general. Got to keep those end users happy.

Protecting From Email Impersonations: How SPF, DKIM, and DMARC Work

Dec 1st, 2024

As ubiquitous as email is in today's day and age, it stands as one of the older means of communication relative to the rise of the world wide web. This primes it to be one of the most sought-after targets for cyber threats, with spam and phishing being the primary attack vectors against organizations. Attackers are getting better at posing as organizations and hiding their true identities through email, and it will only get tougher with the rise of AI.

One way to start combating this type of abuse is to use special DNS records that help ensure legitimate, original, and trustworthy sources of email are the ones reaching those we wish to contact. In today's blog post, I will go into some detail as to what these records are and how they can help protect you and your organization from common email impersonations. First, let's ask the simple but important question.

What Are DNS Records?

Email DNS records are DNS (Domain Name System) configuration entries that help authenticate email messages. They serve as a digital verification system. They can provide information like which servers are authorized to send emails on behalf of your domain, or whether an email is digitally signed through PKI (Public Key Infrastructure). Without these records, bad actors can easily spoof emails, impersonating trusted vendors or folks within the org to deceive end users.

So What Records Should I Have to Make Sure Our Email is Secure?

SPF (Sender Policy Framework) records indicate which mail servers are authorized to send emails for a domain. When an email is sent, the receiving mail server checks the SPF record to confirm whether the email originated from an approved server. If the sender isn't listed, the email may be marked as spam or rejected.
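You can check any domain's SPF policy with a simple TXT lookup; the domain and addresses below are hypothetical, but the record shape is typical:

# Look up the SPF policy a domain publishes
dig +short TXT example.com
# A typical (hypothetical) answer: servers in 203.0.113.0/24 and
# Microsoft 365 may send mail; everything else hard-fails
# "v=spf1 ip4:203.0.113.0/24 include:spf.protection.outlook.com -all"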

DKIM (DomainKeys Identified Mail) adds a digital cryptographic signature to emails, ensuring their contents haven't been tampered with during transmission. With the cryptographic key pair, the sender's domain publishes the public key in its DNS, and the private key (used by the sending mail server) signs outgoing messages. The recipient's server verifies the signature, which ensures the email is authentic.
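The public key lives at a selector-specific DNS name so verifying servers can find it; the selector and domain here are placeholders:

# Fetch the DKIM public key for the selector "selector1" of a hypothetical domain
dig +short TXT selector1._domainkey.example.com
# Expected shape of the answer: "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."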

DMARC (Domain-based Message Authentication, Reporting, and Conformance) builds on SPF and DKIM by specifying what action a receiving server should take when an email fails authentication. For example, the domain owner can choose to reject, quarantine, or monitor emails that don’t pass SPF or DKIM checks. DMARC also provides reports, helping domain owners understand and mitigate unauthorized email activity. So, we know the three most important records for protecting your email but what does this look like in action?
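In action, DMARC policy is published at the fixed _dmarc subdomain; a hypothetical quarantine policy with aggregate reporting looks like this:

# Look up the DMARC policy for a hypothetical domain
dig +short TXT _dmarc.example.com
# "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"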

How These Records Protect Against Phishing and Spam

Phishing attacks often rely on email spoofing, where attackers forge the "From" address to appear as a trusted sender. Email records like SPF and DKIM help detect and block spoofed messages, making it harder for attackers to impersonate legitimate senders.

DKIM makes sure that the content of an email remains unchanged during its transit. This prevents attackers from intercepting and altering legitimate emails to insert malicious links or attachments.

Implementing email records not only reduces spam but also boosts your domain's reputation with big email providers. Emails from domains with proper authentication are less likely to be flagged as spam, ensuring legitimate communications reach their intended recipients.

DMARC reports provide valuable insights into unauthorized email activity, helping organizations fine-tune their email security policies and detect malicious actors targeting their domain. With this, you can take a proactive approach and flag and report domains that are being used in active attack campaigns.

Best Practices for Email Records

Implement all three: SPF, DKIM, and DMARC. Any comprehensive email authentication strategy will use at least these three to maximize protection. Monitor and update regularly in response to evolving threats, which means updating your records whenever necessary. As always, you will want to educate stakeholders. Train employees to recognize phishing attempts, as no system is foolproof.

These records are the cornerstone of email security. After reading this blog and implementing SPF, DKIM, and DMARC, your teams and organization will significantly reduce their risk of phishing and spam, safeguarding both brand reputation and users. With these configurations in place, email becomes a more secure and reliable communication channel for everyone. Even if it's as old as the dinosaurs.

Creating Secure Networks in Your Azure Environment

Nov 24th, 2024

One of the most critical parts of securing Azure VMs is making sure that your infrastructure is protected from outside threats. Luckily, Azure offers a ton of built-in tools and configurations to help secure your scalable network. By using these features, you will be able to protect the sensitive data you ingest, reduce attack surfaces, and ensure you are compliant with security standards. Here is how you would do that.

Design with security in mind from the get-go

Creating a well-designed network, let alone a virtual network (VNet), is quite the task. First off, VNets are isolated, logical networks within Azure that serve as the foundation of your environment. Within the VNet, you will want to segment things into subnets that bundle resources based on functionality and security requirements.
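As a sketch with the Azure CLI (resource names and address ranges here are made up), creating a VNet with a dedicated web-tier subnet looks like:

# Create a VNet with a segmented subnet for web-tier resources
az network vnet create \
  --resource-group my-rg \
  --name my-vnet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name web-subnet \
  --subnet-prefix 10.0.1.0/24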

Enable Azure Bastion for secure remote access to VMs without exposing RDP or SSH ports to the internet. This eliminates the need for public IP addresses on VMs, reducing potential attack vectors.

Network Security Groups (NSGs) for your VNet Firewalling

NSGs act as virtual firewalls for your VNets and subnets, allowing you to control inbound and outbound traffic. Only allow traffic you want and block everything else. For example, you can configure rules to allow only HTTP/HTTPS traffic to a web server while denying all other inbound traffic. Ensure your NSGs follow the principle of least privilege. Regularly audit rules to remove outdated or overly permissive configurations that could expose your environment.
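For example, an NSG rule permitting only HTTPS into that web subnet might be created like this (again with hypothetical names); everything not explicitly allowed is caught by the default deny rules:

# Create an NSG and allow inbound HTTPS only
az network nsg create --resource-group my-rg --name web-nsg
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name web-nsg \
  --name AllowHttpsInbound \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443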

Using Azure Firewall or Web Application Firewall (WAF)

You can enhance protection by deploying Azure Firewall to filter traffic at the network perimeter. The network perimeter is where your internal network meets the external connections. Azure Firewall provides centralized policy management and advanced threat protection capabilities, such as URL filtering and deep packet inspection.

If your VMs host web applications, integrate Azure WAF to safeguard against common threats like SQL injection and cross-site scripting (XSS). Azure WAF can be enabled through Azure Front Door or Application Gateway, providing robust protection for web-facing workloads.

Implement Private Endpoints

Azure Private Link lets you create private endpoints for your resources. This ensures that traffic between VMs and Azure services like Storage or SQL Database remains within the Azure backbone network, effectively bypassing the public internet. When you eliminate public endpoints, you significantly reduce the attack surface of your environment.
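A private endpoint for, say, a storage account can be wired up roughly like this; the subscription ID, names, and group-id are placeholders and depend on the target service:

# Create a private endpoint into the web subnet for a hypothetical storage account
az network private-endpoint create \
  --resource-group my-rg \
  --name storage-pe \
  --vnet-name my-vnet \
  --subnet web-subnet \
  --private-connection-resource-id "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorage" \
  --group-id blob \
  --connection-name storage-pe-conn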

Secure Communication with Encryption

Always encrypt data in transit using protocols like TLS 1.2 or higher. Configure Azure resources and VMs to use encrypted communication for all internal and external connections. For sensitive workloads, turn on Azure Disk Encryption to protect data at rest, and use Key Vault to manage encryption keys securely. Don't forget to rotate your keys: you don't want them to go stale or, in the event of compromise, to eventually be cracked.
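Enabling Azure Disk Encryption on a VM is a one-liner once a Key Vault is in place; the VM and vault names below are hypothetical:

# Turn on Azure Disk Encryption for a VM, with keys held in Key Vault
az vm encryption enable \
  --resource-group my-rg \
  --name my-vm \
  --disk-encryption-keyvault my-keyvault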

Monitoring and Incident Response

Enable Azure Monitor and Azure Network Watcher to track network activity and identify anomalies. Use something like Azure Sentinel, Microsoft's cloud-native SIEM tool, to analyze logs and detect threats in real time. You can also send network logs to a different SIEM tool; it doesn't matter which, just make sure you are tracking those logs.

Deploy Azure DDoS Protection to defend against distributed denial-of-service attacks. The Standard tier provides the enhanced mitigation and application-layer protection necessary to cover your monitoring and IR activities.

Testing Security and Review recommendations from Security Center

Performing regularly scheduled security assessments using Azure Security Center is a must. Its recommendations can guide you in improving your network security posture. Conduct penetration testing or simulated attack exercises to identify and address vulnerabilities.

If you review and implement these recommendations, you can build a robust and secure network for your Azure VM environment. Having a proactive approach to securing your Azure infrastructure ensures operational resilience and helps mitigate modern cyber threats and attack vectors.

Managing your tools: Tips on having a secure investigation environment and hardening your virtual machines.

Nov 20th, 2024

When working on security incidents, you will be analyzing sensitive data and interacting with potentially malicious, novel software. A virtual machine (VM) is an excellent choice for this purpose due to its flexibility and security features. However, ensuring the VM is secure is critical to protecting the host system and maintaining the integrity of the investigation. I will show you some steps to secure a basic Linux VM for cyber investigations and some choices you need to consider. Ask yourself first and foremost:

What is a reliable and Secure Host Environment?

Before configuring the VM, ensure the host machine itself is secure. Use a robust, up-to-date operating system, enable full-disk encryption, and regularly apply security patches. This is all easier said than done. Host-level security is crucial because a compromised host can jeopardize the VM's security. It also means being able to quickly deploy and manage any virtual host.

For my day to day, I personally find Kali Linux to be especially helpful given all the investigation tools baked into the OS itself. Kali is also unique in that the base operating system images come configured with security measures such as rootless mode, forensics mode and support for encryption.

What’s the right Virtualization Platform?

Choosing a trusted virtualization platform such as VMware, VirtualBox, or QEMU/KVM gives you a robust community with vast knowledge of managing and deploying VMs. Always download the platform from its official website and verify its integrity using checksums or digital signatures. Ensure the virtualization software is updated to the latest version to protect yourself from known vulnerabilities.

Keeping a Minimal Linux Distribution

For a cyber investigation VM, use a minimal Linux distribution like Debian, Ubuntu Server, or ParrotOS. A minimal installation reduces the attack surface by excluding unnecessary services and software. That may be difficult, since some of the OSes recommended here ship with hundreds if not thousands of tools included. The principle remains: keep things simple.

Harden your virtual machine

Review and disable services not required for the investigation; tools like systemctl or chkconfig can help manage services. Enable firewall rules: use iptables or ufw to block unnecessary incoming and outgoing traffic, and restrict access to only essential ports. Always remember to install security updates whenever you can. You can write a bash script to automatically update the VM using your distribution's package manager (e.g., apt or yum), and consider using automated tools like unattended-upgrades for critical updates. Also implement AppArmor or SELinux: these tools enforce strict access control policies on processes and applications. A sketch of these steps follows below.
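Here is what a minimal hardening pass might look like on a Debian-based VM (the disabled service is just an example; check what your image actually runs):

# Disable a service the investigation does not need (example service name)
sudo systemctl disable --now avahi-daemon
# Default-deny firewall, allowing only SSH in
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw enable
# Apply security updates now, and enable automatic updates going forward
sudo apt update && sudo apt upgrade -y
sudo apt install -y unattended-upgrades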

Network Hardening tips to consider

Configure the VM's network interface in NAT or host-only mode; avoid bridging the VM directly to the host's network, to limit exposure. Regularly take snapshots to preserve a clean state of the VM. This allows you to revert quickly if malware compromises the system. Disable shared folders or set them to read-only to prevent malicious files from affecting the host.
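If you are on VirtualBox, for example, snapshotting and restoring a known-clean baseline is a single command each way (the VM name is a placeholder):

# Snapshot the clean state before detonating or analyzing anything
VBoxManage snapshot "InvestigationVM" take "clean-baseline"
# Roll back to it after a risky session
VBoxManage snapshot "InvestigationVM" restore "clean-baseline"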

Encrypt Your Virtual Disks

When creating the VM, opt for encrypted virtual disks. This protects investigation data from unauthorized access if the VM files are compromised.

Deploy Investigation Tools in a Controlled Manner

Install only verified investigation tools from trusted sources. Consider using containers like Docker within the VM for running potentially dangerous software, providing an additional layer of isolation.

Log everything, all the time

Enable logging for the VM using tools like auditd or syslog. Monitor logs regularly for unusual activities. Tools like fail2ban can help automate responses to repeated unauthorized access attempts.
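A starting point on a Debian-based distro, with example watch rules you would tailor to your case:

# Install the audit daemon and fail2ban
sudo apt install -y auditd fail2ban
# Watch for writes or attribute changes to account files (example rules)
sudo auditctl -w /etc/passwd -p wa -k account-changes
sudo auditctl -w /etc/shadow -p wa -k account-changes
# Query recent events tagged with that key
sudo ausearch -k account-changes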

Sandbox Environment when and where you can

If the investigation involves handling malware, deploy the VM in a sandboxed environment or use hypervisor-level isolation. Platforms like Firejail or Cuckoo Sandbox can help. When I need a free sandbox, I usually use any.run; always be mindful of what samples you may be uploading, as you never want to inadvertently expose private information or possible PII. Securing a Linux VM is essential for cyber investigations to ensure data integrity, protect the host system, and maintain operational confidentiality. By following these steps, investigators can create a reliable and secure virtual environment to carry out their work.

PKI and how Certificates protect networks

Nov 8th, 2024

Network certificates are an essential security measure that enables encrypted, trusted communication between devices on a network. By verifying the identity of users and devices, network certificates help ensure that only authorized individuals and systems can access sensitive resources, reducing the risk of cyberattacks and unauthorized access.

What Are Network Certificates?

Network certificates, also known as digital certificates, are digital files that confirm the identity of devices or users on a network. These certificates are issued by a trusted Certificate Authority (CA) and contain valuable information, including the public key of the entity being certified, as well as details about the entity’s identity and the CA that issued the certificate. This information enables secure communication by providing a way to verify and encrypt connections between systems, like a client and server, or between two devices within a network.

How Network Certificates Work

Network certificates rely on Public Key Infrastructure (PKI), a framework that uses a pair of cryptographic keys, a public key and a private key, to secure data transmission. A straightforward way of looking at PKI is the mail service: a mailbox represents a public key. Anyone can send mail to that mailbox, but only the person with the private key can view its contents.

A user or device requests a certificate from a CA (this step is called certificate issuance), and the CA verifies the identity of the requester. Once verified, the CA issues a certificate containing the requester's public key, among other details. You can view the CA as the post office from the example above: a trusted source for issuing a mailbox and key, where the post office verifies each person's identity.
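In OpenSSL terms, the request step looks roughly like this; the subject name is a placeholder:

# Generate a key pair and a certificate signing request (CSR) to send to the CA
openssl req -new -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr \
  -subj "/CN=server.example.com"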

When a device or user wants to access network resources, the system checks the certificate against a list of trusted certificates to ensure it's valid and issued by a trusted CA. This validation step prevents unauthorized devices or users from accessing the network.

Once verified, the certificate allows encrypted communication using SSL/TLS (Secure Sockets Layer/Transport Layer Security) protocols, which protect data as it is transmitted over the network. This encryption ensures that sensitive data like passwords, financial information, and proprietary business data are secure from interception.

Certificates have an expiration date, though, and must be renewed periodically to maintain security and integrity. This makes sure that outdated or potentially compromised certificates are not used indefinitely, reinforcing security.
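Checking how long a certificate has left is a one-liner, which is handy to script into monitoring (file names and host are hypothetical):

# Print the expiration date of a local certificate file
openssl x509 -enddate -noout -in server.crt
# Or check a live endpoint's certificate
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -enddate -noout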

Why Network Certificates Are Essential for Security

Network certificates validate identities, ensuring that only trusted users and devices access the network. This trust is pivotal in preventing unauthorized access that could compromise data and lead to breaches.

By encrypting communication between devices, certificates protect data from interception and tampering. This is essential when transmitting sensitive information, such as client data or proprietary business details.

In Man-in-the-Middle (MitM) attacks, attackers intercept communication between two systems to steal or alter data. Certificates help prevent these attacks by authenticating both parties in a connection, ensuring secure transmission.

Many industries have data protection regulations that require encrypted data transmission. Certificates enable encryption and help organizations meet these standards, avoiding penalties and demonstrating commitment to data security. Some of these regulations include HIPAA, SOX, GDPR, and PCI-DSS, all of which mention protecting data in transit.

Best Practices for Managing Network Certificates

Always use a reputable Certificate Authority for certificate issuance to ensure certificates are reliable and widely recognized. Some recognized CAs are DigiCert, Entrust, GlobalSign, or, for something free, Let's Encrypt.

Expired certificates can disrupt network access, so implementing automated renewal processes ensures continuous security. Having dealt with this, I can tell you that scrambling to renew a certificate while customers are without access is stressful. Regularly audit certificate usage and validity to identify any expired or unused certificates that could become security vulnerabilities.

Network certificates are a fundamental part of network security, providing authentication, encryption, and trust across network connections. By verifying user and device identities and encrypting communications, network certificates protect organizations from data breaches, cyberattacks, and unauthorized access. Implementing and maintaining a robust certificate management process is essential for safeguarding sensitive data and ensuring trusted interactions across the organization’s network.

Recognizing Business Email Compromise and its risks

Nov 5th, 2024

Business Email Compromise (BEC) is a sophisticated type of cyberattack that targets organizations by socially engineering employees into taking actions that help the attacker, typically transferring funds or sharing sensitive information. BEC attacks leverage trust, urgency, intimidation, and authority to deceive recipients, making them one of the costliest forms of cybercrime today. Understanding BEC and how to defend against it is pivotal for organizations aiming to protect their business assets and data.

But what is Business Email Compromise?

BEC attacks often involve attackers impersonating a trusted person within the organization, such as a CEO, manager, or vendor, or compromising an actual employee's email account. Once trust is established, attackers use phishing techniques to manipulate employees into taking actions like wiring funds or changing payment information on invoices.

Unlike traditional phishing attacks, which rely on web links or malicious attachments, BEC attacks often involve thoughtfully crafted emails and do not present themselves with any typical signs of being a phishing attempt. Attackers do their research, gathering information about a company's operations, team hierarchies, and financial protocols to make their requests seem plausible. They may also time their emails for maximum impact, such as during peak work hours or when key executives are traveling, making it more likely employees will comply without verifying the request.

What does the attack look like?

Attackers can impersonate a CEO or other executive, often asking finance teams to urgently transfer funds or make payments. You should stop and think: does the CEO really want me to send him a gift card in the middle of the day?

An employee’s email account is hacked and then used to request payments from clients or internal departments, with funds redirected to accounts controlled by the attacker. Ask, always ask if the request is anticipated.

Attackers impersonate vendors, asking for outstanding payments but directing funds to new or different accounts under their control.

In high-stakes or confidential situations, attackers pose as attorneys or legal representatives, pressuring employees to act swiftly without verification. Some of these interactions prey on admin and IT teams' desire to help and their sense of urgency.

Attackers target HR or finance departments to obtain sensitive information like employee tax data, which can then be sold or used for identity theft.

Why BEC is So Effective

BEC attacks succeed because they exploit human psychology rather than relying on technological weaknesses. Attackers understand that employees want to act promptly, especially when responding to executives or legal authorities. BEC messages often use urgent language, emphasizing confidentiality or time-sensitivity, which reduces the likelihood of thorough scrutiny.

Preventing Business Email Compromise

MFA significantly reduces the chances of account compromise by adding a verification step that attackers are unlikely to bypass.

Regularly educate employees on recognizing BEC attacks, including how to identify unusual requests and verify communications before acting.

Encourage employees to use out-of-band verification, such as calling a known number or contacting a manager directly, before fulfilling requests involving financial transactions or data transfers. This means sometimes just giving an end user a phone call directly or a zoom call to see them online.

Be cautious about publicly sharing information about team roles, travel plans, and vendor relationships, as attackers often use these details to craft convincing BEC emails. Attackers who gain access to email accounts may set up automatic forwarding to external addresses to monitor communications. Regular audits can detect and block this activity.

Business Email Compromise is a costly and growing threat that requires vigilance and a proactive approach to prevent. By educating and making employees aware, verifying requests, and implementing security measures like MFA, organizations can significantly reduce their risk of falling victim to BEC. As BEC tactics evolve (as they always do), staying informed and reinforcing defenses are essential steps toward safeguarding the organization's financial and data assets.

Securing Your Organization with Conditional Access Policies

Nov 4th, 2024

Conditional Access is a dynamic security framework that evaluates specific conditions, such as user identity, device health, location, and real-time risk, to determine whether to grant, restrict, or require additional verification for access. By implementing CA policies, organizations create a flexible, security-first approach that meets the demands of today's digital workplace while protecting sensitive data. This is often utilized within Microsoft environments to help protect legitimate user accounts from tampering.

So, what is conditional access?

Conditional Access is a set of rules applied to access requests for applications, data, and network resources. Often managed through identity platforms like Microsoft Azure Active Directory, CA enables organizations to enforce specific requirements for access. For instance, access can be allowed if users meet certain conditions, such as using compliant devices, signing in from trusted locations, or passing multi-factor authentication (MFA). This approach tailors access policies to match different security needs, making it ideal for keeping environments with diverse devices, roles, and work locations adequately protected.

What are the key benefits of conditional access?

CA helps prevent unauthorized access by enforcing security requirements based on real-time context. For example, if an employee logs in from an unfamiliar location, CA policies can require MFA or restrict access entirely. By adapting access requirements based on conditions, organizations reduce the risk of breaches from compromised credentials, phishing attacks, or unauthorized devices being present on the network. Real-time risk assessments in CA, such as Microsoft Identity Protection, detect anomalies and apply security policies instantly to keep those resources safe.

In today’s work from home or anywhere work environment, employees may log in from various locations and devices across the globe. CA allows organizations to secure remote access by enforcing specific requirements, such as ensuring devices are compliant with company security standards or limiting access to known trusted networks. These are often implemented by managing devices through Microsoft Intune and then applying policies to them. This balance of security and accessibility helps remote teams work productively while safeguarding sensitive resources.

CA can assist organizations in meeting regulatory requirements by enforcing policies that protect sensitive data and ensure controlled access. Regulations like GDPR, HIPAA, and PCI-DSS mandate secure data handling, and CA’s ability to restrict access based on location, device compliance, or identity verification is key to maintaining compliance. For instance, CA can enforce MFA for all access to sensitive data or limit access based on regional regulations, ensuring adherence to legal standards.

Conditional Access improves the user experience by only requiring additional authentication when risk conditions warrant it. For example, an employee logging in from a known device and location may not need MFA, but the same login from a different location would require it. By applying security measures dynamically, CA minimizes disruptions, letting users work without repeated verification while maintaining high security.

What are some Best Practices for Implementing Conditional Access?

Start with Clear Policies. Define security requirements based on user roles, device types, and access scenarios to target the highest-risk areas effectively.

Use risk-based policies. These enable CA policies to trigger additional steps, like MFA, only when necessary, such as during unusual login activity.

Review and adapt. Adjust CA policies to changing security needs and refine access requirements as the organization grows.

Conditional Access is a powerful tool that combines flexibility and security for today’s evolving workplace. By enforcing access requirements dynamically based on user and device context, organizations can secure their resources, support remote work, and meet compliance needs without compromising the user experience. With the increasing sophistication of cyber threats, Conditional Access is essential for organizations dedicated to securing their digital environments.

How Content Filtering helps with security and productivity

Nov 3rd, 2024

Controlling the flow of end-user content is essential to having a productive, secure, and compliant environment. Content filtering and monitoring is a tool for controlling inbound and outbound data that helps IT teams prevent unwanted or harmful content, like spam, malware, and inappropriate material, from being on the network. By using keyword matching, URL categorization, and reputation-based filtering, your organization can minimize those risks. So why content filtering?

Content filtering offers significant benefits across several key areas. By blocking spam, phishing attempts, and malware-infected links, it reduces the risk of breaches and malware, sparing your organization the costly recovery effort. With many security gateways, AI can even improve on this by learning what is normally seen in your environment and proactively protecting your users. Microsoft 365 has this sort of AI working in the background in the form of user and entity behavior analytics (UEBA).

You can enhance user productivity by blocking access to non-work-related content such as social media or streaming sites. This in turn helps employees stay focused and productive during normal working hours. Although this can be helpful, you always must balance productivity with convenience. Every organization is different depending on what risks they are willing to accept and how end users react.

Many industries are governed by regulations requiring secure data handling. Content filtering helps prevent accidental data leaks and ensures your organization is adhering to standards, like HIPAA in healthcare and PCI-DSS in payment processing. Normally this is done through content filtering for outbound or egress data. You always want to filter both in and out of your organization to prevent even accidental storage of such data.

Blocking inappropriate or harmful content helps create a respectful, professional workspace. This filtering protects the organization’s reputation and ensures compliance with workplace standards. You don’t want to get caught in that situation of finding material that goes against company policy, so set content filters so that isn’t even a thing to begin with.

Key Content Filtering Features

A robust content filtering solution goes beyond basic keyword blocking. Real-time filtering: constant monitoring is essential to keep up with evolving threats. Granular control: organizations can tailor filtering policies by department or role, providing necessary flexibility for productivity and security.

Content categorization: by categorizing sites and emails, organizations can apply specific filtering rules to different types of content. Allowlist and blocklist options: filtering solutions allow trusted sites to be allowlisted while blocking known risky domains.

Keyword and Phrase Matching: Content filters can prevent sensitive data leaks by flagging or quarantining emails with specific keywords. This is normally achieved by utilizing regular expressions to find exact or similar matches for words or phrases.
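As a toy illustration of the regex idea (real filtering gateways are far more sophisticated), a pattern scan over outbound message text might look like this:

# Flag messages containing the word "confidential" or anything shaped like a US SSN
grep -Ei 'confidential|[0-9]{3}-[0-9]{2}-[0-9]{4}' outbound-message.txt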

Filtering effectively means implementing those controls

Setting clear policies helps to define acceptable usage and filtering rules aligned with security and compliance needs. These should always be reviewed and accepted by senior leadership. Customize for roles, tailoring filtering policies to match the needs of different departments. Not everyone needs the same blanket controls; often that only hampers certain departments (mostly security and IT, who need to be able to view things to determine issues or their causes).

Update rules regularly: reviewing the policies in place is super important as cyber threats evolve. Like they say, there is no rest for the wicked. Monitor your logs, too; reviewing content filtering logs can reveal access attempts or emerging security concerns. You want to get ahead of that and then start to educate employees on the risks. Training on content filtering's purpose and policies can help employees recognize credential harvesting attempts and understand secure content use.

Content Filtering and the future workforce

As remote work grows, cloud-based filtering solutions offer flexibility and scalability, protecting employees wherever they are. AI enhancements are also helping content filters detect complex threats more accurately, reducing false positives.

Content filtering is critical for ensuring security, productivity, and compliance. Implemented thoughtfully and tested carefully, it creates an effective shield, allowing employees to work safely and efficiently without unnecessary distractions or risks. As digital threats continue to grow, content filtering remains an essential aspect of defense for your organization and every other.

Cybersecurity Awareness Month: Ghost Prompting

Oct 21st, 2024

Just in time for Halloween and Cybersecurity Awareness Month, imagine this: It's a dark and stormy night. Your phone suddenly buzzes with an MFA request. Then another. And another! Eventually, feeling overwhelmed by this eerie persistence, you approve one to make them stop. But just like letting a vampire over the threshold, that approval grants the attacker access to your account, and the haunting is complete. These digital specters are especially fond of stalking high-value corporate targets, where the treasure behind the door is worth the effort. You can call this type of attack MFA ghost prompts: deceptive or fake Multi-Factor Authentication requests that haunt users' devices like digital poltergeists.

What Are MFA Ghost Prompts?

MFA ghost prompts appear out of nowhere, fooling users into believing they’re authenticating a legitimate service. However, lurking beneath these ghostly requests is a malevolent scheme: hackers trying to gain unauthorized access to accounts. These chilling prompts seem innocent on the surface but serve as conduits for cyber trickery.

How Do These Ghosts Haunt Users?

Initial Access: Attackers begin by creeping into a user's account, often through stolen credentials or crafty social engineering. MFA Bombing: Once inside, the attacker unleashes a barrage of MFA requests, haunting the user like endless knocks on a creaky door. Frustrated or startled, the user may approve one just to stop the constant interruptions. Exploiting Familiarity: Users place their trust in MFA as a guardian spirit protecting their accounts. But ghost prompts prey on this trust, tricking users into granting access to their very own accounts.

Why Do These Digital Ghosts Work So Well?

Users often report that they approved a phantom prompt to escape the torment. This is called "MFA fatigue", as attackers act like an ominous curse, relentlessly notifying users of their illegitimate access attempts. Users have also mentioned that they are so used to trusting their MFA prompts, they thought they were simply authenticating their own activities. Outside of these two examples, many users simply don't realize that multiple, unexpected MFA requests are the sign of a haunting, and they let the ghost in through the door.

How to Ward Off These Ghosts

Regular security awareness training - Teach users to recognize the signs of a digital haunting—unexpected MFA requests are often an evil omen. Reject or report unfamiliar requests - Users should be vigilant and reject any spooky MFA requests they didn’t summon themselves.

Easy GitHub Commands

Nov 20, 2017

So I always thought it was pretty intimidating working with GitHub. I figured I would need to learn a whole bunch of commands and spend many frustrating hours figuring out what I was doing wrong. Then I came upon this Codecademy video on YouTube that was less than 6 minutes long and helped me break through that. Here is what I learned.

First you will want to make sure that you create a repository for your project and name it however you would like. Then you will want to get into the CLI and navigate to somewhere you would like to download your project or repo to. The command you will want to use to download your repo into said directory is
git clone https://github.com/youruser/nameofyourrepo

Once you have cloned your repo, the directory is already initialized as a git repository, so you can work in it right away. (git init is only needed when you are starting a brand-new project from scratch that you want to eventually push up to GitHub.)

Go ahead and make some changes. In my case I have a github.io website set up with a single index file that I make updates to here. Once you are satisfied with your changes, you will want to stage everything you changed so git knows what should eventually be committed. The command for this is git add . where the dot, or period, indicates the whole directory, so whatever changes are made in the entire directory will be staged for your next commit.

Next I always use git status to check what files will be committed and what changes were made. This helps me to understand if I have made the correct code changes or if something should not be committed to the repo.

Now comes the fun part: we will commit our changes with a message describing them. To attach a message to your commit you will use git commit -m "your message here"

After this you can push your changes up by typing git push -u origin master and you will likely be prompted to enter your username and password. That is about it.
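Putting the whole loop together, a typical session looks like this (the repo URL and commit message are placeholders):

# One full edit-and-publish cycle
git clone https://github.com/youruser/nameofyourrepo
cd nameofyourrepo
# ...edit index.html or whatever files you are changing...
git add .
git status
git commit -m "Update index page"
git push -u origin master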