Securing 3rd Party Contractor Access to IT/OT with Remote PAM
This cyberattack narrative, which unfolded a few years ago, is an everyday reality for organizations that fall victim to malicious actors: perpetrators seeking inglorious trophies, extorted bitcoin, or patriotic recognition from belligerent governments.
It was recounted by the Chief Information Officer (CIO) of an organization with several thousand employees that operates critical services. While anonymized, the account is entirely truthful.
The human angle of this crisis is its common thread, one that receives far less attention than the financial impact or the technological challenges and stakes involved.
Why this approach? At Systancia, we constantly focus on the human dimension at the core of everything we do. It’s not always simple, it’s not always the quickest path, and we don’t always succeed, but it remains our guiding principle.
This story shows that whatever the technology involved, the criticality of the crisis, or the scale of the attack, it is the men and women of the IT team who ultimately put out the fire.
What’s striking first when this story unfolds is the level of preparation and foresight. Just a few months before the attack, a full-scale rehearsal had allowed them to refine the processes and ensure everyone was ready.
The organization is well-equipped, using both a Security Operations Center (SOC) and Endpoint Detection and Response (EDR) tools.
The weak point lies with the service provider, something discovered only after the fact. In the moment, there can be a sense of helplessness when trying to ensure essential security practices are being followed. This form of outsourcing is, in fact, a frequent source of major attacks, a point strongly emphasized by NIS2.
Without dwelling on them, since we deploy them and they’re not the focus of this article, there are also tools specifically designed for controlling and tracing service providers’ access.
For this organization, the identified cause was straightforward: the same password was used for all of the service provider’s clients, coupled with a complete absence (or disregard) of a robust password policy. After the cyberattack, a representative from the company, who was on-site and couldn’t access a specific system, was left red-faced when an incident response team member provided him with the password after cracking it in just a few minutes.
This was the cyberattack's point of entry.
The EDR, managed by the SOC, did detect an anomaly a few days before the IT system was encrypted. However, it was configured to raise this as a simple alert, itself a significant misconfiguration. The SOC analysts didn't act on it, even though the alert repeated multiple times.
It wasn't this alert that ultimately warned the internal teams. Of course, the attack came at the most inconvenient time: the weekend. The on-call user support team was alerted by weekend users who found an attacker's message on their screens. There was no gradation; within minutes, the system was encrypted via the hypervisors. The directory was almost the only element saved, and it was subsequently recovered by the incident response teams.
The CIO quickly arrived on-site with a colleague. Convinced that every second counted, they immediately took action, some of it unfortunately already in vain, like unplugging the network. Even that wasn't immediate, due to an issue with server blade labels: a minor flaw that becomes incredibly frustrating at a critical moment.
The board was informed, and the CEO repeatedly asked the CIO: “Are you certain this is a cyberattack?”
The initial reaction was shock and disbelief at all levels. Then came near-total dedication from everyone to confront it, and a very strong cohesion in the face of adversity.
Some people truly step up in such situations. The CIO recalls one employee who, without making a fuss or showing off excessive enthusiasm, just did their job. No one imagined they’d become a pillar of the crisis. Conversely, the absence of IT services allowed others to maintain complete discretion.
Hours no longer mattered. Adrenaline, stress, and even excitement (the word was actually used in the conversation) predominated. While a cyberattack is everyone’s nightmare, certain mechanisms kick in: the brain adapts, defends itself, rationalizes, accepts the situation, and then pushes the levers that drive engagement.
For 15 days, there was no fatigue despite the lack of sleep. They quickly had to switch to a new mode, freeing up evenings so everyone could get some rest.
Reactions to this shock varied widely, even leading to improbable situations. One effective team member, tasked with repeating a priority task on a large scale to re-equip employees, simply couldn’t do it under the stress. He no longer knew how to perform something he had done countless times. It became crucial to manage each individual situation, taking the time to understand and adapt their assignments as needed.
The organization had prepared procedures and prioritized services for restoration. This was undeniably helpful. However, in practice, some of it wasn’t applicable given the reality of the situation. “This” service is prioritized? No, that one is far more critical.
What’s most admirable is the ingenuity deployed for unforeseen situations not covered by any procedure. This included urgently connecting certain user teams via autonomous networks, and the design, implementation, and deployment of a new type of ultra-modern computer: “the typewriter.” These were machines with no internal network access, built as “trusted” with the absolute bare minimum, and equipped with a 4G key for connectivity.
The core business system was down, and it couldn’t be fully replicated by a paper-based procedure. For instance, they still needed to create unique identifiers, which meant resorting to a good old-fashioned spreadsheet.
Sustaining the effort beyond the initial fifteen days meant, first and foremost, planning for what came next: the optimal reconstruction scenario. This involved weighing the pros and cons of recovering from backups versus starting from scratch, a discussion beyond the scope of this article. However, it was crucial to return to security basics and be uncompromising on that front, even with the frustration it might cause users.
It’s hard not to be deeply affected by such a situation. A psychologist was available for those who wished and also made some visits.
The executive leadership established a weekly “coffee meeting” ritual for the IT teams, which continued for several months. These were moments of relaxation, presence, and support.
The conclusion is clear: the human impact of these human errors is not fully measurable. The prolonged crisis significantly worsened the situation. In the first few weeks, professional dedication led to personal sacrifices, exacerbating already fragile personal situations. Similarly, some employees found themselves unable to readjust to a normal, calmer routine once the crisis subsided.