I often get asked to share examples of the types of alerts we send to clients. I work on the front lines here at Digital Shadows (now ReliaQuest) as a Sales Engineer, and I see a wide variety of alerts being sent to our clients on a daily basis. Most, thankfully, require simple but nonetheless necessary remediation – a minor settings tweak or an email to another department informing them that the Tweet they just posted was a little too forthcoming with sensitive company information.

Others take a little more work to resolve.

In this blog series, we’ll share some tales from the front lines – keeping client names anonymous, of course. We’ll investigate some of SearchLight’s most impactful findings, and more importantly, shed light on how our customers are using the alerts we provide them to make a tangible impact on the security of their organizations.


Today’s tale is one of data leakage detection and third-party exposure. It began in our internal analysis platform, where our analysts triage mentions of customer assets before they ever become alerts in a customer portal. Initially, this looked like a routine commit to GitHub from a company email address – a common occurrence with a straightforward remediation. A quick Google search for “GitHub sensitive data” gives an idea of the scale of the problem: you’ll find frenzied StackExchange (et al.) posts looking for advice, alongside reports from security commentators on the latest large organization to be compromised after inadvertently leaving the keys to the kingdom publicly available.

An example technical leakage alert in the SearchLight (now ReliaQuest’s GreyMatter Digital Risk Protection) platform.


The internal alert came through with a match for the committer email, a customer domain, and the string “pwd” near what looked like a cleartext password – again, nothing too out of the ordinary there. It would not be an exaggeration to say that we see user, network, database, and API passwords exposed on GitHub and other code repositories almost daily – in this instance, the password was for “localadmin”.
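To make the detection logic concrete, here is a minimal, hypothetical sketch of this kind of matching – pairing a customer-domain committer email with the string “pwd” next to a likely cleartext credential. The domain, patterns, and sample text are illustrative assumptions, not SearchLight’s actual implementation.

```python
import re

# Assumed customer domain for illustration only.
EMAIL_RE = re.compile(r"[\w.+-]+@example\.com")
# "pwd", up to a few separator characters, an optional quote, then a
# credential-looking token of 6+ non-whitespace characters.
PWD_RE = re.compile(r"pwd\W{0,3}[\"']?([^\s\"']{6,})", re.IGNORECASE)

def flag_commit(commit_text):
    """Return match details if a customer email appears alongside a credential."""
    email = EMAIL_RE.search(commit_text)
    pwd = PWD_RE.search(commit_text)
    if email and pwd:
        return {"committer": email.group(0), "credential": pwd.group(1)}
    return None

sample = 'committer: dev@example.com\nlocaladmin pwd = "Sup3rS3cret!"'
print(flag_commit(sample))
# → {'committer': 'dev@example.com', 'credential': 'Sup3rS3cret!'}
```

Real-world scanners are, of course, far broader than this – matching many keywords, key formats, and file types – but the principle is the same: correlate an asset you monitor (the email domain) with an indicator of exposure (the credential pattern).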

One of our analysts on the threat intelligence team wrote up and published this alert to the customer. On the next regular review call we held with the customer, they shared their satisfaction with us for having detected the exposure and for alerting them to it. They informed us that they had successfully removed the offending content. Usually, this would be the end of the story – the ideal outcome for us is that a client, using the information we pass to them, is able to remediate an exposure before it becomes a risk.

This time, it didn’t end there.

After the call, we tried to visit the repository on GitHub. It appeared as though the file had been deleted; however, as anyone familiar with version control will know, deleting a file is not enough.

We quickly got back in touch with the client to inform them that while they had removed the file from version control, its contents were still publicly available in the project’s commit history. The client was grateful when we shared advice on how to fully remove a sensitive file from GitHub (in GitHub’s own words, “it happens enough that we have a whole help doc on this”: https://help.github.com/articles/removing-sensitive-data-from-a-repository). We ended up updating the mitigation guidance included in our alerts to ensure this advice is included for future exposures on code repositories.
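It’s easy to demonstrate why deleting the file wasn’t enough. The sketch below (assuming `git` is installed; the file name and credential are made up) builds a throwaway repository, commits a “secret”, deletes it in a later commit, and then recovers it straight from history:

```python
import os
import subprocess
import tempfile

def run(*args, cwd):
    """Run a git command in the given directory and return its stdout."""
    return subprocess.run(args, cwd=cwd, capture_output=True,
                          text=True, check=True).stdout

repo = tempfile.mkdtemp()
run("git", "init", "-q", cwd=repo)
run("git", "config", "user.email", "dev@example.com", cwd=repo)
run("git", "config", "user.name", "Dev", cwd=repo)

# Commit a file containing a (fake) credential, then "delete" it.
with open(os.path.join(repo, "config.ini"), "w") as f:
    f.write("localadmin pwd = Sup3rS3cret!\n")
run("git", "add", "config.ini", cwd=repo)
run("git", "commit", "-qm", "add config", cwd=repo)
run("git", "rm", "-q", "config.ini", cwd=repo)
run("git", "commit", "-qm", "remove config", cwd=repo)

# The file is gone from the working tree, but its contents remain
# fully recoverable from the previous commit.
leaked = run("git", "show", "HEAD~1:config.ini", cwd=repo)
print(leaked)  # → localadmin pwd = Sup3rS3cret!
```

Actually scrubbing a secret requires rewriting history (GitHub’s help article covers tools for this), and – since anyone may already have cloned the repository – rotating the exposed credential regardless.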

As this example illustrates, organizations face numerous points of exposure arising from digital transformation and a growing number of systems and services that sit outside their environments – and often outside their sphere of visibility. Whether these external services are sanctioned or not, employees (not to mention third-party developers, who seem to be the worst culprits) are likely using them, potentially exposing sensitive information.

And who can blame them? Code repositories enable agile development through flexible collaboration. Unfortunately, cybercriminals and other nefarious individuals are acutely aware of how useful these platforms can be, and they’re constantly on the lookout for anything that might give them a way into an organization.

An example of a technical data leak on GitHub.


So there you have it! That’s it for this first insider glance at some of the alerts we send to our customers, and how they help them to better secure their organizations.

The SearchLight (now ReliaQuest’s GreyMatter Digital Risk Protection) service allows our customers to continuously identify and mitigate risks arising from the exposure of sensitive information on GitHub, on other code repositories, and elsewhere across the open, deep, and dark web.

If you’re interested in learning more about how we help with detecting leaked sensitive data for our clients, check out the link below.

Data Loss Detection Overview