
How a bug in Visual Studio 2015 exposed my source code on GitHub and cost me $6,500 in a few hours


You can read all the updates in detail at the bottom of the post.




As a developer with a number of years' experience, I didn't think it likely that I would be the victim of a data breach.

But a simple bug in Visual Studio meant that source code that was destined for a secure and private source code repository was instead published to a public repository. What followed was a sequence of events which left me with a $6,500 bill.


Some background, and the bug that started it all

I recently upgraded to Visual Studio 2015, and have been impressed with all the new features on offer. One of the newer features is Git integration in Visual Studio which allows you to easily commit your changes to a local Git repository. You can also sync your local repository to a remote GitHub repository.

I am a paid GitHub subscriber, which allows me to create private repositories - something I have been doing for a while with the help of the Git command line. With the new GitHub integration features in Visual Studio, I decided to depart from my usual command-line workflow and instead use Visual Studio to commit one of my local Git repositories to a new private GitHub repository. Visual Studio lets you do this with a few clicks, and even lets you create a new private repository if one doesn't exist yet.

Have a look at the screen below.

Pretty simple and straightforward, right? Nothing looks suspicious here, and there's no reason to believe that your code would be exposed to the public - after all, Visual Studio, a trusted platform I have used for well over 10 years, explicitly asks you if you want to create a private repository.

An expensive Visual Studio Bug

So I went ahead and published my HumanKode lib repository through this method and suspected nothing untoward. The code was synced in less than a minute and I continued working.

Ten minutes later I got an email from Amazon.

The subject line read "Your AWS account is compromised". The body of the mail detailed that an access key had been exposed through the GitHub repository I had just created.

How could this be? I created a private repository. To my dismay, I discovered that the repository had been created as a public repository instead of a private one. Not only had my source code been compromised, but an Amazon access key for the Alexa Web Information Service, contained in a configuration file, had been exposed in the wild.

Damage Control

I logged on to AWS and checked my charges. Nothing there. Next, I changed my AWS root password, generating a strong new 30-character password with my password manager of choice, 1Password. Then I revoked all of my access keys and created new ones. Before logging off, I checked my billing again. A close call, or so I thought.

A few minutes later I got an email from Amazon confirming that I had deleted the exposed access key. By now it was close to 1 a.m., and I went to bed, thinking nothing more of the situation. At least it was only an access key for the Amazon Alexa Web Information Service, and it had been revoked within minutes.

But how could it be? Did I really make a noob mistake by publishing to a public repo?

Just to be sure, I created a new blank project in Visual Studio, checked in the changes against my local repository, and then synced it against a new GitHub repository by specifying a new repository and marking it as private. My suspicions were confirmed - it's a bug.

Step 2: Commit to the local repository

Step 3: Click Sync to bring up the option to create a remote repository

Step 4: Create a private repository on GitHub and publish to it

The bug: Visual Studio creates a public repository
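Since the dialog itself can't be trusted, one extra safeguard is to ask the GitHub API what visibility the repository actually ended up with. A small sketch of that check (the owner, repository name, and token handling are illustrative, not details from my setup):

```python
import json
import urllib.request

def is_private(repo_json):
    """Given the JSON object returned by GitHub's GET /repos/{owner}/{repo}
    endpoint, report whether the repository is actually private."""
    return bool(repo_json.get("private"))

def check_repo(owner, repo, token):
    """Fetch the repository record from the GitHub API and check it.
    `token` is a personal access token (required to see private repos)."""
    req = urllib.request.Request(
        "https://api.github.com/repos/%s/%s" % (owner, repo),
        headers={"Authorization": "token %s" % token})
    with urllib.request.urlopen(req) as resp:
        return is_private(json.load(resp))
```

Running `check_repo` immediately after publishing would have caught the bug within seconds, regardless of what the UI claimed.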


Wake up call

I woke up at around 7am with an email notification from Amazon saying thank you for signing up for Amazon EC2. I knew something wasn't right, so I immediately logged on to check my account. The balance was $1,700.

In a state of panic, I fired off a support email to AWS, outlining the sequence of events.

For the next half an hour I terminated Amazon EC2 instances in my region, but they kept popping up again shortly after I terminated them. I had never used Amazon EC2 before, so at this point I was completely unaware that there were at least 20 instances running in each region.

Forty minutes later I fired off another support request for assistance. And 20 minutes after that, having worked my way through to the right department, I finally managed to speak to an AWS support assistant on the phone. They asked me to change my root password, revoke all my keys, and kill all the instances, which I gladly did. Finally, the breach was under control, or so I thought.

Twenty minutes later my account was sitting at well over $3,000. Support notified me that there were still instances running, and that I had to kill the spot instances as well, not just the regular instances. By the time everything was contained, my final bill stood at about $6,500.
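With hindsight, the containment step that mattered was cancelling the spot requests first, so AWS would stop launching replacements, and only then terminating whatever was still running. A sketch of that cleanup using boto3, the AWS SDK for Python - the client methods are real EC2 API calls, but the per-region loop and error handling are left out for brevity:

```python
def cancel_spot_and_terminate(ec2):
    """ec2 is a boto3 EC2 client for one region, e.g.
    boto3.client("ec2", region_name="us-east-1"); call once per region."""
    # 1. Cancel open/active spot instance requests so nothing relaunches.
    spot = ec2.describe_spot_instance_requests()["SpotInstanceRequests"]
    request_ids = [r["SpotInstanceRequestId"] for r in spot
                   if r["State"] in ("open", "active")]
    if request_ids:
        ec2.cancel_spot_instance_requests(
            SpotInstanceRequestIds=request_ids)

    # 2. Terminate every instance that is not already on its way down.
    reservations = ec2.describe_instances()["Reservations"]
    instance_ids = [i["InstanceId"]
                    for res in reservations for i in res["Instances"]
                    if i["State"]["Name"] not in ("shutting-down",
                                                  "terminated")]
    if instance_ids:
        ec2.terminate_instances(InstanceIds=instance_ids)
    return request_ids, instance_ids
```

Terminating instances without cancelling the requests - which is exactly what I spent that half hour doing - just lets the attacker's spot requests spin up fresh ones.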


AWS Instances and Spot Instances




So how exactly did this happen?


How is it possible that my data was breached so quickly?


What could be done to prevent and mitigate this?


Some questions that remain unanswered

How could the hackers continue to use my account despite the fact that all keys were deleted and recreated with different credentials? (This has been answered at the top of the post.)

Why did Amazon take so long to respond? They were quick to pick up on the issue, but didn't send the necessary information to prevent any further damage. The right information early on, such as exactly how to stop all the instances, would have prevented all the costs entirely. I was terminating instances, but never knew about the spot instances that were still up and running, being billed by the minute. This is clearly something that has happened a few times before, so a better process here would save everyone time and money.

With Amazon SES access keys, there are clear limits in place. For instance, you can only send a certain number of mails a day, and to go beyond this level your account needs to be approved for each additional level. Why are these checks not in place for Amazon EC2, which can rack up charges of $5,000 in a day on a customer's account that would otherwise not be charged more than $1 a month for AWS services?


Some further reading and other people that have been affected in the past

Developers, Check Your Amazon Bills For Bitcoin Miners

Amazon AWS Account Hacking and How to Avoid it

How my Amazon S3 account was hacked with 10,776$ in billing


A Reflection on Data Security in the Cloud

There's a common saying: "The only way to completely secure your computer is to disconnect it from the internet and all networks". As more and more people store their personal and company data in the cloud, the risk of that data being compromised increases. The cloud can be a great offsite storage solution, but the very ease of accessing the data brings a huge risk of it being compromised. All you need is an access key. Compared to the way we stored data previously, that kind of access is much harder to come by. Sure, a few years ago we might have all stored our data in Visual SourceSafe on a private network. Lax access controls wouldn't expose your data, because the internal network was isolated from the internet. But with cloud-based source control, there really is a lot of risk of data being exposed somewhere along the line. Anyone, anywhere can access your data. All they need is a key.

So who's the victim here? Right now it's me. But in the long run, I fear that as more and more people and companies store their data in the cloud, these types of incidents will surely increase. This incident shows that it doesn't take a hacker with fancy tools to get access to data.

All they need is a single key. A single click of a button in a buggy UI, or an unintended click, can expose your data to the world in an instant. Most companies store source code and data in the cloud in one form or another, and with multiple developers having access to these repositories, we're going to see some significant data leaks in the future. That's a scary prospect for all of us.

One doesn't need to look far to see how Apple's iCloud service has affected non-technical people, as demonstrated by the recent celebrity accounts that were hacked. When it came to the hacked iCloud accounts, the common thread was that if you don't want your private pictures to be seen by others, don't upload them to the cloud - simple, right?

Will we take a different stance on company information, or will we reach the same conclusion in time: if you don't want your company's data to be at risk, don't upload it to the cloud?


Lessons learnt

Data breaches are becoming more and more prevalent in our daily lives. At face value one might say it's simple: don't publish your access keys to a public repository, which is what many before me have done. In my case, however, I specifically published to a private repository, but a bug in Visual Studio meant that the code was published to a public repository instead. As soon as it was out in the wild, it was too late. Bots scan GitHub repositories, and it only takes two or three minutes for some of them to pick up exposed keys.
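To get a feel for how trivial this scanning is: an AWS access key ID is a short, highly distinctive string, so a single regular expression is enough to pull candidates out of any public commit. A simplified sketch (not any specific bot's actual rule set):

```python
import re

# AWS access key IDs are 20 characters: a four-letter prefix such as
# "AKIA" (or "ASIA" for temporary credentials) followed by 16 uppercase
# alphanumerics. Anything matching this in a public commit is worth
# checking, which is exactly what the bots do.
AWS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_aws_key_ids(text):
    """Return any strings in `text` that look like AWS access key IDs."""
    return AWS_KEY_RE.findall(text)
```

Run that over the firehose of public GitHub events and a leaked key surfaces within minutes, which matches how quickly my key was exploited.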

I am certainly not innocent here, and some mistakes were made on my part. When working with sensitive information you can never be too careful, and this is where I assumed something would work a certain way when in fact it didn't. The result, whilst costly, could have been much worse.

Security should always be a multi-layered approach. To this end, excluding configuration settings from GitHub would have prevented the AWS charges - and this is certainly the approach I will take from now on.
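In practice that means keeping files with credentials out of the repository entirely, for example with a .gitignore along these lines (the file names are illustrative):

```
# Keep credentials and local configuration out of source control
*.pem
.env
secrets.config
appSettings.private.json
```

A committed `secrets.config.example` with placeholder values can then document the expected settings without exposing the real ones.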

Sure, I shouldn't have checked the keys into a repository, but even without the keys, sensitive information would still have been exposed. Moreover, once the code was breached, it took a long time to contain the issue, when it should have been a simple click of a button in the AWS console to stop it.

The AWS console is not exactly easy to use, even for experienced developers, and one often has to resort to documentation just to do something as simple as disabling accounts, creating accounts, or finding out which processes are running. If your account is hacked and you are being charged by the minute, you don't exactly have time to go through piles of documents. A maximum daily spending limit would also go a long way towards preventing this sort of thing from happening again. Cloud accounts with infinite limits are very dangerous and should not be enabled at all. A few small changes on Amazon's part could make a big difference in preventing future attacks like this.
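Until such limits exist, the closest substitute is an early warning: a CloudWatch alarm on the account's estimated charges. A sketch using boto3 (it assumes billing alerts are enabled for the account and that `sns_topic_arn` points at an SNS topic you have already set up; billing metrics live in the us-east-1 region):

```python
def create_billing_alarm(cloudwatch, sns_topic_arn, threshold_usd):
    """cloudwatch is a boto3 CloudWatch client for us-east-1, e.g.
    boto3.client("cloudwatch", region_name="us-east-1")."""
    cloudwatch.put_metric_alarm(
        AlarmName="estimated-charges-over-%d-usd" % threshold_usd,
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,            # billing data updates every few hours
        EvaluationPeriods=1,
        Threshold=float(threshold_usd),
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[sns_topic_arn],  # notify this SNS topic on breach
    )
```

It doesn't stop the instances, but a notification when charges pass, say, $50 would have woken me up hours earlier than Amazon's "thank you for signing up for Amazon EC2" email did.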

Finally, as developers, we need to be aware of best practices when it comes to pushing source to the cloud. Things are not exactly set in stone yet; we are making up the rules as we go along.

Ultimately this could make for an interesting case study, and I hope that it raises awareness of the potential dangers of version control in the cloud, especially when used in conjunction with limitless cloud accounts like AWS.

There is a patch available for the Visual Studio Git extension



September 1, 2015: Scott Hanselman, Phil Haack, the GitHub team and Microsoft have all been in contact with me about this. Contrary to what the title of this post suggests, this is actually a problem with the GitHub extension that ships with Visual Studio 2015. The GitHub Extension for Visual Studio is an open source project primarily maintained by GitHub, but it was initially jointly developed with Microsoft. There is a fix available for Visual Studio here. According to the original issue, the API no longer allows creating new public repositories for older versions of the extension, which prevents people using older versions from running into this problem.

September 2, 2015: Some details on the sequence of events that led to this charge. Root access keys on my account were never enabled; the key that was compromised was an individual IAM key. In the few minutes that my code was exposed, hackers managed to launch at least 120 spot instances on Amazon EC2. As soon as I was notified of the breach, I revoked all access keys and created new ones. However, as the spot instances were already launched, the charges kept ticking away. It was only several hours later that I managed to shut down all the spot instances across all the regions, with the help of Amazon AWS support. (Read more about this further down the post.)

September 2, 2015 (2): Michael Needham, Principal Solutions Architect on the AWS team in South Africa, reached out to me today to lend a helping hand and offer some support. He also pointed out that Amazon AWS accounts are not limitless out of the box, as there are default instance 'soft' limits in place, which can be increased on request. The most common limit is 20 per instance type, per region, though it does vary by instance type. This still leaves room for a huge bill to rack up in hours, but it's important to note that accounts are not limitless by default, contrary to what I said earlier in the post, which gives customers a degree of built-in protection.


AWS Default Instance Limits

September 3, 2015: Amazon has indicated that they will refund the full amount. The process could take several business days.

September 7, 2015: Amazon refunded the whole amount.
