We all face the problem of growing amounts of evidence on a regular basis. Improving raw acquisition speed is one way to limit the impact of this, and Evimetry has been consistently delivering the fastest acquisition speeds bar none since we launched two years ago.
Yet we aren’t the only solution claiming to be the “fastest” or have “unparalleled” speeds.
The following graph shows the acquisition rate of a 1TB Samsung 960 Pro NVMe drive. We used Evimetry to undertake linear acquisitions to 4x Samsung 512 GB 860 Pro SSDs as striped images, using a 6-core Xeon-D CPU. The variable is drive utilization: we started with an empty (TRIM’ed) drive, then filled it with a Windows 10 OS install and a corpus of common corporate documents and video. These figures don’t account for verification time.
We can acquire an empty 1TB NVMe drive in 4 minutes 52s. That’s a rate of 200 GB/m, or 12 TB/h. No other product comes close to these speeds.
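A quick sanity check of these figures (treating 1 TB as 1000 GB; the helper function name is ours, for illustration only):

```python
# Back-of-the-envelope check of the quoted acquisition rates, using only
# the times stated in the text (1 TB treated as 1000 GB).
def rate_gb_per_min(size_gb: float, minutes: float, seconds: float) -> float:
    """Average acquisition rate in GB/minute for a drive of size_gb."""
    return size_gb / (minutes + seconds / 60.0)

empty = rate_gb_per_min(1000, 4, 52)    # empty drive: 4m52s
forty = rate_gb_per_min(1000, 7, 48)    # 40% utilized: 7m48s
full = rate_gb_per_min(1000, 12, 57)    # 95% utilized: 12m57s

print(f"empty: {empty:.0f} GB/min ({empty * 60 / 1000:.1f} TB/h)")
print(f"40%:   {forty:.0f} GB/min")
print(f"95%:   {full:.0f} GB/min")
```

The 4m52s figure works out to roughly 205 GB/min, consistent with the rounded 200 GB/m (12 TB/h) quoted above.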
In the real world, suspect drives contain data rather than empty runs of 0x00, and Evimetry’s acquisition speeds depend on how much actual data is stored on the suspect drive. For a drive that is 40% utilized, it takes us 7m48s (still faster than anyone else’s claim), and at 95% utilized it takes us 12m57s.
In the absence of substantiation from other quarters, we remain confident that we offer the fastest acquisition solution available today. We encourage you to do your own validation of both our results and the claims of other tools.
I prefer to make a Python virtualenv specifically for working with Volatility. In this example, I’m using MacOS with brew for my Python (the Python shipped with MacOS is broken with regard to pip’s TLS authentication). Hence the -p argument.
virtualenv -p /usr/local/bin/python volmem
Install all the dependencies with the following (the last two aren’t strictly necessary, but prevent a load of complaints from Volatility).
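As a sketch of what that install looks like, assuming a typical Volatility 2 dependency set (this package list is our assumption, not necessarily the exact list from the original setup):

```shell
# Activate the virtualenv created above, then install dependencies.
# NOTE: this package list is an assumption based on Volatility 2's
# commonly required modules; adjust to match your plugins.
source volmem/bin/activate
pip install distorm3 pycrypto yara-python
pip install openpyxl ujson   # not strictly necessary, but quiets warnings
```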
This works equally well for newer kernels with kernel address space layout randomisation (KASLR). To test this, I created a new Volatility profile for kernel 4.10 on Ubuntu 16.04.4 per the instructions at https://github.com/volatilityfoundation/volatility/wiki/Linux . You can see below the output of the linux_bash plugin run against a VM that I first used to generate the profile and then used as the target of acquisition using the Evimetry live agent.
If you can’t find a profile, and haven’t done it before, I’d encourage you to give it a go. It is extremely easy to create a new one (especially using VMWare, as it breezes through the install of the target Linux OS). All up it took me about 5 minutes to install Ubuntu 16.04.4 and create a profile for it. Don’t forget to go the extra step of contributing the new profile back to the community (as I did here).
For a long time now, operating systems such as Windows and MacOS have prevented user space applications from accessing the raw physical memory of the machine. Physical acquisition and virtualisation approaches aside, this has led the field to require the use of kernel drivers to export physical RAM for acquisition. In the Linux realm, Joe Sylve’s LiME is the go-to for many.
It does not appear to be widely known that on Linux x64, acquisition of physical memory is possible without a driver such as LiME. The prerequisite is that /proc/kcore is enabled, which fortunately is widely the case: Ubuntu ships with it enabled by default, as does Red Hat. On x64 the full physical address space is mapped into the kernel address space, and /proc/kcore exports this as part of its virtual ELF file view.
Fun fact: /proc/kcore is big: 128 TB.
bradley@ubuntu:~$ ls -lh /proc/kcore
-r-------- 1 root root 128T Jun 8 18:44 /proc/kcore
You don’t want to acquire all of /proc/kcore, just the relevant parts.
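The relevant parts can be located from /proc/kcore’s ELF structure: it is an ELF64 core file whose PT_LOAD program headers map kernel virtual addresses to file offsets. The following minimal parser is a sketch of that idea (the function name is ours; a quick `readelf -l /proc/kcore` shows the same information):

```python
import struct

def load_segments(elf_bytes: bytes):
    """Parse an ELF64 header and return (vaddr, file_offset, size) for
    each PT_LOAD segment. A minimal parser, for illustration only."""
    assert elf_bytes[:4] == b"\x7fELF", "not an ELF file"
    assert elf_bytes[4] == 2, "expected ELF64"
    # ELF64 header: e_phoff at 0x20, e_phentsize at 0x36, e_phnum at 0x38
    e_phoff, = struct.unpack_from("<Q", elf_bytes, 0x20)
    e_phentsize, = struct.unpack_from("<H", elf_bytes, 0x36)
    e_phnum, = struct.unpack_from("<H", elf_bytes, 0x38)
    segments = []
    for i in range(e_phnum):
        off = e_phoff + i * e_phentsize
        p_type, p_flags, p_offset, p_vaddr, p_paddr, p_filesz = \
            struct.unpack_from("<IIQQQQ", elf_bytes, off)
        if p_type == 1:  # PT_LOAD
            segments.append((p_vaddr, p_offset, p_filesz))
    return segments

if __name__ == "__main__":
    # On a real Linux system (as root), the first few KiB of /proc/kcore
    # contain the ELF header and program headers.
    try:
        with open("/proc/kcore", "rb") as f:
            data = f.read(65536)
        for vaddr, offset, size in load_segments(data):
            print(f"vaddr={vaddr:#x} offset={offset:#x} size={size:#x}")
    except OSError as e:
        print(f"could not read /proc/kcore: {e}")
```

An acquisition tool then reads only the mapped ranges, rather than the full 128 TB sparse view.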
Acquisition via this technique is something that Rekall pioneered, as far as I know (please correct me if you know better). Evimetry supports the technique in our live agent for remote acquisition. The following serves as a short howto on acquisition using currently available tools.
How to acquire: Evimetry
Copy the Evimetry linux liveagent (x64) onto the suspect Linux host, along with its security certificates. Run the agent with the IP address of a Controller or a Dead Boot or Cloud agent as the destination:
root@ubuntu:~# ./evimetry.agent 192.168.189.1
Evimetry Lightweight Agent v3.0.8, a lightweight forensic acquisition agent.
Application IP Address: 192.168.189.207
Application IP Address: fe80::20c:29ff:fed7:3540
Application MAC Address: 00:0c:29:d7:35:40
Memory Size: 971.6MiB
Memory Allocation Alignment Size: 4096
Runnable IO threads: 1
Starting device enumeration
/dev/sda [20.0GiB] : VMware_Virtual_S
/dev/sda1 (ext4) [19.0GiB] /
/dev/sda2 (unknown) [975.0MiB]
/dev/sda5 (swap) [975.0MiB]
/dev/sr0 [1024.0MiB] : VMware_Virtual_SATA_CDRW_Drive
No medium found
/dev/sr1 [1024.0MiB] : VMware_Virtual_SATA_CDRW_Drive
No medium found
/dev/fd0 [4.0KiB] : Unknown Model
No such device or address
Insufficient privileges to access device!
Checking Memory Map setup.
Memory Description: [971.6MiB / 4.0GiB]
Checking Certificate setup.
Secure Communications Enabled
Starting Fabric Manager
Attempting Fabric Connection
Using the attached Evimetry Controller, acquisition is a simple GUI operation.
When it comes to preserving evidence, DF labs generally fall into two camps: those that acquire in the field, and those that collect evidence in the field, only later doing acquisition in-lab. Over the last two years, Evimetry’s product offerings have been primarily aimed at the former. Practitioners have benefited from the fastest in-field acquisitions, while at the same time enabling meaningful analysis work to occur while waiting for acquisition to complete.
Evimetry Lab, announced last week at the EnFuse conference, changes the game for the latter group. This groundbreaking approach enables analysis and time-consuming processing tasks (such as indexing) to begin as soon as acquisition begins. The traditional wait for acquisition to complete before processing can start is removed, so processing tasks complete hours earlier and answers come sooner.
How much sooner? The comparison above shows a time-consuming indexing run using NUIX completing hours earlier when using Evimetry Lab as opposed to traditional forensic imaging workflows. There is also some live analysis using EnCase thrown in for good measure.
We have just shipped two releases of Evimetry: v3.0.7 (in our stable stream) & v3.1.5 (in our pre-release stream). Recent releases bring native Deadboot media creation, and introduce an improved Deadboot Imager UI.
Native Deadboot Media Creation.
We can now create Evimetry Deadboot USBs directly from the Controller and, for larger drives, use the additional space for evidence storage. With a single hard drive serving both as an Evimetry Deadboot and an Evidence Repository, scarce USB ports are freed up on target devices, workflow is simplified, and the number of devices to manage is reduced.
Small USB flash drives are set up solely as a Deadboot, just like our former workflow.
For a while now the Deadboot agent has included a simple ASCII console-based Imager application. This is useful for acquiring single computers, when it is either inconvenient or infeasible to use the Controller and a network.
While we love the retro feel and simplicity of an ASCII/curses interface, the world is no longer friendly to text-mode UIs: high-DPI monitors and text-mode-free UEFI implementations mean that text mode no longer works everywhere. A graphical, window-based UI is now necessary.
In the v3.1.3 pre-release we launched a *graphical* Imager application, and in today’s prerelease (v3.1.5) the layout of the Imager UI has been refined.
Pulling it all together.
The following video demonstrates the workflow of preparing a Deadboot USB and then the subsequent acquisition of a 500 GB NVMe drive in under 6 minutes.
Full release notes are available via the releases page. The software may be downloaded from the portal.
In the last two weeks, two of our favourite disk forensic tools integrated native read support for the AFF4 forensic format. Forensic Explorer released v4 of their product, with native AFF4 read support, and X-Ways Forensics released v19.5, which has a plugin API supporting our AFF4 read plugin.
This represents a big step forward towards general adoption of the next-generation image format.
Evimetry’s filesystem bridge provides a straightforward and efficient way of consuming AFF4 images from any commercial forensic tool, and results in faster analysis and processing than E01s. Despite this, it is convenient to be able to open AFF4 images directly from tools without this dependency.
For the last year and a half, Evimetry has been investing significant effort in growing the AFF4 ecosystem: standardising the format, providing open-source implementations, integrating with leading open-source forensic software, and working with commercial vendors to integrate read support.
In October we worked closely with X-Ways to define a plug-in API to support new forensic image formats, which X-Ways integrated in the 19.5 beta releases. We followed this up by producing an X-Ways plugin to read AFF4 images via our C++-based Evimetry libAFF4 Reader DLL. Around the same time, we provided the reader DLLs to the folks behind Forensic Explorer (FEX). In no time, the v4 beta builds of FEX supported reading AFF4 images too.
Usage: X-Ways >= 19.5
Download the current Evimetry X-Ways AFF4 reader plugin, and copy the contents into the X-Ways installation folder. Make sure you have the Visual C++ 2015 Runtime installed.
CAVEAT: Only x64 is supported for now.
UPDATE: We now support x86 (32 bit) as well.
Usage: Forensic Explorer >= 4.0
The current FEX 4.0 build already integrates the Evimetry libAFF4 reader DLLs. That build of the DLL contains a bug which has since been fixed; we anticipate the fix will make it into the next release of FEX. In the meantime, replace the libaff4 DLL in Forensic Explorer with the one contained in the Evimetry libAFF4 reader DLL package.
Caveat: BETA code quality
Please note that the Evimetry libAFF4 reader DLLs are currently at BETA quality while we undertake further testing and, importantly, tuning. If you strike any issues, please submit a bug report to firstname.lastname@example.org .
You have been tasked with forensic acquisition of 6 servers in the AWS cloud, with a total of 2TB of storage. How do you do it?
This post describes the method I applied in a recent case, where we collected the storage, acquired it into forensic images, and pulled the images down into our custody overnight. While I will be describing how I did it using Evimetry, the method is easily translatable to other tools.
Storage forensics in AWS
Unlike many cloud IaaS platforms, AWS provides us with the ability to take a Snapshot of the storage of a virtual computer (an Instance, in EC2 parlance). This gives you a point-in-time copy of the storage device. This isn’t a forensic image, as there isn’t a hash protecting the copy.
This enables us to quickly Collect a copy of the storage, without affecting the availability of the Target device. To truly collect the copy though, we need to take it under our control. To do this, we rely on the ability of AWS to share Snapshots between accounts.
Once we have access to the Snapshot in our own Security Domain (account), we can then shift to forensic acquisition of the copy. This is best achieved by generating a Volume from the Snapshot, attaching the Volume to a purpose-built acquisition server, and acquiring using regular forensic processes. Once a forensic image is acquired, we then Transfer it out of the cloud to store on a storage device that we Possess.
The following sections step through the process of undertaking the method.
Evidence Isolation & Location
First up, create your own Security Domain for Collecting the Snapshots into, and undertaking acquisition. In AWS, this is easily achieved by maintaining your own account, separate to the TARGET account. The below screenshot displays an account I have established under my own name, logged into the AWS Console.
I recommend running the two separate AWS security domains (TARGET and EVIDENCE) in two separate web browser windows, one of them in private browsing mode, so that you can use the TARGET’s credentials in one and your own security domain’s credentials in the other.
Note the Account ID (993480464498) – this will be required to identify this security domain when we come to share a Snapshot from the TARGET to our security domain.
In the TARGET AWS console, use the left menu, “Instances” to show the instances, and find the instance that you want to collect.
In the screen capture below, one can see the TARGET instance, the instance ID (in this case i-065e4cd1fbf56c92e), the block device volume (vol-08c5f1566ec4ea6c5), and the Availability Zone “ap-southeast-2c”. These identifiers should be documented to establish the provenance of the evidence.
In the left menu, “Elastic Block Store”, select “Snapshots”, and then “Create Snapshot”.
Select the volume of our TARGET server (vol-08c5f1566ec4ea6c5), and describe the evidence.
We now have a snapshot of the block storage of the instance. We record the Snapshot ID (snap-0925cec0faee0659a) to maintain provenance of the evidence.
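The console steps above can equally be scripted with the AWS CLI, which some practitioners may prefer for repeatability. A sketch, run with the TARGET account’s credentials (the description text is an example):

```shell
# Snapshot the TARGET volume, then poll until it reaches 'completed'.
aws ec2 create-snapshot \
    --volume-id vol-08c5f1566ec4ea6c5 \
    --description "Evidence snapshot of TARGET instance i-065e4cd1fbf56c92e"
aws ec2 describe-snapshots \
    --snapshot-ids snap-0925cec0faee0659a \
    --query 'Snapshots[0].State'
```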
Now that we have an image (not a forensic image, as we don’t have a hash), we want to Collect it. This means taking possession of it, so that it can’t be modified. To do this, we share the image with the EVIDENCE security domain we created earlier.
Recalling that the Account ID of our acquisition security domain is (993480464498), we privately share the image with that account.
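The equivalent sharing step with the AWS CLI looks like the following sketch (again with TARGET credentials):

```shell
# Grant the EVIDENCE account (993480464498) permission to create
# volumes from the snapshot; this is a private share, not a public one.
aws ec2 modify-snapshot-attribute \
    --snapshot-id snap-0925cec0faee0659a \
    --attribute createVolumePermission \
    --operation-type add \
    --user-ids 993480464498
```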
Note that snapshot creation isn’t instantaneous and may take some minutes to complete.
Prepare Evidence Storage Server – Server provisioning
While the snapshot is in progress, switch browser windows (and security domains), and begin setting up your Evidence Server in the same datacentre as the target server. This will be running the Evimetry Cloud Agent, and will be co-located with the TARGET server for efficiency and speed. In the below AWS control panel, we set the location to “Sydney”, which matches the TARGET server in this instance.
In the left menu we select “Instances” and then “Launch Instance”
To deploy the Evimetry Cloud Agent, we need an Ubuntu 14.04 instance. Select that.
The speed at which your acquisition will occur will depend on a number of factors, including the virtual disk size, the performance of the virtual storage, and the number of CPU’s you have in the server. In a future blog post, I will go into this in more detail, but for now, select a 4 CPU machine with moderate performance.
Next up, we want to make sure that the evidence storage server is as close as possible to the target. From before, we have identified that the target is in “ap-southeast-2c”, so we make sure that the subnet matches. We also ensure that we enable the auto assignment of a public IP, so we can connect to the server. This is sufficient to then “Review & Launch”.
Finally, we launch the new Evidence Storage instance.
The final task in bringing up the evidence storage server is to establish a key pair for working with the server. We create a new key pair called “SF-Acquisition” below.
Download the key pair, and save it somewhere safely. Example shell commands follow.
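A minimal sketch of those commands (the key path and server address are placeholders; the key must not be world-readable or SSH will refuse it):

```shell
# Lock down permissions on the downloaded key, then connect to the
# Evidence Server. Replace the address with your instance's public IP.
chmod 400 ~/Downloads/SF-Acquisition.pem
ssh -i ~/Downloads/SF-Acquisition.pem ubuntu@<evidence-server-public-ip>
```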
The final thing to set up is an inbound firewall rule so that we can connect through to the Evidence Server. Unlike some cloud services, EC2 Instances sit on a private IP address, behind a firewall.
In the “Network & Security” section of the console, go to “Security Groups”. Recalling from the Instance that it was started in the Security Group “Launch Wizard 3”, edit the inbound rules of that security group.
Create a rule allowing inbound traffic to the Evimetry Cloud Agent’s port (TCP 9982) on the Evidence Server.
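With the AWS CLI, a sketch of the same rule looks like this (the security group ID and source address are placeholders; restricting the source CIDR to your Controller’s address is good hygiene):

```shell
# Allow inbound TCP 9982 (Evimetry Cloud Agent) from the Controller only.
# sg-xxxxxxxx and 203.0.113.10/32 are placeholders for your own values.
aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxxxxxx \
    --protocol tcp --port 9982 \
    --cidr 203.0.113.10/32
```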
Verify Access to Evimetry Cloud Agent
At this point, the Evimetry cloud agent is ready to be used. Using the public IP of the VM (220.127.116.11), connect in using the Evimetry Controller.
The agent will appear in the controller’s fabric nodes view (note that the IP of the Evidence Server is showing a 172.X.X.X private IP address). Visible underneath it is its storage, and an Evimetry Repository, which is located on its internal storage. We will acquire our images into this Repository.
Acquiring the image
We now go back to the EVIDENCE Security Domain, and access the “Elastic Block Store” | “Snapshots” section. Be sure to filter the view to “Private Snapshot” as it won’t be visible in the default setting.
The snapshot from the TARGET security domain will now be visible (check that the Snapshot ID matches). We now transform the snapshot into a Volume, which can then be attached to a running instance in much the same way we plug removable storage into a computer. First, right click on the Snapshot and select “Create Volume”.
In the volume creation form, choose the Availability Zone “ap-southeast-2c”, matching the zone of the evidence storage instance, and then click “Create Volume”.
Note the Volume ID of the new Volume: vol-097e59361bd515f78. Follow the link.
Now we can attach the volume to our Evidence Server. Go to the “Elastic Block Store” | “Volumes” area and, noting the Volume ID, select Attach Volume.
Recalling that the instance ID of our Evidence Server is i-0af9148e32f37b8ac, attach the Volume as a virtual disk.
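These last two steps can also be scripted with the AWS CLI, as a sketch (run with EVIDENCE account credentials; note the device name `/dev/sdf` typically surfaces inside the guest as `/dev/xvdf`):

```shell
# Materialise the shared snapshot as a Volume in the same AZ as the
# Evidence Server, then attach it as a virtual disk.
aws ec2 create-volume \
    --snapshot-id snap-0925cec0faee0659a \
    --availability-zone ap-southeast-2c
aws ec2 attach-volume \
    --volume-id vol-097e59361bd515f78 \
    --instance-id i-0af9148e32f37b8ac \
    --device /dev/sdf
```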
Refreshing the Cloud Agent instance listed in the Evimetry Controller now shows the disk attached to the agent as /dev/xvdf . Note that the newly attached disk is locked against mounting and writing. Right click on the disk and select Acquire.
The acquisition settings dialog will appear. Select a full linear acquisition of the attached drive, and add the Repository on the Storage Server as the container location. Give the Image a name using your standard image naming scheme, and document the original Volume ID and Instance ID associated with this image. Then click OK.
Acquisition is now underway. The screenshot below shows an acquisition using the “Provisioned IOPS SSD” as Volume storage, which proceeds at around 90 MB/s, constrained by the storage of the infrastructure. Our testing shows that using “General Purpose SSDs” as storage gives a trickling rate of around 10 MB/s (that’s 4x slower than USB2!). A future post will focus on scaling this speed.
When the acquisition (including verification) completes, Evimetry will display a completion dialog.
We then transfer the image locally to the lab using Evimetry, by flipping to the “Images” tab of the Controller, and right clicking on the newly created image.
After choosing the destination, the image downloads locally.
This post has described a methodology for acquiring storage in the EC2 cloud. Using EC2 Snapshots in conjunction with Snapshot Sharing enables one to quickly Collect copies of Target storage. Acquisition can then be undertaken in the Cloud, so that the evidence is protected by a hash at the earliest opportunity, while minimising the amount of data that needs to be copied.
In future posts, I will follow up on how virtual disk selection affects the speed of acquisition; how to acquire volatile memory in AWS; and how to undertake analysis in the cloud.
Late last year I had the pleasure of attending the F3 conference in Gloucestershire, UK. It is quite unlike any other digital forensics conference I have ever been to: a community-run, practitioner-focused, two-day conference situated in a stately manor in the English countryside. I can thoroughly recommend it.
The Advanced Forensic Format 4 Working Group (AFF4 WG) is calling for interested parties to join the second working group meeting, to be co-located at the DFRWS Conference 2017, in Austin, TX.
Originally proposed in 2009 by Michael Cohen, Simson Garfinkel, and Bradley Schatz, the AFF4 forensic container enables new approaches to forensics, unparalleled forensic acquisition speeds and more accurate representation of evidence. The AFF4 WG has recently released v1.0 of the AFF4 Standard, including canonical images, specification, and open source libraries for implementers. Current AFF4 implementations include Rekall, Evimetry, Sleuth Kit, Volatility and GRR.
For more information, please see the working group mailing list, or contact Bradley Schatz or Michael Cohen.
Co-Chair: Dr Bradley L Schatz, Schatz Forensic/Evimetry, [ bradley <at> schatzforensic <dot> com ]
Co-Chair: Dr Michael Cohen, Google, [ scudette <at> google <dot> com ]
AFF4 working group mailing list: https://groups.google.com/forum/#!forum/aff4-wg
We recently contributed patches to the Sleuth Kit to read AFF4 images. While we are waiting for those to be pulled into the main distribution, the following recipe should suffice for compiling a standalone copy on MacOS.
The following dependencies are needed to compile libAFF4 on OSX. I use MacPorts, and the corresponding packages I needed to install are: