This is just a small collection of the resources available if you are interested in learning Python. It is not intended to be a comprehensive list of everything out there, just enough to get you started. The resources are not listed in any particular order, although I may have saved the best until last ;-)
Free Online Classes
• Coursera interactive Python course: https://class.coursera.org/interactivepython-2012-001/lecture/index
• Google Python classes: http://www.youtube.com/watch?v=tKTZoB2Vjuk
Books (free online)
• How to Think Like a Computer Scientist: http://www.greenteapress.com/thinkpython/thinkCSpy/html/index.html
• Learn Python the Hard Way: http://learnpythonthehardway.org/
• Invent with Python: http://inventwithpython.com/
• Hacking Secret Ciphers with Python (from Invent with Python): http://inventwithpython.com/blog/2013/04/15/hacking-secret-ciphers-with-python-released/
Books (not free, but worth getting)
• T.J. O'Connor, Violent Python: http://www.amazon.com/Violent-Python-Cookbook-Penetration-Engineers/dp/1597499579/
• Justin Seitz, Gray Hat Python: Python Programming for Hackers and Reverse Engineers: http://www.amazon.com/Gray-Hat-Python-Programming-Engineers/dp/1593271921/
• John Zelle, Python Programming: An Introduction to Computer Science, 2nd ed.: http://www.amazon.com/Python-Programming-Introduction-Computer-Science/dp/1590282418/
Forensics & Python
• Willi's modules: http://williballenthin.com/
• The Volatility project: http://code.google.com/p/volatility/
• Joachim Metz's libraries: http://code.google.com/p/libyal/ (not all of these are Python, but many have Python bindings and some are pure Python!)
• Dave Nides' blog (author of 4n6time): http://davnads.blogspot.com/
• Plaso (backend engine for log2timeline): http://code.google.com/p/plaso/
• T.J. O'Connor's SANS paper, Grow Your Own Forensic Tools: A Taxonomy of Python Libraries Helpful for Forensic Analysis: http://www.sans.org/reading_room/whitepapers/incident/grow-forensic-tools-taxonomy-python-libraries-helpful-forensic-analysis_33453
• <shameless plug> the course I teach at Champlain College, Scripting for Digital Forensics: http://www.champlain.edu/computer-forensics/masters-digital-forensics-science/curriculum
And of course the list would not be complete without a cheat sheet.
One of my students is currently researching data recovery on solid state drives. Part of the testing requires that he create a large number of files with known and easily identifiable content. There are many ways of doing this, and it is something I have done many times before. However, every time I have meant to write a script to do the work. This time around I figured I would write something to solve the problem once and for all.
In this case the objective was to be able to determine the amount of recoverable data after a collection of files had been wiped. So we needed:
So I wrote filegen.py to generate the files we needed. It is a pretty simple program; in fact, processing the options takes more code than generating the files. But it is simple to use and makes generating test files easy.
In order to address the unique filename requirement, the user can pass a base filename that will have a number appended to it. Each file is then filled with a repeating pattern consisting of the filename plus the size of the file (in bytes). Where the pattern does not fit evenly into the desired file size, the end of the file is padded with zeros. This way a keyword search can be used to identify how much of any given file is recoverable: given the file size, it is a simple matter to determine how many times the pattern should repeat in the file.
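A minimal sketch of that approach (this is not the author's actual filegen.py; the function name, argument names, and defaults here are my own) might look like:

```python
import os


def generate_test_file(base_name, index, size, out_dir="."):
    """Create one test file filled with a repeating, searchable pattern.

    The pattern is the file's own (numbered) name plus its size in bytes;
    any remainder that will not fit a full repeat is padded with zeros,
    so a keyword search reveals how much of the file is recoverable.
    """
    name = f"{base_name}{index}"
    pattern = f"{name} {size} ".encode("ascii")
    repeats = size // len(pattern)
    data = pattern * repeats
    data += b"\x00" * (size - len(data))  # zero-pad the tail
    path = os.path.join(out_dir, name)
    with open(path, "wb") as f:
        f.write(data)
    return path


# Example: generate ten 4 KiB files named test0 .. test9
# for i in range(10):
#     generate_test_file("test", i, 4096)
```

Since the pattern length rarely divides the file size evenly, the zero padding at the tail is what keeps the file exactly the requested size.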
The options are:
One surprising outcome of the research was that in some cases more file content was recoverable than should have been written to the disk, but that is a story for another time.
• MBR & GPT
• NTFS
Of course I still have to make them for HFS and ext, but I will get there one day.
If you have any suggestions for improvements please let me know.
However, I had not really thought about all this from a forensic perspective until Adam posted an MFT carver and started asking questions on the win4n6 mailing list about 4k sectors and their impact on the MFT. I checked the large drive on my system (which uses 1TB WD Caviar Blacks) and, sure enough, it had 4k sectors. I then checked the MFT and found it was using 4k File Record Segments.
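One way to check the File Record Segment size for yourself is to read the allocated-size field from a FILE record header: per the standard NTFS FILE record layout it is a 32-bit little-endian value at offset 0x1C. A small sketch (the function name is mine):

```python
import struct


def frs_allocated_size(record):
    """Return the allocated size of an MFT file record segment.

    Per the NTFS FILE record header layout, bytes 0x1C-0x1F hold the
    allocated (on-disk) size of the record as a little-endian uint32.
    On a 4k-sector volume like the one described above this should
    come back as 4096 rather than the usual 1024.
    """
    if record[0:4] != b"FILE":
        raise ValueError("not a FILE record")
    return struct.unpack_from("<I", record, 0x1C)[0]
```

Feeding it the first bytes of $MFT from the test image is a quick sanity check against what a hex editor shows.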
My theory at the time was that the file record segment size would match the sector size whenever the sector size was 1024 bytes or more. This was more or less confirmed by Troy, one of the other win4n6 folks. Troy also pointed out that Microsoft has reported that they do not yet support 4k sectors, as mentioned in this blog post from the Microsoft storage team.
This of course got my interest up, as I had a drive clearly reporting 4k sectors, and so did Adam. Upon rebooting my system I realized/remembered that I was in fact using a Highpoint RocketRAID controller, not connecting directly to the drive (why I did not notice this in Windows I don't know). So I copied everything off the RAID, wiped the three 1TB drives, and then configured a new RAID, at which point I found that the controller lets you set the sector size to 512, 1024, 2048, 3072 or 4096 bytes (how's that for cool?).

I then created a 2TB RAID using 4k sectors and formatted it as NTFS on Windows 7 Pro SP1. I confirmed once again that the MFT file record segments were 4k in size. By default it still used a 4k cluster size, so there goes file slack space! The picture below shows the BPB of the partition. The key values are bytes per sector (0x1000, or 4096), sectors per cluster (1) and clusters per file record segment (1).
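Those same BPB values can be pulled out of an NTFS boot sector programmatically. The offsets below (bytes per sector at 0x0B, sectors per cluster at 0x0D, clusters per file record segment as a signed byte at 0x40) follow the standard NTFS boot sector layout; the function name and return shape are my own:

```python
import struct


def ntfs_geometry(boot):
    """Parse sector/cluster/FRS sizing from an NTFS boot sector.

    Per the NTFS boot sector layout: bytes per sector is a uint16 LE at
    offset 0x0B, sectors per cluster a uint8 at 0x0D, and clusters per
    file record segment a *signed* byte at 0x40. A negative FRS value n
    means the record size is 2**abs(n) bytes rather than n clusters.
    """
    bytes_per_sector = struct.unpack_from("<H", boot, 0x0B)[0]
    sectors_per_cluster = boot[0x0D]
    cluster_size = bytes_per_sector * sectors_per_cluster
    cpfrs = struct.unpack_from("<b", boot, 0x40)[0]
    frs_size = cluster_size * cpfrs if cpfrs > 0 else 2 ** abs(cpfrs)
    return bytes_per_sector, sectors_per_cluster, frs_size
```

On the partition described above this should return (4096, 1, 4096); on a typical 512-byte-sector NTFS volume the clusters-per-FRS byte is negative (-10), giving the familiar 1024-byte record size.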
Now for the interesting part: what tools can handle a disk using 4k sectors?
• X-Ways 16.1 SR-3 has no problem.
• FTK Imager 18.104.22.1684 will mount the drive or image, but does not recognize any file system.*
• EnCase 6.19.4 & 6.16.2 crash when they start processing the MFT; apparently EnCase 7 works.*
• SIFT 2.13 will mount it and show/access files correctly, although mmls appears to be hard-coded to report 512-byte sectors, so the offsets it provides for the start of the partition are wrong.
• analyzeMFT 1.7 does not support it. Adam pointed out the following comment in the code:
I have not had the time to test anything else, so I figured I would put the test image out there for everyone to play with. If you have the time, please download the test image, try it out in your favourite tools, and post your results in the comments. You can download it here: https://docs.google.com/open?id=0BxubffSxLhRkU21ybXNsQ1pNTk0. Since the RAID was pretty much all 0x00s once acquired, it compresses down to 2GB.
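On the mmls point above: if a tool hard-codes 512-byte sectors, the partition start it reports is an LBA counted in the disk's native (here 4k) sectors but interpreted in 512-byte units, so one plausible correction is to rescale by the real sector size. This is an assumption about the failure mode rather than a confirmed diagnosis, so verify it against the test image:

```python
def partition_byte_offset(reported_start_sector, actual_sector_size):
    """Rescale a start-sector value to a byte offset (sketch).

    The partition table stores the start LBA in native sectors; a tool
    that assumes 512-byte sectors turns that LBA into the wrong byte
    offset on a 4k-sector disk. Multiplying by the real sector size
    gives the position to seek to when carving or mounting manually.
    """
    return reported_start_sector * actual_sector_size
```

For example, a reported start sector of 256 corresponds to byte offset 256 * 4096 = 1,048,576 on a 4k-sector disk, not 256 * 512 = 131,072.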