Forensic tools, whether software or hardware, are like traditional forensic science tools: they are designed by humans and typically meant to be used by trained examiners who understand both the artifacts being processed and the results the tool produces.
Some tools are as simple as a USB write blocker: you plug one end into a computer, plug a USB device into the other end, and it "just works." More complex tools, such as Eric Zimmerman's Registry Explorer, parse and reveal the hierarchical databases native to Windows operating systems that hold countless forensic artifacts relevant to nearly every modern digital forensic investigation. The research, coding, troubleshooting, and design work behind many of these tools is extensive; some are sold commercially, while others are released for free for the betterment of the forensics community.

There is an interesting dilemma, though, with a great many software tools, and that is the human element behind the coding. What logic was used when the "ah-ha" moment was reached by the design team or researcher? Was all of the data laid bare for review by the forensic examiner, or only certain data, because a reverse engineer reviewed it and decided, "this is the data that matters most"? Is that necessarily a problem? Perhaps all that matters for a TeamViewer log parser are the IP addresses of a connecting bad guy, right? But what if a beginning examiner runs the tool against a newer version of TeamViewer, and the parser doesn't account for a new log format or embedded data values? Does that beginner know to verify and compare against the original data, or do they take the output as proverbial "gospel"? Tools are supposed to make our lives easier, after all, right? Of course they are, but I propose that a balance must occur within the Digital Forensics and Incident Response community so that incoming, newer examiners know to trust, but verify.
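To make the "trust, but verify" concern concrete, here is a minimal sketch of how a log parser can be written so that it never silently drops data it doesn't understand. The log format shown is entirely hypothetical, invented for illustration (real TeamViewer connection logs vary by version and must be checked against actual samples); the point is that unrecognized lines are surfaced to the examiner instead of being discarded.

```python
import re

# Hypothetical connection-log line format, for illustration only:
#   <numeric id>  <timestamp>  <remote IPv4 address>
# Real tool authors would derive this pattern from verified samples.
LINE_RE = re.compile(
    r"^(?P<id>\d+)\s+(?P<start>[\d/: ]+)\s+(?P<ip>\d{1,3}(?:\.\d{1,3}){3})$"
)

def parse_log(lines):
    """Return (parsed entries, unparsed lines).

    Anything that does not match the known format is returned to the
    caller for manual review rather than silently ignored.
    """
    parsed, unparsed = [], []
    for line in lines:
        m = LINE_RE.match(line.strip())
        if m:
            parsed.append(m.groupdict())
        elif line.strip():
            unparsed.append(line)
    return parsed, unparsed

sample = [
    "123456789  2024/01/02 03:04:05  203.0.113.7",
    "NEWFIELD some-format-the-parser-has-never-seen",
]
entries, leftovers = parse_log(sample)
print(f"{len(entries)} parsed, {len(leftovers)} flagged for manual review")
```

A parser built this way gives a newer examiner an explicit signal, the "flagged" count, that the tool's output is incomplete and the original data needs a look, rather than presenting a clean-looking but partial report.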
Based on this premise, I'm going to begin presenting and organizing tools, not to list every possible tool and script in the DFIR, cybersecurity, and IT communities, but rather in a straightforward manner, organized by artifact. Some tools are better than others. Period. Some tools are free, while others are paid. If a tool is the best at what it does, then the existence of twenty different tools for a single artifact doesn't mean an examiner needs twenty choices. Examiners need a solid toolkit, efficiency, and repeatable, reliable output.