
(According to Marcus Ranum in Technology Review): It is most likely that we will break down by a fatal system failure caused by connecting one critical system with a not-so-critical system that was connected to the internet just because someone wanted to check his Facebook account through that system and accidentally got hit by a drive-by.


Archive 2012

Last modified 2013/01/01 by Flo

Here you will find my posts from 2012.


How malware bypasses kernel-based process monitoring

2012/12/12 by Flo

The guys at fireeye.com published a nice blog post about malware that bypasses host-based security solutions using one of Microsoft Windows' well-known callback functions, namely PsSetCreateProcessNotifyRoutine. For more details check out http://blog.fireeye.com/research/2012/06/bypassing-process-monitoring-.html.

Well, yet another proof that using notify routines is not always a good idea ;-)
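
For reference, hooking process creation from a driver takes just a few lines — a minimal sketch in C against the documented kernel API (the callback body is purely illustrative logging):

#include <ntddk.h>

/* Called by the kernel on every process creation and exit. */
VOID ProcessNotifyRoutine(HANDLE ParentId, HANDLE ProcessId, BOOLEAN Create)
{
    DbgPrint("Process %p %s (parent %p)\n",
             ProcessId, Create ? "created" : "exited", ParentId);
}

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(DriverObject);
    UNREFERENCED_PARAMETER(RegistryPath);

    /* Register the callback; calling this again with TRUE as the
       second argument removes it (e.g. on driver unload). */
    return PsSetCreateProcessNotifyRoutine(ProcessNotifyRoutine, FALSE);
}

Because the registered callbacks live in writable kernel memory, code that already runs in the kernel can locate and remove such entries again — one more reason not to rely on notify routines alone.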


A professional invoice.pdf.exe

2012/12/04 by Flo

Well, it's an already well-known attack by cyber criminals to send you an e-mail carrying an executable that looks like an ordinary PDF invoice. Most of these executables were built with crappy icons that should make you skeptical when it comes to opening such a pseudo PDF. I found a suspicious executable that was built with high-resolution icons taken from Adobe's Acrobat Reader. In addition, the filename contains a long sequence of spaces in front of the .exe extension, making it even harder to identify as an exe file in Microsoft Explorer (even on the Windows desktop itself).

Have a look:


I think this one is catchy enough to trick an ordinary user, although the authors made a stupid mistake by naming the invoice Dezember (German for December) while the executable was sent to me in November. Well :-)
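
By the way, spotting such padded names programmatically is trivial. A minimal C sketch (the five-space threshold is an arbitrary assumption of mine):

#include <stdio.h>
#include <string.h>

/* Flags filenames that hide their real extension behind a run of
   spaces, e.g. "invoice.pdf                .exe". */
int has_padded_extension(const char *name)
{
    const char *dot = strrchr(name, '.');  /* the real (last) extension */
    int spaces = 0;

    if (dot == NULL || dot == name)
        return 0;
    for (const char *p = dot - 1; p > name && *p == ' '; p--)
        spaces++;
    return spaces >= 5;  /* long space run right before the extension */
}

int main(void)
{
    printf("%d\n", has_padded_extension("invoice_Dezember.pdf            .exe")); /* 1 */
    printf("%d\n", has_padded_extension("invoice.pdf"));                          /* 0 */
    return 0;
}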


Exploitbuster (Xpl0!t-Bust3r)

2012/11/16 by Flo

During the last years I programmed a lot of different tools to analyze malware in a high-level environment because I was tired of doing such stuff with ordinary debuggers and/or disassemblers. Within the last few months I finally put all the different approaches together and built up a (malware) exploit analyzing framework called Exploitbuster. Exploitbuster lets you test suspicious objects (URLs, well-known document formats like PDFs, DOCs, XLSs etc.) to see if such objects are harmful, using a controlled and logged environment. If an object is harmful you can define signature rules to block or trace back such toxic stuff. Exploitbuster gives you a powerful environment to inspect potential zero-day malware attacks embedded in URLs or well-known document file formats.

Exploitbuster is my private project and still under construction and development. For more details about the progress you can check out the project's web site at http://www.exploitbuster.com.

If you want further information or details about Exploitbuster, feel free to contact me by e-mail.


Active Internet Measurements

2012/11/01 by Flo

We are performing so-called active Internet measurements with our bot (user-agent: BitnutsBot/1.0 (+http://bitnuts.de/bot.html)). These measurements involve sending TCP probes to ports 80 and 8080 (HTTP). The HTTP probes do not provide authentication information.

In case these measurements cause any problems at your site, do not hesitate to contact us.


MZWriteScanner: A minifilter that monitors executables written on your disk

2012/10/20 by Flo

Thinking about different approaches to monitor what malware does while it is installing its evil code on your machine, I ended up with a monitoring minifilter driver that might help you analyze potential zero-days and other malicious stuff on your forensics machine.

MZWriteScanner is a simple minifilter that intercepts IRP_MJ_CREATE, IRP_MJ_CLEANUP and IRP_MJ_WRITE to track which files are about to be (and will be) written to your disk. The driver checks whether a file contains the magic bytes of an executable, namely the string 'MZ' at offset 0. If this is the case, MZWriteScanner outputs the filename via DbgPrint so you can track it. Well, this is a bit cheesy but should work for many malware executables. The filter does no blocking on the written files, thus malicious code might still be executed. If there is demand I will probably adjust the driver. Right now it is just a monitoring driver, NO intercepting or blocking will be performed, so beware of what you are writing and executing on your machine!

The driver is heavily based on Microsoft's Scanner File System Minifilter Driver and PassThrough File System Minifilter Driver. As some homework for you: just combine the best of these two drivers, think about what happens when a file is written to your disk, and how to identify an executable by its MZ/PE header. The resulting driver should be something like MZWriteScanner.
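
To give you a head start on that homework, here is a heavily stripped-down sketch of the pre-write callback in C — not the actual MZWriteScanner source, just an illustration using the documented minifilter API (registration boilerplate, the IRP_MJ_CREATE/IRP_MJ_CLEANUP bookkeeping, IRQL checks and most error handling are left out):

#include <fltKernel.h>

/* Pre-write callback: if the first bytes written at offset 0 look
   like an executable ('MZ'), print the file name via DbgPrint. */
FLT_PREOP_CALLBACK_STATUS
PreWriteCallback(PFLT_CALLBACK_DATA Data,
                 PCFLT_RELATED_OBJECTS FltObjects,
                 PVOID *CompletionContext)
{
    UNREFERENCED_PARAMETER(FltObjects);
    UNREFERENCED_PARAMETER(CompletionContext);

    if (Data->Iopb->Parameters.Write.ByteOffset.QuadPart == 0 &&
        Data->Iopb->Parameters.Write.Length >= 2)
    {
        PUCHAR buf = (PUCHAR)Data->Iopb->Parameters.Write.WriteBuffer;

        /* If the data arrived as an MDL, map it first. */
        if (Data->Iopb->Parameters.Write.MdlAddress != NULL)
            buf = (PUCHAR)MmGetSystemAddressForMdlSafe(
                      Data->Iopb->Parameters.Write.MdlAddress,
                      NormalPagePriority);

        if (buf != NULL && buf[0] == 'M' && buf[1] == 'Z')
        {
            PFLT_FILE_NAME_INFORMATION nameInfo;
            if (NT_SUCCESS(FltGetFileNameInformation(Data,
                    FLT_FILE_NAME_NORMALIZED | FLT_FILE_NAME_QUERY_DEFAULT,
                    &nameInfo)))
            {
                DbgPrint("MZWriteScanner: executable written: %wZ\n",
                         &nameInfo->Name);
                FltReleaseFileNameInformation(nameInfo);
            }
        }
    }

    return FLT_PREOP_SUCCESS_NO_CALLBACK; /* monitor only, never block */
}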

You can download MZWriteScanner for Windows XP, Vista, 7 and 8 (32bit and 64bit). Please follow the link below. If you have any questions, suggestions, comments or bug reports contact me by e-mail.

Download MZWriteScanner: http://bitnuts.de/MZWriteScanner.zip

I would like to credit Microsoft and the Honeyproject for their ideas, whitepapers and great driver sources. Thank you guys for sharing your code and knowledge. I really appreciate your work, because it gives a very good overview of and in-depth look into drivers.


Transient Malware As A Show-Stopper For Proactive Application Control On Always-On-Systems

2012/10/17 by Flo

More and more sophisticated and targeted zero-day attacks are on the rise in our internet- and computer-driven world. Traditional security defences such as anti-virus software are not able to keep up with the number of new attacks flooding computer systems and networks day after day. The impact on organizations is significant: denial of service, increased help desk calls, network downtime, information and intellectual property loss, etc. All of that adds up to lost productivity and lost money. Even major anti-virus vendors and researchers conclude that classical malware scanners will no longer provide an effective defence. Additional security technology is needed.

A well-known approach is to use some kind of trust- and real-time-based proactive application control, meaning that code (an executable, library, driver, script etc.) will only be executed if it has been identified as trustworthy. Unknown or known-untrusted code is blocked. With Windows 7, Microsoft introduced AppLocker, which prevents users from executing unknown and untrusted code in a fairly general fashion. On Mac OS X, Gatekeeper has supported the same functionality since Mac OS X v10.7.5. AppLocker- and Gatekeeper-like solutions calculate a unique signed fingerprint for generic types of executable files, or use the operating system's code signing ecosystem to approve executable code from trusted sources. Such fingerprints assign a definitive identity, preventing software or a user from using a different name or directory to execute code. Only approved, known code that is identified by its fingerprint will be executed. Untrusted code, or known trusted but altered code, is blocked at system level. You can sum it up as whitelist/blacklist integrity checking for executable code.

Because in most cases zero-day attacks use some kind of dropper that downloads and executes the intended malware (backdoor, bot, spyware, virus, worm etc.) on your system, blocking the execution of unknown code seems to be a good approach to defend against such attacks. Unfortunately this will not protect against all attacks. Unlike resident malware, which installs executable files that survive a reboot and get started once again when you boot up, transient malware just runs during your current session, somewhere in your RAM, until you reboot your system. Once you have rebooted, the code of the transient malware is gone - hardly any attack traces left.

I think that transient malware will be the show stopper for real-time-based proactive application control, especially with regard to always-on systems like the new versions of Mac OS X and Windows, because these operating systems were built to run for weeks and months without a reboot. They support highly efficient sleep modes that resume in less than 5 seconds for a convenient user experience, but they never do a hard reset, starting the kernel and basic user processes from scratch. This makes them perfectly suitable for transient malware, which could persist for a long time, logging the keyboard, capturing the screen, scanning for interesting files etc. As shown by SamratAshok's kautilya framework, it is really simple to program PowerShell-based hacking tools that run behind the scenes without executing suspicious executables or even injecting a library into another process. The latter could be caught by proactive application control; scripts or directly injected code acting as transient malware will not (in most cases).

Well, we will see what's coming up next. I expect some fancy and tricky transient malware approaches for Windows 8 and upcoming OS X versions in the next few years.



RichyWriter - Playing around with HTML5 and ContentEditable

2012/10/14 by Flo

While hacking on a simple content management system I wanted to allow users to format their input in a friendly manner. The oft-featured self-made custom tags like [a], [b], [h1], [img] etc. are not very convenient, because people are used to writing their texts in so-called rich text editors, where the text is formatted exactly like the final result (WYSIWYG).

Fortunately HTML5 supports rich text editing: simply enabling the contenteditable attribute switches the web browser into design mode, where users can change the location of objects such as images or movies, edit the current text, paste new text into the web page, etc. The modified web page can easily be saved as an HTML file, posted as form data or printed.

By enabling contenteditable on a simple <div> tag, using only a few lines of JavaScript and some fancy icon images, I created a simple WYSIWYG HTML rich text editor. Try it yourself at http://bitnuts.de/richywriter.html.

I would like to credit the references whose examples, images, icons and descriptions helped me build RichyWriter.


Hacking with Teensy USB

2012/10/09 by Flo

Teensy is a very small USB-based microcontroller board, capable of implementing geeky projects that interconnect with your computer over USB. The cool thing about Teensy is that all programming is done via the USB port. There is no special development board and no heavy-duty dev environment needed. Just plug in a Mini-B USB cable on your PC or Macintosh and rock that little controller. For more details check out http://www.pjrc.com/teensy/.

But what makes Teensy valuable for a pen tester or security researcher? Well, Teensy brings support for USB HID (Human Interface Device). You can use Teensy as a HID USB keystroke dongle that attacks a computer via the keyboard. This works even if U3 autorun is turned off and mass storage devices cannot be mounted. In such an environment a HID device acts as a keyboard to send "malicious" keystrokes. Such keystrokes could launch shell scripts that download and install another program, etc. But Teensy supports more USB device classes, such as serial interconnects, mouse input etc. I leave it up to you to think about other vectors to test PCs.

As a generic introduction and starting point I recommend Irongeek's "Programmable HID USB Keystroke Dongle: Using the Teensy as a pen testing device" (http://www.irongeek.com/i.php?page=security/programmable-hid-usb-keystroke-dongle) and SamratAshok's kautilya at http://code.google.com/p/kautilya/.


Crawling a web site using wget

2012/08/07 by Flo

In general, a so-called web crawler is a computer program that starts with a given URL (or list of URLs) to visit and then browses the corresponding web content methodically and automatically. In most cases the web content is visited in a recursive manner, meaning you get all (sub-)pages of a given web site. Web crawlers save the content of the visited sites for further processing and analysis (e.g. to build up some kind of searchable index).

From a security analyst's perspective, crawling a web site can yield interesting results, because in most cases you get an in-depth view of a site's structure, its content, the web server in use, the CMS, its authors and also its weaknesses. For example, you can take a deeper look at its JavaScript code to find bugs, or view the HTML source for interesting content like disabled parts of a web page, hidden links etc.

I did a lot of commissioned crawls of web sites in the past, and in most cases the results were really amazing. Weak web servers and content management systems were just the tip of the iceberg. By just crawling the content of a given web site I was able to find test accounts left by the web site's administrator or designer, "funny and stupid" test or dummy pages that could lead to serious reputation loss, and already infected servers acting as malware distributors, just to name a few. In the end I always wondered why the responsible administrator or IT department never did such a basic test themselves. It is so easy to scan your web site, download nearly all of its public content, have a closer look at the crawled data, scan it with an ordinary malware scanner, check for weird content etc. To give you a starting point for your own crawling analysis, the following lines and recommendations give you an introduction to crawling your web site using wget.

When it comes to simplicity, wget is a really nice tool for downloading and even for crawling resources from the internet; for more details see http://www.gnu.org/software/wget/. Its simplicity makes it perfectly suitable for an in-depth analysis. The basic usage is, e.g.:

wget http://bitnuts.de/
This downloads the main (index.html) page of the given domain. To recursively crawl bitnuts.de, call wget with the recursion (-r) option on. Because many servers do not want you to download their entire site, they prevent this by checking the caller's user-agent string or via robots.txt. I recommend changing wget's user-agent string (--user-agent="your user string") and discarding robots limits (-e robots=off). I also recommend using the options that limit the number of retries and the waiting time between retrievals (-t 7 -w 3); this helps make sure you are not added to a blacklist. To make wget use a proxy (e.g. TOR), you must set up an environment variable before using wget. Adjust the environment variable
set http_proxy=http://proxy.myprovider.net:8080
and turn on the --proxy=on feature in wget. It also makes sense to exclude some file types like ISO images, MP3s or other large files to speed up crawling without losing time downloading large files. Just call wget using its -R option.

You might start crawling your web site using the options I recommended, like:
wget -r -l 0 -e robots=off -t 7 -w 3 -R 7z,zip,rar,cab,iso,mp3 --waitretry=10 --random-wait --user-agent="Botzilla/1.0 (+http://botzilla.tld/bot.html)" http://www.your-domain.tld
or
wget -r -l 0 -e robots=off -t 7 -w 3 -R 7z,zip,rar,cab,iso,mp3 --waitretry=10 --random-wait --cookies=on --save-cookies=cookies.txt --proxy=on --user-agent="Botzilla/1.0 (+http://botzilla.tld/bot.html)" http://www.your-domain.tld
Have fun crawling and analyzing your web site. If you have any questions just send me an e-mail -- I appreciate feedback.


Native Win32 GNU Utilities

2012/08/06 by Flo

When it comes to simplicity, the well-known GNU utilities are an awesome set of tools to handle a lot of cool stuff under Unix/Linux-based operating systems. If you want to use some of these small, fast and really cool utilities on Windows, the Native Win32 Ports of the most common GNU utilities are the right collection for you. You can download them at SourceForge, see http://sourceforge.net/projects/unxutils/. The nice thing about the native ports is that you can run them straight from the Windows command line; there is no need to run them in an emulation layer like Cygwin.

One thing to mention: some of these utilities have the same names as native Windows utilities, so you had better rename those files just in case. For example, Windows comes with a program called find.exe that searches for text inside a given file. You should rename the GNU find to gnufind.exe, for example.


How to change your MAC address under Windows 7

2012/07/19 by Flo

Sometimes it is necessary to change your computer's MAC address. For example, access to a given network may be limited to a specific machine (by its MAC), but you want or must use another machine and cannot wait until its MAC is unlocked for the network. Maybe you are just too lazy to change your router's MAC filtering rule when you have a new notebook and want to connect to the LAN/WLAN like with your old machine ;-) Or maybe you want to protect your privacy and are bothered that someone might trace your computer's manufacturer via its MAC address. To change the MAC just follow these steps:

Open regedit and navigate to the following key:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\]
Now look for the network adapter whose MAC you would like to change. The easiest way seems to be to search for the corresponding driver description, which you can find using ipconfig.exe /all. Once you have found the network adapter, create a new string value named "NetworkAddress" and set the new MAC address as its content. Example:
"NetworkAddress"="0000ABAD1DEA"
That's it. Now restart your machine and type ipconfig.exe /all - your MAC address should have changed.

If you are interested in the MAC address vendor codes, also known as OUIs, you can find a current list at http://standards.ieee.org/regauth/oui/oui.txt


Updating HUAWEI Honour (U8860) to Android 4 (ICS)

2012/07/07 by Flo

HUAWEI has built a solid and fast budget smartphone named Honour. The phone ships only with Android Gingerbread, but an update to Android ICS is now (July 2012) available. You can download Android Ice Cream Sandwich at: http://www.huaweidevices.de/telefone/honour.html


My lovely fast booting Windows 7

2012/07/06 by Flo

In the last few months Android and iOS have been heavily hyped for their simplicity and speed. It is always said that smartphones and tablets boot up in less than 30 seconds, getting you into the social web much faster than your ordinary Windows PC.

Well, this ain't true! For years I have tried to install and configure my Windows-based machines so that they boot up fast and give me an overall fast and buttered user experience. My current machine boots Windows 7 in just 23 seconds. By second 24.30 I have fired up Google's Chrome and here it is: my fully enriched web experience.

This is done with just an Intel P9400 at 2 GHz, 2 GB of RAM and a Hitachi HTS ATA HDD. Well, it's all about the right configuration -- there are no extra tools, no tricks and no gimmicks.

You might ask "why is your Windows this fast?!" Well, I just followed the simple and long-lasting aphorism "Keep It Stupid Simple", meaning:

If your inner voice says "But I need this and that?!", just hold on for a minute. You do not need most of that crappy stuff promoted in magazines, software collections etc. Before installing crappy tools and programs, try to find a web-based solution - the cloud is just one click away. And if there is no such solution, check for a USB stick edition of your favorite application.

Now enjoy using a fast booting and reacting operating system supporting your everyday (social) web pleasure.


CSS gray out effect

2012/04/30 by Flo

A lot of modern web 2.0 sites feature fancy-looking gray-out effects when a user opens, e.g., an in-site image gallery or a message box pop-up. To achieve this, just use CSS and the opacity style attribute on a globally defined <div>, in combination with some JavaScript that adjusts the size (width/height) of that <div> box, plus some way to close the grayed-out <div> box.

Check out the following example and its source for more details: gray out site

Include this doctype at the very top of your HTML file (this is needed to switch the browser into the right rendering mode):

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
Insert the following JavaScript code, which is responsible for showing and closing the gray-out box:
<script type="text/javascript">

function get_doc_height()
{
	var body = document.body;

	var html = document.documentElement;

	var height = Math.max(body.scrollHeight, body.offsetHeight, html.clientHeight, html.scrollHeight, html.offsetHeight);

	return height;
}

function gray_out(div_id)
{

	document.getElementById(div_id).style.height = get_doc_height() + 'px';

	document.getElementById(div_id).style.width = '100%';

	document.getElementById(div_id).style.visibility = 'visible';
}

function close_gray_out(div_id)
{
	document.getElementById(div_id).style.visibility = 'hidden';
}

</script>
Last but not least insert the following <div> right after the <body> declaration:
<div id="div_gray_out" style="background-color: #000000; opacity: 0.4; position: absolute; width: 0px; height: 0px; top: 0; left: 0; visibility: hidden;" onclick="close_gray_out('div_gray_out');"></div>
It is important to include it directly after <body> to ensure that the grayed out <div> is layered on top of all other DOM-elements.

To gray out your page just call gray_out('div_gray_out') via JavaScript at some point and the magic takes place :-) Have fun!


Access Log Spam Next Level

2012/04/05 by Flo

While checking my web server's access logs I found yet another funny kind of spam. It is well known that there are web crawlers out there peeking around just to leave a short spam message via the user-agent string. See the following examples:

[03/Apr/2012:21:50:01 +0200] "GET / HTTP/1.1" 200 567 "-" "hot girls are waiting for you at [censored]"

[01/Apr/2012:22:05:23 +0200] "GET / HTTP/1.1" 200 567 "-" "poker and win money www.[censored]"
Such spam is nothing new, but the following seems to be a bit more tricky:
[04/Apr/2012:10:17:14 +0200] "GET / HTTP/1.1" 200 52742 "http://[censored].ru/" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
The user agent's signature suggests that this log entry was generated by the Googlebot while crawling the web, accessing my web site through a referrer from "http://[censored].ru/". Well, this is funny, because if I did statistics on how my web site was accessed, I would probably check "http://[censored].ru/" to see who refers to my site. BANG! The spammer got a hit :-)
It is really amazing that such high effort is put into simple access_log spam. It seems that the outcome is worth the effort.


The Unix Guru's View of Sex

2012/04/04 by Flo

Something to smirk about :-)

#!/bin/ssh
unzip ; strip ; touch ; grep ; finger ; mount ; fsck ; more ; yes ; umount ; sleep


Practical Sandboxing Your Win32 Applications

2012/03/18 by Flo

Securing applications should always be a goal of reliable and good software development. As Google states in the design documents for the Chromium sandbox (see http://www.chromium.org/developers/design-documents/sandbox): "The key to security is understanding: we can only truly secure a system if we fully understand its behaviors with respect to the combination of all possible inputs in all possible states." I fully agree with that and will state it a bit more strictly: to fully secure a piece of software you must know all possible inputs in all possible states, and you have to mitigate the inputs your software does not need to fulfill its actual purpose. And as Adobe says (http://blogs.adobe.com/asset/2010/10/inside-adobe-reader-protected-mode-part-1-design.html): "The challenge is to enable sandboxing while keeping user workflows functional without turning off features users depend on. The ultimate goal is to proactively provide a high level of protection which supplements the mitigation of finding and fixing individual bugs."

Most attacks we are bothered with are ones where a user opens malformed input data (this includes "hidden" open requests triggered by a drive-by) that causes the corresponding application to switch into a not-well-defined state, ending up in an ordinary application crash or, even worse, in the execution of malicious code that infects the user's computer system. We all know that software will never be 100% bug free; thus we (as developers) cannot guarantee that some piece of large software will never come into a sticky state where it executes bootlegged malicious code.

Sandboxing leverages basic OS-provided security mitigations and makes it possible to execute some piece of code in a special container that cannot make persistent changes to the user's system or access resources of the system that are truly out of scope for the software (e.g. reading files that are confidential, using the network, etc.). Sandbox architectures heavily depend on the exact assurances of the underlying operating system and the development environment used.

This whitepaper summarizes well-known stuff about practical real-mode Win32 sandboxing. What is outlined here is no secret, nor is it something totally new. I heavily reference articles by Microsoft, Google and Adobe. This document just summarizes the material I found and might give you a quick starting point for your own projects.

To give you a quick introduction: You have to split your application into at least two processes:

  1. One process is privileged and does NOT process untrusted data,
  2. the second one runs in some kind of sandboxed environment and is responsible for processing untrusted data.
So the first thing you have to do is to specify what threats could harm your application. Simply spoken, using some examples:
  1. What data will be processed? (encoding, format, syntax...)
    ⇒ worst case: possible threats?
  2. How is such data processed? (input source, output source, length check, integrity...)
    ⇒ worst case: possible threats?
  3. How trustworthy is such data?
    ⇒ worst case: possible threats?
  4. What are the maximum access rights needed to process the data?
    ⇒ worst case: possible threats that could abuse those access rights? Impact?
By splitting your application into a trusted and an untrusted "zone", you start securing it at an early stage of software development. Both processes communicate through IPC or other techniques to exchange information between the privileged (trusted) process and the sandboxed (untrusted) process.

To give you a real-world example, think about a browser or portable document reader: documents from some source (an internet download) are loaded and processed by the unprivileged process and, if everything is all right, passed to the privileged process, which renders and displays them.
If a loaded and processed document is malformed, all action takes place in the sandboxed process. If there is an exploit, its code will be executed in the sandboxed environment, where the impact of an attack is mitigated and no harm is done.
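
A classic way to wire the two processes together on Windows is an anonymous pipe. A minimal broker-side sketch in C ("parser.exe" is just a placeholder name for the sandboxed worker):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Inheritable pipe: the child gets the write end as its stdout. */
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };
    HANDLE hRead, hWrite;
    CreatePipe(&hRead, &hWrite, &sa, 0);
    SetHandleInformation(hRead, HANDLE_FLAG_INHERIT, 0); /* keep read end private */

    STARTUPINFOA si = { sizeof(si) };
    si.dwFlags    = STARTF_USESTDHANDLES;
    si.hStdOutput = hWrite;
    si.hStdError  = hWrite;
    si.hStdInput  = GetStdHandle(STD_INPUT_HANDLE);

    PROCESS_INFORMATION pi;
    CreateProcessA("parser.exe", NULL, NULL, NULL,
                   TRUE /* inherit handles */, 0, NULL, NULL, &si, &pi);
    CloseHandle(hWrite); /* the child owns the write end now */

    /* Read the preprocessed result the sandboxed worker sends back. */
    char buf[256];
    DWORD n;
    while (ReadFile(hRead, buf, sizeof(buf) - 1, &n, NULL) && n > 0) {
        buf[n] = '\0';
        printf("worker: %s", buf);
    }
    return 0;
}

In a real design the privileged process would of course validate everything it reads from that pipe -- the channel itself must be treated as untrusted input, too.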

One might argue that such solutions tend to be over-engineered: why not use a fully sandboxed environment like a virtual machine, or some "safe" interpreter-based programming language, in which the whole application could be executed?
Well, that's an argument, and yes, virtualization might secure insecure software, but it will not support the development process of building reliable and secure software. I mean, crappy software remains crappy, even if it is executed in a virtualized sandbox environment.

Recall: we assumed that all data processed in process 2 is untrusted and not trustworthy, so we have to expect attacks within this process. Well, by design we expect getting hit by malware, and this is different. By design of such a solution we expect the sandboxed process to be owned by malware trying to infect, disturb or damage the system. It is important to realize what could happen if malware owns a process in order to find the best possible mitigations against it.

In most cases exploits try to install some kind of malware infecting your system (backdoor, bot, spyware, virus, worm etc.). Such malware could be
  1. resident = surviving a reboot and getting started once again when you boot up your machine or execute an infected application, for example.
  2. transient = malware that is just in place during your session, until all processes/applications are killed or the system is rebooted.
Exploits often use a dropper that downloads the intended malware and executes it. So this gives us a first hint of what we want to protect our sandboxed process from: creating and executing new processes.
In many cases malware also writes into the registry to set autostart options or to manually install a service. In some cases system executables will be overwritten or executables will be copied to your system drive. Some malware just searches for files that might be interesting (password cache files, user logs etc.). This gives us the second threat we want to get rid of: unrestricted access to the file system and the registry.
Malware often tries to spy on the user. We can expect that a trojan might log the keyboard, take screenshots or even try to manipulate other running applications, e.g. to remote control them, send them messages, monitor their output etc. Thus we also want to isolate the sandboxed process from the user's desktop and its windows.
To achieve the goal of protecting a process, Google's sandbox for example uses:
  1. A restricted token
  2. A Job object
  3. An alternate desktop
  4. Integrity Levels and
  5. Hot-patching the Win32-API (e.g. network API, i/o)
Execute a process using a restricted token, assign it to a restricted Job object on an alternate desktop, use a low integrity level for such a process, and hot-patch vital API functions (network, I/O, registry, ...) to build a sandboxed execution environment for the untrusted part of your application that is meant to process data.
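
To make the recipe concrete, here is a rough best-effort sketch of points 1 to 3 using the plain Win32 API — this is not Google's actual code; integrity levels (point 4), API hot-patching (point 5) and all error handling are left out, and "worker.exe" is a placeholder:

#include <windows.h>

int main(void)
{
    HANDLE hToken, hRestricted;
    OpenProcessToken(GetCurrentProcess(),
                     TOKEN_DUPLICATE | TOKEN_ASSIGN_PRIMARY | TOKEN_QUERY,
                     &hToken);

    /* 1. Restricted token: drop all privileges from our own token. */
    CreateRestrictedToken(hToken, DISABLE_MAX_PRIVILEGE,
                          0, NULL, 0, NULL, 0, NULL, &hRestricted);

    /* 2. Job object that forbids spawning further processes. */
    HANDLE hJob = CreateJobObjectW(NULL, NULL);
    JOBOBJECT_BASIC_LIMIT_INFORMATION limits = {0};
    limits.LimitFlags = JOB_OBJECT_LIMIT_ACTIVE_PROCESS;
    limits.ActiveProcessLimit = 1;
    SetInformationJobObject(hJob, JobObjectBasicLimitInformation,
                            &limits, sizeof(limits));

    /* 3. Alternate desktop: no keylogging or window messages
       against the user's real desktop. */
    WCHAR deskName[] = L"sandbox";
    CreateDesktopW(deskName, NULL, NULL, 0, GENERIC_ALL, NULL);

    STARTUPINFOW si = { sizeof(si) };
    si.lpDesktop = deskName;
    PROCESS_INFORMATION pi;

    /* Launch the worker suspended, jail it, then let it run. */
    CreateProcessAsUserW(hRestricted, L"worker.exe", NULL, NULL, NULL,
                         FALSE, CREATE_SUSPENDED, NULL, NULL, &si, &pi);
    AssignProcessToJobObject(hJob, pi.hProcess);
    ResumeThread(pi.hThread);

    WaitForSingleObject(pi.hProcess, INFINITE);
    return 0;
}

Chromium's real sandbox restricts the token much further (deny-only SIDs, low integrity level) and adds UI restrictions to the job object; see the design document linked above for the details.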

Malware running in such an environment is not able to create another process, write to the file system, change registry settings, shatter-attack other applications or remote control them on the fly. An attacker must do a lot of work to bypass such a sandbox and attack your system.



Finding doubles on your disk

2012/03/07 by Flo

Well, we all know the problem: having huge amounts of disk capacity, we collect a lot of files over time until the drive runs out of space. In many cases we also tend to save copies of the same file across different directories. Facing the problem of finding such doubles on my own drive, I wrote a little tool that finds such copies and might help to clean up your drive, too. Just run doubles.exe followed by the drive and path you would like to inspect; the tool scans all directories and their subdirectories and calculates a hash value for each file. By comparing the hash values against each other, the tool finds copies of files that are distributed across different paths.

After scanning your drive you can check the list of doubles, inspect the files and decide what to do. If it is really the same file, you might save disk capacity by deleting such copies.

The tool just uses the standard Win32 API to traverse your drive and its directories, calculating the SHA-1 hash value of each file and comparing it against the list of hash values calculated for the files traversed so far. If this check ends up in a hit, the tool prints out the corresponding filenames.
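
The hashing part takes only a few lines with the Windows CryptoAPI. A minimal sketch of hashing a single file with SHA-1 (not the actual doubles.exe source; link against advapi32):

#include <windows.h>
#include <wincrypt.h>

/* Computes the SHA-1 of a file into hash[20]; returns 1 on success. */
int sha1_file(const char *path, BYTE hash[20])
{
    HCRYPTPROV hProv;
    HCRYPTHASH hHash;
    BYTE buf[4096];
    DWORD read, len = 20;
    int ok = 0;

    HANDLE hFile = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ,
                               NULL, OPEN_EXISTING, 0, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
        return 0;

    CryptAcquireContextA(&hProv, NULL, NULL, PROV_RSA_FULL,
                         CRYPT_VERIFYCONTEXT);
    CryptCreateHash(hProv, CALG_SHA1, 0, 0, &hHash);

    while (ReadFile(hFile, buf, sizeof(buf), &read, NULL) && read > 0)
        CryptHashData(hHash, buf, read, 0);   /* feed the file contents */

    ok = CryptGetHashParam(hHash, HP_HASHVAL, hash, &len, 0);

    CryptDestroyHash(hHash);
    CryptReleaseContext(hProv, 0);
    CloseHandle(hFile);
    return ok;
}

Collect the 20-byte digests in a list keyed by hash, and every collision is a candidate double.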

I highly recommend checking whether the files are equal at the bit level, because an equal hash of two files cannot guarantee that the two files are really identical. So keep that in mind before deleting a suspected double.

Download: http://bitnuts.de/doubles.exe


Secured Virtualized Desktop Environments

2012/01/12 by Flo

Well, just some thoughts about secured virtualized desktop environments: today, information systems in general lack efficient protection against both outsider and insider threats. Specially crafted targeted malware attacks and data leaks are the most visible examples of these threats. IT infrastructures are shared, distributed and heavily heterogeneous. In the past few years many of them have also extended into the cloud. Classic desktop environments just run all applications in the same desktop environment (operating system), meaning that all information is shared within that environment. Bad news if you want to work on, e.g., "top secret" documents while browsing the web, watching videos, listening to music, or reading PDFs or rich text e-mails on the same machine. There is a big trade-off between security and usability. If you work on confidential information, no untrusted application like a web browser should be running that could fall victim to a targeted attack resulting in damage or information disclosure. On the other hand, it seems a bit impractical to use a stripped-down desktop environment today, because it is common to read e-mail, watch enriched web pages, download untrusted PDFs or other multimedia files on the fly while browsing the web, listening to a radio stream, etc.

Using virtualized desktop environments is a powerful way to provide centrally managed, securely isolated working environments fitting your needs in today's business world. Such solutions combine system-wide security policy management with an easy-to-use deployment, configuration and provisioning system for the entire infrastructure, including networks, clients and desktops. The core component of such a solution is some kind of specially protected (hardened) kernel/hypervisor that isolates individual secured, virtualized desktop environments (VM containers) from each other on the same client machine and manages its hardware, including network capabilities. In most cases such a solution fully virtualizes the underlying hardware and builds up an encrypted virtual network over existing (untrusted) wires into a trusted gateway through which all network traffic is routed. This enables you to use a specially crafted virtual machine to work on, e.g., top-secret content, totally isolated from another virtual (and isolated) machine running in parallel with a browser that is able to surf the web without any loss of comfort. Due to strict virtualization, malware infecting, e.g., the VM containing the web browser will not harm the VM running your confidential business desktop environment. If your solution uses self-healing techniques over the network, it is also possible to repair a destroyed or infected VM on the fly while being connected to a broadband network. By encrypting the whole network traffic through a VPN, it is also possible to use insecure WiFi networks without fear. If you strictly divide your virtual machines by your security requirements, it is possible to build up one VM for your business stuff, one for surfing the web, one containing an open VM where you can install and test any software, etc.

In the end your environment not only gets a bit more secure, it is also more convenient to work with. There are still some good ideas and products out there to manage such stuff.