Securing applications should always be a goal of reliable, good software development. As Google states in the design documents for the Chromium Sandbox (see http://www.chromium.org/developers/design-documents/sandbox): "The key to security is understanding: we can only truly secure a system if we fully understand its behaviors with respect to the combination of all possible inputs in all possible states." I fully agree with that and would state it a bit more strictly: to fully secure a piece of software you must know all possible inputs in all possible states, and you have to mitigate against all inputs your software does not need to fulfill its actual operational purpose. And as Adobe puts it (http://blogs.adobe.com/asset/2010/10/inside-adobe-reader-protected-mode-part-1-design.html): "The challenge is to enable sandboxing while keeping user workflows functional without turning off features users depend on. The ultimate goal is to proactively provide a high level of protection which supplements the mitigation of finding and fixing individual bugs."
Most attacks we have to deal with today work like this: a user opens malformed input data (this includes "hidden" open requests triggered by a drive-by download), which causes the corresponding application to switch into an ill-defined state, ending up in an ordinary application crash or, even worse, in the execution of malicious code that infects the user's computer system. We all know that software will never be 100% bug-free, so we (as developers) cannot guarantee that a large piece of software will never end up in a corrupted state where it executes smuggled-in malicious code.
Sandboxing leverages basic OS-provided security mitigations and makes it possible to execute a piece of code in a special kind of container that cannot make persistent changes to the user's system or access resources that are out of scope for the software (e.g. reading confidential files, using the network, etc.). Sandbox architectures depend heavily on the exact assurances of the underlying operating system and the development environment in use.
This whitepaper summarizes well-known material about practical, real-world Win32 sandboxing. What is outlined here is no secret, nor is it something totally new. I draw heavily on articles by Microsoft, Google and Adobe. This document simply summarizes what I found and might give you a quick starting point for your own projects.
To give you a quick introduction: you have to split your application into at least two processes: a privileged (trusted) process that keeps full user rights, and an unprivileged (untrusted) sandboxed process that handles all untrusted data.
So the first thing you have to do is specify what threats could harm your application; simply spoken, you think through some concrete examples of what an attacker could do, as we will below.
By splitting your application into a trusted and an untrusted "zone" you start securing it at an early stage of development. Both processes communicate through IPC or other techniques to exchange information between the privileged (trusted) process and the sandboxed (untrusted) process.
To give you a real-world example, think about a browser or a portable document reader: the privileged process renders and displays preprocessed documents from some source (e.g. an internet download). Such documents are loaded and processed by the unprivileged process and, if everything was all right, are passed to the privileged one.
If a loaded and processed document was malformed, all the action takes place in the sandboxed process. If there was an exploit, its code is executed in the sandboxed environment, where the impact of an attack is mitigated and no harm is done.
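To make the split concrete, here is a minimal, heavily simplified sketch of the privileged (broker) side. The worker executable name parser.exe and its input file are hypothetical, and all error handling is omitted; the point is only the shape of the design: the worker gets one pipe end as its stdout, and the broker only ever reads preprocessed data back across that boundary.

```c
/* Broker sketch: spawn the untrusted worker, read its output over a pipe.
   "parser.exe" and "untrusted_input.pdf" are hypothetical placeholders. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };   /* inheritable handles */
    HANDLE readEnd, writeEnd;
    CreatePipe(&readEnd, &writeEnd, &sa, 0);
    SetHandleInformation(readEnd, HANDLE_FLAG_INHERIT, 0); /* keep our end private */

    STARTUPINFO si = { sizeof(si) };
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdOutput = writeEnd;                  /* worker writes its results here */
    si.hStdError  = GetStdHandle(STD_ERROR_HANDLE);
    PROCESS_INFORMATION pi;

    /* In the full design this launch would use a restricted token instead
       of the broker's own token. */
    char cmd[] = "parser.exe untrusted_input.pdf";
    CreateProcess(NULL, cmd, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi);
    CloseHandle(writeEnd);                     /* broker keeps only the read end */

    char buf[4096];
    DWORD got;
    while (ReadFile(readEnd, buf, sizeof(buf), &got, NULL) && got > 0) {
        /* Only preprocessed data crosses this boundary; the broker
           validates it and then renders it with full privileges. */
        fwrite(buf, 1, got, stdout);
    }
    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hProcess); CloseHandle(pi.hThread); CloseHandle(readEnd);
    return 0;
}
```

Note that the pipe is the single, narrow channel between the two zones; everything arriving on it must still be treated as untrusted until validated.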
One might argue that such solutions tend to be over-engineered: why not use a fully sandboxed environment like a virtual machine, or some "safe" interpreter-based programming language, in which the whole application could be executed?
Well, that is an argument, and yes, virtualizing might secure insecure software, but it does not support the development process of building reliable and secure software. Crappy software remains crappy, even if it is executed in a virtualized sandbox environment.
Recall that we assumed all data processed in the second, sandboxed process is untrusted, so we have to expect attacks within this process. By design we expect to get hit by malware here, and that is what makes this process different: we expect the sandboxed process to be owned by malware trying to infect, disturb or damage the system. It is important to realize what could happen once malware owns a process, in order to find the best possible mitigations to defend against it.
In most cases exploits try to install some kind of malware that infects your system: a backdoor, a bot, spyware, a virus, a worm, etc.
Exploits often use a dropper that downloads the intended malware and executes it. This gives us a first hint of what we want to protect our sandboxed process from: it should not be able to write executables to disk or spawn new processes.
In many cases malware also writes to the registry to set autostart options or to manually install a service. In some cases system executables are overwritten, or executables are copied to your system drive. Some malware simply searches for files that might be interesting (password cache files, user logs, etc.). This gives us the second threat we want to get rid of: uncontrolled access to the file system and the registry.
Malware often tries to spy on the user. We can expect that a trojan might log the keyboard, take screenshots or even try to manipulate other running applications, e.g. to remote-control them, send messages, monitor their output, etc. Thus we also want to isolate the sandboxed process from the user's desktop, input and other applications.
To achieve the goal of protecting a process, Google's sandbox, for example, combines several OS-provided mechanisms: execute the process using a restricted token, assign it to a restrictive job object on an alternate desktop, run it at a low integrity level, and hot-patch vital API functions (network, I/O, registry, ...) to build a sandboxed execution environment for the untrusted part of your application that is meant to process data.
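The following sketch shows, under heavy simplification, how these building blocks fit together on Win32. The target name sandbox_target.exe and the desktop name are hypothetical, error handling is omitted, and a production sandbox (e.g. Chromium's) does considerably more, such as starting the target with a more capable token and dropping rights after startup.

```c
/* Hedged sketch: launch a target with a restricted token, low integrity
   level, alternate desktop and a restrictive job object. Link: Advapi32. */
#include <windows.h>
#include <sddl.h>

int main(void)
{
    HANDLE procToken, restricted;
    OpenProcessToken(GetCurrentProcess(),
                     TOKEN_DUPLICATE | TOKEN_ASSIGN_PRIMARY |
                     TOKEN_QUERY | TOKEN_ADJUST_DEFAULT, &procToken);

    /* Drop all privileges from the new primary token. */
    CreateRestrictedToken(procToken, DISABLE_MAX_PRIVILEGE,
                          0, NULL, 0, NULL, 0, NULL, &restricted);

    /* Tag the token with the "low" integrity level (SID S-1-16-4096). */
    PSID lowSid;
    ConvertStringSidToSid("S-1-16-4096", &lowSid);
    TOKEN_MANDATORY_LABEL label = { { lowSid, SE_GROUP_INTEGRITY } };
    SetTokenInformation(restricted, TokenIntegrityLevel,
                        &label, sizeof(label) + GetLengthSid(lowSid));

    /* Create the target suspended so we can jail it before it runs. */
    STARTUPINFO si = { sizeof(si) };
    si.lpDesktop = "sandbox_desktop";   /* alternate desktop; must be created first */
    PROCESS_INFORMATION pi;
    char cmd[] = "sandbox_target.exe";  /* hypothetical target */
    CreateProcessAsUser(restricted, NULL, cmd, NULL, NULL, FALSE,
                        CREATE_SUSPENDED, NULL, NULL, &si, &pi);

    /* Job object: one active process and restricted UI access. */
    HANDLE job = CreateJobObject(NULL, NULL);
    JOBOBJECT_BASIC_LIMIT_INFORMATION basic = { 0 };
    basic.LimitFlags = JOB_OBJECT_LIMIT_ACTIVE_PROCESS;
    basic.ActiveProcessLimit = 1;       /* no child processes */
    SetInformationJobObject(job, JobObjectBasicLimitInformation,
                            &basic, sizeof(basic));
    JOBOBJECT_BASIC_UI_RESTRICTIONS ui = { JOB_OBJECT_UILIMIT_HANDLES |
                                           JOB_OBJECT_UILIMIT_READCLIPBOARD |
                                           JOB_OBJECT_UILIMIT_WRITECLIPBOARD };
    SetInformationJobObject(job, JobObjectBasicUIRestrictions, &ui, sizeof(ui));
    AssignProcessToJobObject(job, pi.hProcess);

    ResumeThread(pi.hThread);           /* target starts already sandboxed */
    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hProcess); CloseHandle(pi.hThread);
    CloseHandle(job); CloseHandle(restricted); CloseHandle(procToken);
    LocalFree(lowSid);
    return 0;
}
```

Creating the target suspended is the important design choice here: the restrictions must all be in place before the first instruction of untrusted code ever executes.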
Malware running in such an environment is not able to create another process, write to the file system, change registry settings, shatter-attack other applications or remote-control them just on the fly. An attacker has to do a lot of work to bypass such a sandbox and attack your system.
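The hot-patching part deserves a quick illustration. The sketch below is my own simplified example of one interception technique, patching the import address table (IAT) of the current module; it is not Google's actual interception code, which is considerably more robust. It redirects kernel32's CreateFileA to a stub that denies the call; in a real sandbox the stub would forward the request over IPC to the privileged process for a policy decision. The name DeniedCreateFileA is hypothetical.

```c
/* Sketch: redirect this module's IAT entry for kernel32!CreateFileA
   to a stub that denies file creation inside the sandbox. */
#include <windows.h>
#include <string.h>

static HANDLE WINAPI DeniedCreateFileA(LPCSTR name, DWORD access, DWORD share,
        LPSECURITY_ATTRIBUTES sa, DWORD disp, DWORD flags, HANDLE tmpl)
{
    /* A real sandbox would ask the broker over IPC whether "name"
       is allowed; here we deny everything unconditionally. */
    SetLastError(ERROR_ACCESS_DENIED);
    return INVALID_HANDLE_VALUE;
}

static void PatchIat(HMODULE mod, const char *dllName,
                     const char *funcName, void *hook)
{
    BYTE *base = (BYTE *)mod;
    IMAGE_NT_HEADERS *nt =
        (IMAGE_NT_HEADERS *)(base + ((IMAGE_DOS_HEADER *)base)->e_lfanew);
    IMAGE_DATA_DIRECTORY dir =
        nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT];
    IMAGE_IMPORT_DESCRIPTOR *imp =
        (IMAGE_IMPORT_DESCRIPTOR *)(base + dir.VirtualAddress);
    void *target = (void *)GetProcAddress(GetModuleHandleA(dllName), funcName);

    for (; imp->Name; imp++) {
        if (_stricmp((char *)(base + imp->Name), dllName) != 0)
            continue;                       /* not the DLL we are looking for */
        IMAGE_THUNK_DATA *thunk = (IMAGE_THUNK_DATA *)(base + imp->FirstThunk);
        for (; thunk->u1.Function; thunk++) {
            if ((void *)thunk->u1.Function != target)
                continue;
            DWORD old;                      /* IAT pages are read-only by default */
            VirtualProtect(&thunk->u1.Function, sizeof(void *),
                           PAGE_READWRITE, &old);
            thunk->u1.Function = (ULONG_PTR)hook;
            VirtualProtect(&thunk->u1.Function, sizeof(void *), old, &old);
        }
    }
}

/* Usage (inside the sandboxed process, before untrusted code runs):
   PatchIat(GetModuleHandle(NULL), "kernel32.dll", "CreateFileA",
            (void *)DeniedCreateFileA); */
```

Keep in mind that IAT patching is a policy aid, not a security boundary: code that resolves APIs dynamically or issues system calls directly bypasses it, which is why it must always be combined with the token, job and integrity restrictions above.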