Last modified 2017/04/17 by Flo
Here you will find my posts from 2016.
2016/11/23 by Flo
I was thinking about this article for a while and finally published it now. Well, I was an early adopter of Android phones and in the beginning really liked Android devices. The idea of building a mobile OS on top of a virtual machine is great, because it ensures that Android runs on many different architectures with full support for all the beloved apps, without the need to recompile them for each and every CPU, chipset, sensor, etc. So at first glance it looks great, but if you dig a bit deeper you will encounter some drawbacks. We all know the problem from a high-level perspective: We buy a brand new Android device, are happy with the latest incarnation of the Android OS, and just 3-4 months later Google publishes major (security) updates that we will never receive for our device, unless we bought one labeled Nexus or so. So we have a super-duper Samsung, HTC or Huawei smartphone which around 3 or 4 months later lacks security patches, and which in many cases will see no update at all, or at best a single one shipped by the manufacturer a year later. More than once this was the moment I was frustrated with the various manufacturers and with Google. How can this be?! We are in the 21st century, and my OS cannot update itself and needs to be fully recompiled for the target platform just to ship updates and patches?! What the heck?
Assume the same for Microsoft Windows: each and every manufacturer, be it DELL, Fujitsu, Acer, Lenovo or Asus, would have its own Windows release, packed with the needed drivers, its own version of Explorer (= the launcher in Android) and lots of additional bloatware installed on top of the base OS. Well, from one perspective this could be a smart idea, because the PC is ready to go, equipped with all drivers that are (hopefully) perfectly integrated, plus special tweaks to take full advantage of the system. But keep updates in mind. I am not just speaking of feature updates, I am talking about urgent security patches and updates that fix minor issues in the base system, kernel and APIs. If you have a monolithically packed OS – like Android – you cannot just patch shared libraries or parts of the OS (kernel, tools, services, …). The manufacturer has to rebuild the whole system, pack each and everything into a single fat binary blob, which is then deployed to the machines and burned onto the ROM.
This would be a huge amount of work and would cost a lot of money. The same is true for Android, and this is why most smartphone manufacturers do not update their devices. Development costs are high and competition is fierce with regards to pricing, discount battles, etc.
So it is all the more strange that Google's engineers did not go for a more granular and modular approach. Android's bloated and inflexible architecture is simply bad, weak system design.
If you read books about operating system concepts and design, covering microkernels (e.g. QNX, MINIX) or modular, micro-styled kernels (e.g. Windows), you will find such concepts well argued to be superior to monolithic kernel designs like Linux's.
Besides the kernel design you should also ensure that the operating system's core (drivers, tools, services, APIs) is modular as well, so it is possible to update and patch its parts on the fly. Since Android was built on a Java-based VM, this should have been possible and would have been the way to go. Then Google – like Microsoft – would be able to patch the core system centrally. With Android's current design this is impossible, and this is why we have millions of really insecure and outdated Android devices out there.
I personally cannot understand why Android was designed like this. There would have been opportunities in the beginning. It would have been possible – see, e.g., Kaspersky's microkernel-based OS, which was built entirely from scratch. Why did Google not go for MINIX, why did they not support that project and take it to another level with Android? Instead they chose Linux, and on top of it, an architecture that is badly designed with regards to maintenance and security. This sucks!
2016/08/20 by Flo
I have been analyzing recent crimeware/malware samples for the last couple of weeks now, and would like to share some interesting bits I stumbled upon. In a nutshell: the crimeware scene is getting professional, and their malware uses the latest tricks to camouflage its actions.
One reason for this might be that there are several groups selling crimeware as a professional service, also known as malware as a service, where customers can buy a malware package including updates, fancy dashboards to analyze a campaign's infection rate and outcome, lists of e-mail addresses for sending mails containing the malicious attachments, URLs to hijacked web pages where exploit kits are hosted, etc. While hijacked web servers, exploit kits and hacked CMSes like Joomla and WordPress are commonly known, the malware itself is getting more interesting, as the criminals update the provided and distributed malware several times a day. So they are very aware of the AV industry and always try to stay under the radar of the engines.
While it is no rocket science in general to change the binary footprint of an executable by just rearranging and packing an exe several times, there is more to do to fool and bypass the behavioral engines of modern AV products. It seems that crimeware crews compile their malware several times a day; they change the icons, the executable's general description and also huge parts of the binary itself by modifying most of the code. We have seen such modifications in MS Office macros for years now, where the attacker's code was just around 1-2% of the whole macro's code; the rest was copy-and-pasted macros one can find on web pages and in books about macro programming. It seems that malware – especially ransomware – authors use the same trick. I have found several ransomware samples from different crews that seem to use public domain Visual Basic programs to hide their ransomware code. This public domain code is just there to trick the AV's heuristic scanners and to make the code look like sweet Dorothy who cannot do harm to anybody.
But this is not the whole story. I analyzed how modern ransomware commonly infects Windows PCs these days and was really astonished by the effort. There is ransomware which uses a classic autostart entry to launch a .bat file, which in turn starts Windows' built-in JScript interpreter, which again reads and executes a JScript – but from the registry. This JScript then starts up a PowerShell and sends keystrokes to it, which in turn does an in-memory code injection into a regsvr32.exe started up beforehand. Besides that, they also make use of a PHP-script-interpreting DLL.
It is a lot of work just for the persistent part of a simple piece of ransomware, especially since the encryption algorithm of this crimeware was implemented weakly and could be broken by a simple known-plaintext attack. So I asked myself: if someone has the skills to let his crimeware survive a reboot using such sophisticated techniques and then dramatically fails when it comes to cryptography, something must be wrong. Most modern programming languages come with very powerful cryptographic libraries, so you do not need a degree in math to implement crypto. So what is wrong here?!
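To illustrate why such weak crypto falls to a known-plaintext attack, here is a minimal Python sketch. It assumes the locker XORs files with a short repeating key – a scheme some weak lockers really used; the key value and file contents below are made up for the demonstration:

```python
# Recovering a repeating XOR key via known plaintext.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR data with a repeating key (same operation encrypts and decrypts)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def recover_key(ciphertext: bytes, known_plaintext: bytes, key_len: int) -> bytes:
    """Derive the key by XORing ciphertext against a known plaintext prefix."""
    return bytes(c ^ p for c, p in zip(ciphertext[:key_len], known_plaintext[:key_len]))

# DOCX/ZIP files always start with the magic bytes b"PK\x03\x04" –
# that alone is enough known plaintext to recover a 4-byte key.
key = b"\xde\xad\xbe\xef"            # attacker's secret (unknown to the victim)
plaintext = b"PK\x03\x04...document contents..."
encrypted = xor_bytes(plaintext, key)
recovered = recover_key(encrypted, b"PK\x03\x04", 4)
decrypted = xor_bytes(encrypted, recovered)
```

The point: because document formats have fixed magic headers, a victim always has known plaintext, so the "encryption" collapses without ever touching the attacker's key.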
Well, I have also done some investigation in forums, on pastebin, YouTube etc. and came to the conclusion that the authors might not be as clever as they make us believe. From what I have seen, there is a huge amount of copy and paste in this scene. There are a lot of copycats just re-using code which – for the infection part and the tricks to get persistent – is not of bad quality. These guys know how to camouflage their doings, how to keep silent, how to change the binary to dodge AV heuristic alerts and so on. They also seem to know how to find vulnerable web pages running outdated CMSes (well, it is quite simple: write a stupid Python web crawler and simply analyze what a web server and the returned content tell you about the underlying system). However, there seems to be a lack of knowledge about cryptography, and this lack also shows in the sample code shared and used by the criminals. So they use simple XOR encryption, or RSA with weak key lengths or wrong parameters, making it relatively easy to crack the encryption.
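How trivial such CMS fingerprinting is can be sketched in a few lines of Python. This is my own illustration of the idea, not anyone's actual crawler; it just reads the response headers and the HTML generator meta tag many CMS installations happily expose:

```python
# Sketch: fingerprint a web page's CMS from headers and meta tags.

import re
from urllib.request import urlopen

GENERATOR_RE = re.compile(
    r'<meta\s+name=["\']generator["\']\s+content=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

def fingerprint_html(html: str):
    """Return the CMS generator string if the page exposes one, else None."""
    match = GENERATOR_RE.search(html)
    return match.group(1) if match else None

def fingerprint_url(url: str) -> dict:
    """Fetch a page and collect CMS hints from headers and body."""
    with urlopen(url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")
        return {
            "server": response.headers.get("Server"),
            "x_powered_by": response.headers.get("X-Powered-By"),
            "generator": fingerprint_html(html),  # e.g. "WordPress 4.4.2"
        }
```

Feed that a list of URLs and you instantly know which sites run an old WordPress or Joomla – which is exactly why keeping a CMS updated matters so much.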
But there is also very professional crimeware which makes use of quite tricky infection and persistence techniques, and the guys behind it also seem to know a lot about cryptography. I have seen samples which use EC and RSA cryptography with the recommended parameters and key lengths. Additionally, they did not make the foolish mistake of writing their own crypto code; they use well-known open source libraries and seem to use them properly, so I think there is little you can do if such a beast has taken your digital jewels.
If such code finds its way into (semi-)public crimeware forums, we might have a problem in the near future, especially if that code becomes the base for crimeware sold as a service. Once a good foundation is set, such crews can evolve their malware engines, get better and better, and it gets harder to defeat them. From my experience I can tell that it takes between 12 and 24 hours until a brand new malware sample gets caught by ordinary AV. A lot of time for malware to spread and guzzle down your system.
In the end it is not as bad as it might sound. Just
2016/04/19 by Flo
In the last couple of months a massive rise of ransomware (aka cryptolockers) has been registered. Reading newspapers and dedicated IT press releases in Germany suggests that this kind of malware is spreading across Europe and other western countries. It seems that cyber crooks make a lot of easy money by blackmailing users, so they have switched from "classical" banking Trojans to ransomware. I had some discussions about malware trends with readers of my private blog, with customers and also with friends, and decided to write this blog post to share my thoughts.
We are living in a highly connected world; in daily business things must be done quickly, so often there is not much time to act wisely, carefully and with strict IT security in mind. But that is a mortal sin in IT sec, and often the cause of malware infections. What to do?! Well, if you analyze common attacks and attack vectors you will find that most of them are executed through exploits – infecting victims using an exploit (kit) for a browser or a browser plug-in like Adobe Flash, PDF or Java – or through social engineering tricks that make users do something "odd", like opening an executable attached to an e-mail claiming to be an invoice, payment reminder, lascivious image or video... You know these zip archives containing something like "urgent_invoice.pdf.scr", "horny.zip.exe", "important.doc.js" etc.
Unfortunately the main targets of such attacks are users of the Microsoft Windows operating system family. Users of Mac OS X or Linux can count themselves lucky, because cyber crooks do not take the trouble of developing malware for such platforms; the market share and final outcome of a campaign would not be worth it. Developing and running a malware campaign against computers with Mac OS X or Linux would most likely not pay off in the end. That is why you will often read that using a Mac or Linux PC results in safer browsing and no (malware) problems. Well, that is somehow right, but not the whole story. If you work with Mac and Linux PCs for a long time you will also encounter problems and issues, and you will also see malware that targets such systems. But the probability of getting hit is lower. So using a Mac or Linux-based PC does make computing safer in some fashion.
How could we profit from that? Well, if you have enough money you could buy yourself a Mac and use it as your one and only workhorse, or you could just remove Windows from your PC and install Linux. It sounds easy, but it is not, and Windows users know that there are reasons to use just Windows – not only because of Microsoft Office, but also because a lot of companies run dedicated applications that are only available for the Microsoft Windows platform. Just switching to Mac OS X or Linux is not the road to go then. The same holds for private users. There are a lot of peripherals like printers, cameras, home automation etc. that cannot simply be run with Mac OS X or Linux, and most people are not IT professionals; they have neither the time nor the passion to learn how to use and hack on Mac OS X and Linux just to get their beloved camera, TV dongle or MIDI synthesizer working. They just want to use the PC as is, without needing a degree in CS. But they still want to be – and should be – safe somehow. What an irony!
So recall the most common ways computers get infected: surfing the web and getting hit by an exploit, or a user being tricked into opening a malicious file (attachment, download). If you look at how security-sensitive companies secure their users, you will often see that they separate networks; more precisely, they limit access to digital content from external parties like the Internet or USB drives. One way to achieve this is to have two physical machines, or a proxy that divides the internal infrastructure from the external world. Each and every (insecure) external item must pass through a dedicated machine that only processes external data and information. Here the user is not able to process such information on the main IT systems. It is split off, hence an attacker can only attack that kind of proxy and not the main IT. The problem is that such an approach is neither very comfortable nor handy. We all need to answer e-mails, surf the web and access information quickly.
To make such a scenario more comfortable, virtualization comes into play. A virtual machine enables you to run different operating systems in parallel; switching from one virtual machine to another is just a matter of one or two clicks. You can build one virtual machine running your primary Windows system and a secondary one running Linux for browsing the web and opening files from external sources. You could also run such sessions on a remote server and just have a remote (VNC/Remote Desktop) connection to them, also called a Remote-Controlled Browser System. This is what big companies often do: they provide remote (e.g. Citrix) sessions for browsing and opening document files. Thus a user does not open potentially dangerous (external) content on his/her computer; instead such content is opened on a remote or virtual system.
However, the key point is that the primary system does not open potentially dangerous content from the Internet, external drives etc., so the system where you process personal and critical information does not come into direct contact with dangerous content; hence the risk of malware infecting your beloved digital gems (personal documents, photos, music, etc.) is dramatically lowered. If the second system gets hit by malware, only that system gets infected. Because such so-called surf stations or remote desktops are usually wiped (reset) cyclically and are also hardened, it is difficult for attackers to gain full and persistent access to them and to move from such a system to the whole network, the infrastructure or the primary system. It is still possible – do not get me wrong – but it is harder to achieve.
A reasonably low-cost implementation of such an approach is to install some kind of virtualization host like VirtualBox or VMware Player and then create a virtual machine for a Linux-based Segregation System. I use the term segregation here because this system should act as a sluice or floodgate: you open potentially insecure content like web pages, e-mail correspondence and untrusted document files on that segregated system. You should carefully check the content and only transfer (copy) to your primary machine what is really needed, so content goes through a sluice. Ensure that you always keep the Segregation System updated, still scan the content with anti-virus, and harden the system. A simple way to harden it is to discard any changes to the VM after shutting it down. Virtualization hosts often support snapshot functionality; it is like freezing the operating system in a defined state, so this state is kept regardless of what you do with the VM. If you crash it, delete files or infect the system with malware, it will always go back to the snapshot you took before. This is a smooth way to keep such a system's integrity, no matter what you do.
If you do not like virtualization, what else could be done for SOHO?! Well, a more or less inexpensive solution is to turn a Raspberry Pi into a Segregation System. The new Raspberry Pi 3 should have enough power for surfing the web and opening PDFs, ZIPs and typical document files. There are plenty of tutorials on the web (and also on raspberrypi.org) on how to configure a Raspberry Pi for basic computing, and on how to access the Pi remotely via SSH, and especially via VNC or Remote Desktop. So you can integrate it more or less seamlessly into your workflow and use it to surf the web and open untrusted documents. It is also a great way to clean third-party document files: just open PDFs or document files with a Linux tool, then convert them to plain data, which often strips malicious behavior from a file. Then you can use the file more safely on your primary system.
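The "convert to plain data" step can be sketched as a tiny Python dispatcher running on the Pi. This is my own illustration, not a fixed recipe: it assumes pdftotext (from poppler-utils) and libreoffice are installed, and the helper and mapping names are made up:

```python
# Route each untrusted file to a converter command that re-renders the
# content and drops any active parts (macros, embedded scripts).

from pathlib import Path

# extension -> function building the converter command line
CONVERTERS = {
    ".pdf": lambda src: ["pdftotext", str(src), str(src.with_suffix(".txt"))],
    ".docx": lambda src: ["libreoffice", "--headless", "--convert-to", "txt",
                          "--outdir", str(src.parent), str(src)],
}

def converter_command(path: Path):
    """Return the command that flattens the file, or None if unsupported."""
    builder = CONVERTERS.get(path.suffix.lower())
    return builder(path) if builder else None
```

Run the returned command via subprocess.run() on the Pi, then copy only the flat .txt output over to the primary machine – never the original file.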
I personally like the Raspberry Pi and use it as a Segregation System, and also for tunneling a VPN over insecure (open, foreign) WiFi connections. It is not too expensive, does not draw much power and works great for private and small office usage. If you do not like VNC/Remote Desktop you can also use a KVM switch. A KVM switch (KVM being an abbreviation for "keyboard, video and mouse") is a hardware device that allows you to control multiple computers from one keyboard, mouse and display. So you can switch between your primary PC and the Raspberry Pi while using the same peripheral devices.
Using several (virtual) computers for daily computing operations might sound odd, but it is common practice in professional environments. I have seen it in governmental/military environments, and also at consulting, financial and law companies that deal with confidential, classified, valuable or risky data/information.
Unfortunately it is not widespread, and many companies do not do it, as you will notice if you carefully read and analyze recent IT breaches. The same is true for application whitelisting: if you talk to people they all know about it, but only a few use it, although it provides so much more security.
I highly encourage you to check out this option, also for private usage! Of course it is awkward and users have to readapt, but it is worth the time and money you spend, because getting hacked, with all its consequences, costs far more than deploying and using a dedicated Segregation System. And do not forget about backups, anti-EXE/application whitelisting and anti-exploit solutions for your primary computing system as well.
2016/03/26 by Flo
I've just stumbled upon a new cryptolocker that encrypts your data extremely fast by scanning your local drives (c: - z:) and encrypting only the very first 2 KB of typical document files. This makes the malware very efficient. The cryptolocker arrives as a typical doc.js file which – when opened – downloads some executables and starts them. The JScript file also writes a .cmd file and a ransom (how to decrypt) .txt file into the %temp% folder. The batch script simply walks through all your local drives (c: - z:) and passes any interesting document file to the downloaded executable, which in turn encrypts only the first 2 KB of the file. So not all of your data is encrypted, but in most cases enough information is bricked that the file can no longer be opened by its dedicated application. The cyber crooks also add some registry keys to survive the next reboot and to show the ransom message telling you how to pay to get your files back. My quick analysis showed that the malware executables are served through hacked Joomla! CMS web pages. Well, as always, a lot of people think they need a CMS but never keep such systems up to date. Unfortunately such CMS pages often get hijacked and end up as malware distribution sites.
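A simple way to hunt for files hit by this partial-encryption trick is an entropy check: plain document headers (PK.., %PDF-, the OLE2 magic) have low byte entropy, while properly encrypted data approaches 8 bits per byte. Here is a small sketch of mine; the 7.0 cutoff is a rough illustrative threshold, not a calibrated value:

```python
# Flag files whose first 2 KB look encrypted (near-random entropy).

import math
from collections import Counter

def entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_encrypted(header: bytes, threshold: float = 7.0) -> bool:
    """Heuristic: a document whose first 2 KB has near-random entropy
    has probably been overwritten by a cryptolocker."""
    return entropy(header[:2048]) > threshold
```

Walking a drive with this and flagging high-entropy "documents" quickly shows which files the locker touched, even though the rest of each file is intact.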
I did a quick run on a test machine with my analysis drivers on; you can download the raw log files ransomAnalysis.zip here. If you have any questions, please feel free to contact me. Also do not miss the following video where I show how to reveal this kind of .js-based malware. Enjoy and Happy Easter Holidays!
2016/02/11 by Flo
Currently I am very busy analyzing and testing ransomware with my brand new kernel driver Pumpernickel. This kernel mode driver enables you to sandbox (limit) write attempts of other processes to certain locations, which can help to analyze malware or to protect against such scrap. For example, you can restrict notepad.exe such that it can only write text files to some whitelisted paths.
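The policy idea behind that can be sketched in a few lines of Python – this is only my own illustration of the whitelist matching, not the driver's code, and the example paths are made up:

```python
# Concept sketch: a write is allowed only if the target path lies
# under a whitelisted directory prefix for that process.

from pathlib import PureWindowsPath

WRITE_WHITELIST = {
    "notepad.exe": [PureWindowsPath(r"C:\Users\Flo\Documents\notes")],
}

def write_allowed(process: str, target: str) -> bool:
    """Allow a write only if the target lies under a whitelisted prefix."""
    prefixes = WRITE_WHITELIST.get(process.lower(), [])
    path = PureWindowsPath(target)
    return any(prefix == path or prefix in path.parents for prefix in prefixes)
```

A real driver enforces this in kernel mode on every write request, of course; the sketch just shows the per-process prefix matching.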
While Pumpernickel was doing yet another run I stumbled across this one:
Well, this is definitely the height of impudence. This cryptolocker is delivered as a RAR-SFX executable and extracts itself into a folder named Cryptolocker. This is really weird and of course bold. My conclusion: these guys feel so safe that they do not care much about hiding. First, delivering such malware in the form of a plain executable (no zip or exploit), and second, internally naming it cryptolocker, tells stories about the whole industry here. Nothing more to say. Take care!
2016/01/09 by Flo
While analyzing some cryptolockers (ransomware) during my Christmas holidays I stumbled upon some that used regsvr32.exe to bypass application whitelisting solutions. I started experimenting and can confirm that you can easily misuse regsvr32.exe to load and execute dynamic link libraries. Well, if you have set up your whitelist properly (= also block DLLs, OCXs, SYS files etc.) your system is not in danger, especially if you block any executable loading from user folders. But we all know that most anti-exe solutions on the market lack DLL and OCX blocking in their default configuration – same for GPOs – so you should be aware and cross-check.
If you cannot blacklist all of your user folders and especially DLLs in such folders, you should at least blacklist regsvr32.exe. But there are also a lot of cryptolockers out there that misuse other preinstalled Windows tools like wscript.exe (and others like the .NET compilers), so I also heavily recommend that you blacklist scripting hosts and generic Windows admin tool executables, too (see my post from 2015/12/07). Personally I do not need most of these tools for daily operations on my PC – and I bet most ordinary users do not use them daily either. So, for your own security, block and blacklist as many as you can to avoid these ransom- and cryptolocker attacks that seem to have spread like every year's winter flu over the last couple of months (it also helps to mitigate other threats like most browser exploits' exe droppers etc.).
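As a tiny helper along those lines – a sketch of my own, not any product's feature – you could first enumerate which commonly abused tools are even present before deciding what to blacklist. The tool list reflects the ones mentioned in these posts plus common companions:

```python
# Audit which abusable Windows tools ("LOLBins") exist in a directory.

from pathlib import Path

ABUSABLE_TOOLS = {"regsvr32.exe", "wscript.exe", "cscript.exe",
                  "mshta.exe", "rundll32.exe"}

def audit_lolbins(system_dir: str) -> set:
    """Return the abusable tools found in system_dir
    (on a real box, pass 'C:/Windows/System32')."""
    base = Path(system_dir)
    return {name for name in ABUSABLE_TOOLS if (base / name).is_file()}
```

Whatever this finds and you do not need for daily work is a candidate for your anti-exe blacklist.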
By the way, check out Excubits' Beta Camp, especially my new project called Pumpernickel. This driver not only enables you to track down what an executable will save to your disk; this pure kernel mode driver is also able to block such attempts. Everything is logged, so you can easily use it for forensics...