This Week in Security: Dan Kaminsky, Banned from Kernel Development, Ransomware, And The Pentagon’s IPv4 Addresses
This week we’re starting off with a somber note, as Dan Kaminsky passed away at only 42, of diabetic ketoacidosis. Dan made a name for himself by noticing a weakness in DNS response verification that could allow attackers to poison a target DNS resolver’s cache. A theoretical attack was already known, where spoofed DNS responses could collide with outstanding requests, but Time-To-Live values meant that a resolver only sends a given request upstream once every eight hours or so. The breakthrough was realizing that the TTL limitation could be bypassed by requesting bogus subdomains, and aiming the spoofed responses at those requests. This simple technique transformed a theoretical attack that would take 87 years into a very real ten-second attack. Check out the period video after the break, where Dan talked about his efforts in getting the problem fixed.
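If you want a feel for the bogus-subdomain trick, here’s a minimal sketch, with placeholder resolver and domain names. Every query for a never-before-cached name forces the resolver to go back out to the authoritative servers, giving an off-path attacker another chance to race a spoofed response. The spoofing and flooding half of the attack isn’t shown here, and the names are purely illustrative.
# Each random label has never been seen before, so cached answers and their TTLs don't help the resolver.
for i in $(seq 1 100); do
  dig @resolver.example.net "bogus-$RANDOM.victim.example" +short
done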
What may be the most impressive piece of work is how many different vendors and maintainers he convinced to cooperate while keeping the vulnerability quiet. Because of the seriousness of the problem, the decision was made to wait an additional 30 days after the coordinated patch release before publishing details. It took 13 days for the vulnerability to leak, but that still gave the world enough time to prevent major problems.
Throughout his life, Dan always had a “go out and fix it” mentality that was an inspiration to others. Half the reason we have DNSSEC today is Dan’s tenacious behind-the-scenes lobbying. He was a force for good, and a hacker’s hacker.
University of Minnesota Banned From Linux
A story has been percolating for a while, and it finally resulted in a full ban on kernel patches sent from anyone at the University of Minnesota. This extreme step is the result of known-bad patches being sent for inclusion in the kernel. The original idea was to test whether the kernel community would be susceptible to a bad actor sending in a malicious patch disguised as a fix. A paper was written about the test. Their suggested fixes don’t inspire confidence, particularly since they start off by recommending an addition to a project’s Code of Conduct, adding a promise not to push malicious code.
The paper was published back in 2020, but the ban just happened over a week ago. Why? Months after the initial incident was dealt with, suspicious patches started arriving again from the same university. The explanation was that these new patches were being generated by a new code analysis tool, but they seemed to be introducing new bugs instead of actually fixing anything. Greg KH made the call that enough maintainer time had been wasted dealing with the patches, and announced the ban. My opinions are mixed. There is a certain utility in testing the kernel community against this sort of attack, but it also seems to be the right call to drop the hammer on repeated bad behavior.
The End of Emotet
Emotet was described by Europol as “the world’s most dangerous malware”. The US DOJ estimates 1.6 million machines were part of this botnet, but thanks to a global law enforcement action, it is no more. To get a sense of the size of the operation to take down Emotet, note that the Command and Control (C2) servers that had to be taken offline numbered in the hundreds. The law enforcement action neutered the botnet, but that left the problem of the malware itself, still installed and running on machines around the world.
The solution was to host a set of C2 servers that pushed a new module to all the infected machines. On April 25th, this uninstall module removed the autorun hooks and shut down any Emotet processes on each infected machine. That process seems to have gone off without a hitch, but there is still cleanup work going on. Namely, 4.3 million email addresses that had been harvested from infected machines were found on the seized servers. Law enforcement has partnered with our old friend, [Troy Hunt] of HIBP, and those emails are now part of that service’s database.
Ransomware
It’s been a busy few weeks for ransomware news. Apple is dealing with a $50 million ransomware demand. Acer is also facing a similar demand from the same group, REvil. In the case of Apple, the actual breach seems to have been in systems owned by Quanta, one of Apple’s suppliers. It’s been suggested that REvil is using the recent Microsoft Exchange vulnerabilities to gain access to target networks.
QNAP devices have been hit hard, too. We’ve covered some of the vulnerabilities, but ransomware schemes are actively hitting unpatched NAS devices exposed to the internet. Qlocker is one attack that is notable for its simplicity. The attackers simply ran a remote command using 7zip to generate password-protected archives, and it looks like they made almost $300,000 in just a week of malicious business. A hero emerged out of the story, when [Jack Cable] tried to help a friend get data back, and discovered a vulnerability in the criminal payment system. Using the transaction ID, but with a character shifted to uppercase, was enough to confuse the system into giving up the decryption key. About 50 victims got their data back through this trick before it was caught and fixed.
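For context on just how simple Qlocker’s approach was, a password-protected 7-Zip archive takes only a switch or two to create. Something along the lines of the sketch below is plausible, though the exact command line and key handling used by the attackers aren’t public, so treat the path and the password variable here as illustrative assumptions.
# -p sets the archive password, -mhe=on encrypts the file names inside the archive as well.
# RANSOM_KEY is a placeholder for whatever password the attackers generated.
7z a -p"$RANSOM_KEY" -mhe=on locked.7z /share/Public/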
The Mystery of the Pentagon’s IPv4 Addresses
The Washington Post kicked off coverage of a unique developing story. Millions of unused IPv4 addresses assigned to the US DoD were suddenly being routed for the first time. (Beware the paywall, though turning off Javascript temporarily might help you read the linked story.) The story makes much of the timing, as the routing seemed to be published in the few minutes between the swearing-in of one president and the actual end of the previous administration. My first response was annoyance that politics had been injected into what was likely a straightforward security story. It seemed that the Post was trying to find a sensational story to run. But the timing really was odd. So what’s going on here?
So first, let’s review some context. The world is running out of IPv4 addresses, or has already run out, depending on how you count. While every address has technically been allocated, some very large blocks were handed out in the early days of the internet and never fully used. A class A network contains roughly 16 million addresses, they were given out like candy back then, and the DoD holds several. Next, it’s worth noting that Congress has entertained the idea of compelling the DoD to sell off its unused IP addresses. As the supply has run out, unused IPv4 addresses can fetch quite a high price on the open market. The last notable tidbit is that even when nothing is actively using an IP address, there is still network traffic addressed to and from it. Various network worms and scanners are continually probing the entire internet, and DDoS attacks usually use UDP packets with random source addresses. The probes and response traffic can be a valuable source of real-time information on what’s happening on the internet.
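That 16 million figure falls straight out of the prefix length: a legacy class A block is a /8, leaving 24 bits for host addresses.
echo $(( 2 ** 24 ))    # 16777216 addresses in a single /8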
There has been coverage now by more knowledgeable voices, and even a cryptic response from the DoD. The growing consensus seems to be a trio of explanations as to what is going on. The first is the simplest. IP squatting has been known to happen, and if no one is announcing the Border Gateway Protocol (BGP) routes for an IP space, it’s much easier to illicitly make use of those addresses. By announcing the routes itself, the DoD is protecting those IPs from hijacking. Second, it’s much easier to defend against a Congress that wants to strip resources away if you can make a valid claim that you’re actually using those resources. And finally, it does appear that some part of the DoD is doing analytics on the internet background noise and backscatter being received by the otherwise unused network space.
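If you’re curious whether a given block is actually visible in the global routing table, public measurement services will tell you. Here’s a minimal sketch using curl and RIPEstat’s routing-status endpoint (the URL format is from memory, so check the API docs), with 11.0.0.0/8 as an example of a block long registered to the DoD:
# Returns JSON describing whether, and from which origin AS, the prefix is announced in BGP.
curl -s "https://stat.ripe.net/data/routing-status/data.json?resource=11.0.0.0/8"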
There’s one final question: why was the project kicked off in the literal last minutes of an administration? Was this some sort of sordid plot? Not likely. A project of this magnitude would take quite some time to go from initial idea to implementation, and would likely need to be signed off on at a high level of the administration. While I’m not sure why the switch was flipped quite as late as it was, I find it very likely that had it been any later, much of the red tape would have had to be waded through a second time, and the new administration would have needed to sign off on it.
Drupal Core Flaw
We cover flaws in Drupal extensions, WordPress plugins, and Joomla add-ons. What is fairly rare is a serious vulnerability in the core code of one of these projects. That’s not to say it doesn’t happen. Case in point: Drupal has a Cross Site Scripting (XSS) vulnerability in its core code. XSS usually shows up as a way for one user to inject JavaScript into a comment, which is then executed when other users visit the site. This can be as benign as one user suddenly becoming the most friended account on Myspace, or as malicious as a booby-trapped comment that makes the commenter an administrator when the site owner tries to moderate it. There aren’t many details available about this one yet, but make sure your Drupal installs are up to date.
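Checking where a site stands only takes a minute. A quick sketch, assuming the site is managed with Drush and/or Composer; adjust to however your install is actually deployed.
# Shows site details, including the running Drupal core version.
drush status
# On a Composer-managed site, list core packages that have a newer release available.
composer outdated "drupal/core*"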
New Old Linux Backdoor
Last up this week, a bit of Linux malware was discovered, and then was found in a file submission from 2018. Dubbed RotaJakiro, this backdoor uses a handful of techniques to avoid detection, like rotating through encryption methods when contacting command and control servers. It’s a bit unnerving to know that it’s been around so long without getting noticed until now. Thankfully, there are some simple Indicators of Compromise (IoCs), like a pair of file names and four possible MD5 sums for those files. I thought it would be interesting to document the process of checking a system for these files.
My first thought was a simple combination of find to list all the files on the system, and then grep to look for the filename.
sudo find / | grep systemd-daemon
We can make this better by getting rid of the grep command, since find has built-in name matching. For bonus points, we use xargs to go ahead and calculate the md5sum when a matching file is found.
sudo find / -name "gvfsd-helper" -o -name "systemd-daemon" | xargs md5sum
There is a potential problem: if folder names contain spaces or other special characters, xargs would see those characters as breaks between inputs. Thankfully, both find and xargs have a null-delimited mode, where special characters are preserved. The escaped parentheses are needed so that the -print0 flag applies to both of the specified name patterns.
sudo find / \( -name "gvfsd-helper" -o -name "systemd-daemon" \) -print0 | xargs -0 md5sum
I showed this one-liner off, and was immediately reminded that find also has an -exec flag that makes the use of xargs superfluous.
sudo find / \( -name "gvfsd-helper" -o -name "systemd-daemon" \) -exec md5sum {} \;
So this gives us a simple one-liner that will calculate the md5sum of any file with a suspicious file name. If you get a match, compare the output values to the known IoCs. Hopefully none of us will find this particular nasty on our systems, but better to know.
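To take it one step further, the comparison against the published hashes can be automated too. A small sketch, where iocs.txt is a hypothetical file you would populate with the known-bad MD5 sums, one per line:
sudo find / \( -name "gvfsd-helper" -o -name "systemd-daemon" \) -exec md5sum {} \; | grep -F -f iocs.txt && echo "Possible RotaJakiro match!"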