> 7. The stolen info is sent out by infecting USB sticks that are used in an infected machine and copying an encrypted SQLite database to the sticks, to be sent when they are used outside of the closed environment. This way data can be exfiltrated even from a high-security environment with no network connectivity.
> "Agent.BTZ did something like this already in 2008. Flame is lame."
Flame's approach is different and more impressive. Agent.BTZ copied itself and used an easy-to-discover autorun.inf file in the root directory of attached disks or network shares. Flame exports its database by encrypting it and then writing it to the USB disk as a file called '.' (just a period, meaning 'current directory').
When you run a directory listing you can't see it. You can't open it. The Windows API doesn't allow you to create a file with that name, so Flame accomplishes this by opening the disk as a raw device and writing directly to the FAT partition. Impressive, right?
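To make the trick concrete, here is a minimal sketch (Python) of what such a raw short-name directory entry looks like. The field layout is the standard FAT one; the cluster and size values are placeholders I made up, and this illustrates the on-disk format only, not Flame's actual code:

    # Build the 32-byte FAT short (8.3) directory entry for a file whose name
    # field is just '.' -- a name the normal Windows file APIs refuse to create.
    import struct

    def fat_short_entry(name11, attr, first_cluster, size):
        assert len(name11) == 11                # 8.3 name, space-padded
        return struct.pack(
            "<11s B B B H H H H H H H I",
            name11,                             # bytes 0-10: name field
            attr,                               # byte 11: 0x20 = archive
            0,                                  # byte 12: NT reserved
            0,                                  # byte 13: creation time, tenths
            0, 0,                               # creation time / date
            0,                                  # last-access date
            (first_cluster >> 16) & 0xFFFF,     # first cluster, high word
            0, 0,                               # last-write time / date
            first_cluster & 0xFFFF,             # first cluster, low word
            size,                               # file size in bytes
        )

    # '.' padded to 11 bytes gives exactly the \x2e\x20...\x20 name field
    # mentioned later in this thread; cluster 3 and 4096 bytes are arbitrary.
    entry = fat_short_entry(b".".ljust(11, b" "), 0x20, 3, 4096)
    assert len(entry) == 32

Getting that entry onto the stick still means opening the volume raw (CreateFile on \\.\E: or similar, with the volume locked or dismounted on modern Windows) and writing it into a directory cluster yourself, which is presumably why Flame carries its own FAT code.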
While many of these individual features are unimpressive on their own, the sum of the parts, combined with the collision attack on the certificate signature, is very impressive.
As for the main point of Mikko's post, I have never understood why so many folks in the netsec industry are arrogantly pessimistic about the innovation of others. I found Flame jaw-droppingly amazing.
Nobody knew about it for years, yet it was derided when discovered and documented.
> As for the main point of Mikko's post, I have never understood why so many folks in the netsec industry are arrogantly pessimistic about the innovation of others. I found Flame jaw-droppingly amazing.
Infosec is an inherently pessimistic enterprise, although spending time here makes me think it's not a perspective limited to security.
Just look at how almost every post here ends up littered with comments like "This isn't new. My XYZ already does all of this." People like to feel superior (it helps reinforce individual nerd exceptionalism).
> This isn't new. My XYZ already does all of this.
This is exactly the attitude used by some negative-minded, mediocre people to demotivate free thinkers. To be fair, to most of them it probably genuinely doesn't seem new, because their grey cells lack the sophistication required to understand the difference.
I think it is more a case of the public information security field moving from a small, highly technical niche of hackers (example list[1]) to a mainstream career path that openly trains and employs thousands of people who may not have a traditional “hacker mind-set”[2][3].
BIFF[4] is still remembered by many within the early niche group of hackers. It is likely that similar psychology has been driving the ridicule at hacker conferences in recent years towards mainstream reporting on “cyber” topics and the use of buzz phrases such as “Advanced Persistent Threat”.
[1] https://en.wikipedia.org/wiki/Cypherpunk#Noteworthy_cypherpu...
[2] http://www.catb.org/~esr/faqs/hacker-howto.html
[3] https://www.schneier.com/blog/archives/2006/09/what_is_a_hac...
[4] https://en.wikipedia.org/wiki/BIFF
It is worth noting that the 8.3 short name of the hidden file was HUB001.DAT[1]. This works because VFAT allows the specification of both a short name (8.3) and a long name (LFN) for each file/directory[2].
You can find 8.3 '.' entry names by searching a partition for \x2e\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
A file with an LFN of '.' could be found by searching for the name and attribute bytes at offsets 1-11 of its LFN entry: \x2e\x00\x00\x00\xff\xff\xff\xff\xff\xff\x0f (the sequence-number byte at offset 0 varies; it is 0x41 for a single-entry name).
It appears as if 8.3 file names starting with '.' are treated specially, but LFNs starting with '.' carry no significant meaning.
I struggled to find references to other malware that has used a similar approach. Does anyone have more information?
Surely Windows does not attempt to automatically execute files with an LFN (UTF-16 name) of '.'?
[1] http://labs.bitdefender.com/2012/06/flame-the-story-of-leake...
[2] https://en.wikipedia.org/wiki/File_Allocation_Table#Director...
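Those patterns are easy to sanity-check mechanically. Here is a minimal sketch that scans a raw partition image for both entry types; the image filename is hypothetical, and I am assuming directory entries start on 32-byte boundaries (true for FAT12/16/32). Note that every subdirectory legitimately contains an 8.3 '.' entry with the directory attribute set, so the attribute byte separates those from a '.' file:

    # Scan a raw FAT image for directory entries named '.' (8.3 and LFN forms).
    import re

    SHORT_DOT = b"\x2e" + b"\x20" * 10   # 8.3 name field: '.' plus ten spaces
    # LFN entry bytes 1-11: '.' in UTF-16LE, a 0x0000 terminator, 0xFFFF
    # padding, then the 0x0F attribute byte that marks long-name entries.
    LFN_DOT = b"\x2e\x00\x00\x00\xff\xff\xff\xff\xff\xff\x0f"

    with open("partition.img", "rb") as f:   # hypothetical raw image
        data = f.read()

    # 8.3 hits: a '.' *file* is suspicious; a '.' *directory* entry is normal.
    for m in re.finditer(re.escape(SHORT_DOT), data):
        if m.start() % 32 == 0:
            attr = data[m.start() + 11]
            flag = "normal subdir entry" if attr & 0x10 else "SUSPICIOUS file"
            print(f"8.3 '.' entry at {m.start():#x} (attr {attr:#04x}: {flag})")

    # LFN hits: the pattern starts one byte into the 32-byte entry.
    for m in re.finditer(re.escape(LFN_DOT), data):
        if (m.start() - 1) % 32 == 0:
            print(f"LFN '.' entry at {m.start() - 1:#x}")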
> I have never understood why so many folks in the netsec industry are arrogantly pessimistic about the innovation of others. I found Flame jaw-droppingly amazing.
People are unsure as to why it has such a large file size (do we know why yet?). One very common explanation is that it is bloated because of poor software engineering; some of the people who believe this explanation then try to fit the facts to that narrative.
Also consider the culture of the demo scene and of exploit writers: the smaller the code, the better the programmer.
Personally I like to think that the Flame authors intentionally exploited this prejudice and made it large so that: 1. it wouldn't look like malware, 2. if it was discovered no one would take it seriously and look deeper, 3. reverse engineering it would be complicated by its large file size (cost > benefit from an AV perspective).
The same question was asked of Stuxnet; the answer is probably boring: state-sponsored malware authors are not like demo scene writers and do not care if their code is particularly elegant. They probably care more that it's J2EE-style maintainable.
> As for the main point of Mikko's post, I have never understood why so many folks in the netsec industry are arrogantly pessimistic about the innovation of others. I found Flame jaw-droppingly amazing.
Security folks often lack the development experience, specifically with products that ship, needed to appreciate the big picture. This is why certain people on HN were so fixated on the lack of code obfuscation that they failed to credit the massive QA effort behind making all of Stuxnet work on such a complex target.
I say this as a security person who has previously done dev on product teams.
I think tptacek has hummed a few bars in this direction before, but it has become received wisdom on some parts of the Internet that geeks vs. government is an asymmetric fight and that, since governments are stupid, geeks will win. You often see this in, let me cherry pick out of charitability, threads suggesting that the OSS community develop surveillance countermeasures for use by dissidents subject to certifiably evil regimes.
It doesn't really matter whether the nation state in question is Iran or the United States. Do not pick fights with people who can respond to a hacking incident by writing a check for $5 million to a defense contractor and consider that low-intensity conflict resolution. It will not end well.
I agree with this, and I was similarly amazed when some folks calling themselves Anonymous started dumping personal information about law enforcement officers onto the web. You don't pick fights with people who are going to hunt you down and eliminate you. It may be they will just lock you up; it may be worse than that.
That said, there is a history of people who have done that, paid the ultimate price, and later been honored for their sacrifice. Seems like history can go either way in judging the act: hero or idiot.
I'm all in favor of people keeping their eyes open though as they walk into it.
For me, things like Flame just tell me that what I knew as an engineer could be done actually has been done. And that is always a bit of a wake-up call.
The idea that governments are stupid in the sphere of technology ignores the recent-ish discovery by governments that if you just pay a bunch of geeks to be geeks on behalf of the (evil or otherwise) government and make them exempt from bureaucracy, they get results that are at least as good as geeks acting independently.
> You often see this in, let me cherry pick out of charitability, threads suggesting that the OSS community develop surveillance countermeasures for use by dissidents subject to certifiably evil regimes.
> It doesn't really matter whether the nation state in question is Iran or the United States. Do not pick fights with people who can respond to a hacking incident by writing a check for $5 million to a defense contractor and consider that low-intensity conflict resolution. It will not end well.
Are you really saying that people should avoid writing software that could help people who are subject to evil regimes because said evil regime might be upset at them? There's an uncertain level of personal risk associated with doing such things, but there's definite moral hazard in total self-interest.
Either way, if Flame was written by the US or Israel a lot of us on here are already complicit in such a project. We live in a democracy. Those are our tax dollars, hard at work.
I totally agree with you otherwise; governments are not stupid.
There's no personal risk to writing regime circumvention tools. Iran isn't going to have you assassinated for your work on Tor.
There is serious risk to using Tor in Iran. Death squads and disappearances aren't a conspiracy theory in Iran; they are the regime's well-understood M.O. When circumvention tools like Tor work, they hide your traffic from the regime. When they stop working, or are turned, they do exactly the opposite: they attach a statistical marker to your traffic that says "whether or not you can read these packets, the person sending them is interesting".
The people working on circumvention tools are mostly well-intentioned (many of them are friends of mine), but they are delusional about the SWOT analysis at play here. None of them have any unique skills that aren't available to an organization willing to shell out 6-7 figures to a team in a month. Money buys competence. A lot of money buys a lot of competence. Iran has a lot of money. Circumvention projects do not.
Kickstarter hasn't seen the amount of money that a world government could spend without director-level approval on a project to turn a circumvention tool against its users.
And that's before you get to the fact that many, if not most, of the computers in authoritarian regimes are probably already rootkitted.
> Are you really saying that people should avoid writing software that could help people who are subject to evil regimes because said evil regime might be upset at them?
No, I'm saying that "my software helps people who are subject to evil regimes" is approximately as irresponsible as "my homeopathic remedy solves cancer" except in this case cancer has essentially infinite computational resources, arbitrarily high numbers of very savvy domain experts, and an army. Any hacker who believes their software, or their community's software, will hold up to dedicated adversarial interest from a nation-state is dangerously delusional.
I think a bigger asymmetry than "hackers vs. governments" is "defense vs. offense" -- the state of computer security is laughable enough that the attacker will probably win, whoever he is.
If non-government hackers were building offensive tools, vs. defensive, and only had to win periodically vs. essentially all the time, they'd be able to put up a better fight. Government doesn't have a particular monopoly on competence, and internal politics and budget issues probably would allow a relatively capital-poor non-governmental enterprise to do pretty well vs. a contractor/government team.
Similarly, I'll grant that the fight may appear asymmetrical at the higher level of organizations if one side is run by geeks and the other side is run by non-geeks. Nonetheless one should recognize that those hired by either side to be on the "front lines" so to speak will be geeks. Whether they're computer geeks or gun|bomb|espionage|etc geeks, assuming an asymmetry in the ability to employ strategy and accomplish goals on that level would be unwise.
Right. It always comes back to the people performing the task. Because people are generally "good" (IMHO), "evil" isn't particularly easy to get away with. This goes for geeks, police and even the military. Commands can be issued, but real people with real emotions have to deliver. This is why whistle-blower protection is so key to our economy and society.
And while politics may attract a disproportionate level of narcissists and sociopaths, I'm guessing CS doesn't.
Neither Stuxnet nor Flame targets hackers or even the general population; they target specific institutions. Attacking is easy once you know who or what to attack.
Each individual hacker and each individual citizen is a much smaller target. Sure, as soon as you're identified, you're toast: they break in and install malware on your computer -- if you're lucky. But there are a lot of hackers and even more normal people, all of whom can be made individually harder to identify through smart software.
Lots of people. Relatively few pieces of software. Pitted against an avalanche of money and access to the best talent in the world. The incentive structure doesn't work, at all.
Don't build circumvention tools. If you're lucky, they'll just turn out to be useless.
> FTFY: "Do not pick fights with people who can fly black choppers"
So when you mention black choppers, people will assume you are either being ironic or crazy. There's a particular policy goal affiliated with these attacks, and a spectrum of options for achieving that policy goal. Those options included "There exist certain individually identifiable employees of a foreign government who are personally indispensable to implementing something which goes against our policy goals. We could assassinate them."
If you read the papers you know that that option is neither a joke nor the fevered imaginings of a paranoid conspiracy theorist.
The thing I find most interesting about Flame: whoever developed it surely understood that by being released into the wild like this, their new cryptographic attack was guaranteed to eventually be discovered and analyzed. And yet they spent that attack's secrecy on a (very sophisticated, but still) fishing expedition.
So what cryptanalytical capabilities do they have which are considered too sensitive to expose via malware?
Bear in mind that attacks on MD5 have an inherently limited shelf life, and that while the exploit used in Flame may be new, the underlying vulnerability and the fundamental technique used to exploit it are very well known.
Think about it this way: Flame was designed not to spread automatically, only when it was told to. That makes a targeted attack like this difficult to discover, since it affects a relatively small number of computers compared to a virus designed to propagate at every opportunity, and since the fishing expedition was limited to the people its operators were interested in.
Combine this with the fact that we're now dating the creation of the virus to summer 2008 at the latest [1], and you've got a sophisticated surveillance mechanism that has been installed on thousands of computers and evaded detection for at least four years.
I'm sure there are plenty more tricks that advanced virus authors like this have up their sleeves, but they're only useful to someone if they actually get used, and this seems to have paid off for whoever was behind this.
[1] http://www.nation.com.pk/pakistan-news-newspaper-daily-engli...
Now I'm wondering if the submitter wrote a good title and some mod came along and bowdlerized it?
It's a terrible title. Fine for a media site trying to sell stories based on sensationalism but I thought we were building a brave new online community here.
Access to signing keys is very relevant, and I think there is a very real chance (p>0.2) that the huge oversight MS made with the terminal server keys happened because they were ordered to do it.
That's an awfully baroque government backdoor --- a misconfigured X.509 attribute on a certificate that turns out to be signed with a hash for which controlled collisions turn out to be feasible.
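This is also something you can audit for on your own chains. Here is a minimal sketch using the pyca/cryptography library (assuming a reasonably recent version -- older releases want a backend argument -- and a hypothetical cert.pem) that flags certificates signed with a collision-prone hash, the property the Flame collision exploited in the MD5 case:

    # Flag certificates whose signature uses a hash with known collision attacks.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes

    with open("cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    algo = cert.signature_hash_algorithm  # None for schemes like Ed25519
    if algo is None:
        print("signature scheme has no standalone hash")
    elif isinstance(algo, (hashes.MD5, hashes.SHA1)):
        print(f"weak signature hash: {algo.name} -- collisions are practical")
    else:
        print(f"signature hash: {algo.name}")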
The source code to Windows doesn't matter. In 2012, even teenagers find vulnerabilities in Windows by reading the assembly code in IDA Pro. It's one of the most comprehensively reverse-engineered pieces of code on the planet.
mtgx theorized it would be easier with access to the Windows source, and it would, and you know it would, and you're purposely being obtuse, arguing for the sake of arguing.
There is a reason we have programming languages, and we don't all write directly in machine code. Just because it's technically possible to do things in a more difficult way doesn't mean they would be done that way with a faster, easier option readily available.
Stop being silly. :P
I don't think so. I won't go on record with what I know, but I do know that Microsoft has consented to allow governments to audit, review, and sometimes even modify their source code.
I suspect a polite request (perhaps backed by a threat including the L-word) will get them far further than a virus.
Edit: Others have pointed to public documentation of this program. I believe the two cases I was aware of at the time were the governments of China and Germany.
Enterprises w/ 10k+ seats, OEMs, MVPs and governments can get access to Windows source these days. Microsoft launched the program in 2006 or so to dampen the "Linux is more secure because we can see the source!!" FUD.
Government Security Program: Addressing the unique security requirements of governments worldwide by helping government actively participate in ensuring the security of their critical systems. We help enhance system security by providing access to Microsoft Windows and Office source code, prescriptive and authoritative security guidance, technical training, security information, and Microsoft security experts.
The Common Criteria Evaluation and Validation Scheme (CCEVS) is a form of software accreditation that focuses on software security.
You can view the results for the Windows 7 accreditation at [1]. The website also has comprehensive documentation on the methodology used to accredit the software (including visibility of the source code).
[1] http://www.niap-ccevs.org/st/vid10390/
US Govt: we want source code access
MS: That is proprietary information
US Govt: give us access or we won't allow MS products to be used by govt. departments
MS: how quickly do you want that access?
To paraphrase Edison, anything worthwhile is 99% perspiration and 1% inspiration. The novel MD5 collision / Windows Update propagation is Flame's 1%. The rest is just what's made possible as a result.
It is a cogent reminder of the fragility of the Internet's security infrastructure.
> 9. Latest research proves that Flame is indeed linked to Stuxnet....
What's the chance that this "Resource 207" is some third-party module that more than one developer had access to?
I concede that placing the same resource in the same resource location in two different, unrelated applications is a bit of a long shot, but I don't see it as a smoking gun either.
> "Agent.BTZ did something like this already in 2008. Flame is lame."
Flame's approach is different and more impressive. Agent.BTZ copied itself and used an easy-to-discover autorun.inf file in the root directory of attached disks or network shares. Flame exports its database by encrypting it and then writing it to the USB disk as a file called '.' (just a period, meaning 'current directory')
When you run a directory listing you can't see it. You can't open it. The windows API doesn't allow you to create a file with that name and Flame accomplishes this by opening the disk as a raw device and directly writing to the FAT partition. Impressive, right.
While a lot of these individual features alone are not impressive the sum of the parts, combined with the collision attack on the certificate signature are very impressive.
As for the main point of Mikko's post, I have never understood why so many folks in the netsec industry are arrogantly pessimistic about the innovation of others. I found Flame jaw-droppingly amazing.
Nobody knew about it for years, yet it was derided when discovered and documented.
Infosec is an inherently pessimistic enterprise, although spending time here makes me think it's not a perspective limited to security.
Just look at how almost every post here ends up littered with comments like "This isn't new. My XYZ already does all of this." People like to feel superior (it helps reinforce the individual nerd exceptionalism)
This is exactly the attitude used by some negative minded mediocre people to demotivate free thinkers. To be fair, to most of them it probably also doesn't seem new in reality, because their grey cells lack the sophistication required to understand the difference.
BIFF[4] is still remembered by many within the early niche group of hackers. It is likely that similar psychology has been driving the ridicule at hacker conferences in recent years towards mainstream reporting on “cyber” topics and use of buzz phases such as “Advanced Persistent Threat”.
[1] https://en.wikipedia.org/wiki/Cypherpunk#Noteworthy_cypherpu...
[2] http://www.catb.org/~esr/faqs/hacker-howto.html
[3] https://www.schneier.com/blog/archives/2006/09/what_is_a_hac...
[4] https://en.wikipedia.org/wiki/BIFF
You can find 8.3 '.' entry names by searching a partition for \x2e\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
A file with an LFN of '.' could be found with (hopefully this is correct) \x00\x2e\x00\x00\x00\x00\xff\xff\xff\xff\x0f
It appears as if 8.3 file names starting with '.' are treated specially but LFNs starting with '.' carry no significant meaning.
I struggled to find references to other malware that has used a similar approach. Does anyone have more information?
Surely Windows does not attempt to automatically execute files with a LFN (UTF-16 name) of '.'?
[1] http://labs.bitdefender.com/2012/06/flame-the-story-of-leake...
[2] https://en.wikipedia.org/wiki/File_Allocation_Table#Director...
People are unsure as to why it has such a large file size (do we know why yet?). One very common explanation is that it is bloated because of poor software engineering, some of the people that believe this explanation attempt to fit the facts to that narrative.
Also Consider the culture of the demo scene/exploit writers. The smaller code the better the programmer.
Personally I like to think that the flame authors intentionally exploited this prejudice and made it large so that: 1. it wouldn't look like malware, 2. if it was discovered no one would take it seriously and look deeper, 3. reverse engineering it would be complicated by it's large file size (cost > benefit from an AV perspective).
Security folks often lack development experience, specifically in products that ship, to appreciate the big picture. This is why certain people on HN were so fixated on a lack of code obfuscation to give credit to the massive QA effort behind making all of stuxnet work on such a complex target.
I say this as a security person who has previously done dev on product teams.
It doesn't really matter whether the nation state in question is Iran or the United States. Do not pick fights with people who can respond to a hacking incident by writing a check for $5 million dollars to a defense contractor and consider that low-intensity conflict resolution. It will not end well.
That said, there is a history of people who have done that, paid the ultimate price, and later been honored for their sacrifice. Seems like history can go either way sometimes in judging the act, hero or idiot.
I'm all in favor of people keeping their eyes open though as they walk into it.
For me things like Flame just tell me that what I knew as an engineer could be done, actually have been done. And that is always a bit of a wake up call.
> It doesn't really matter whether the nation state in question is Iran or the United States. Do not pick fights with people who can respond to a hacking incident by writing a check for $5 million dollars to a defense contractor and consider that low-intensity conflict resolution. It will not end well.
Are you really saying that people should avoid writing software that could help people who are subject to evil regimes because said evil regime might be upset at them? There's an uncertain level of personal risk associated with doing such things, but there's definite moral hazard in total self-interest.
Either way, if Flame was written by the US or Israel a lot of us on here are already complicit in such a project. We live in a democracy. Those are our tax dollars, hard at work.
I totally agree with you otherwise; governments are not stupid.
There's no personal risk to writing regime circumvention tools. Iran isn't going to have you assassinated for your work on Tor.
There is serious risk to using Tor in Iran. Death squads and disappearances aren't a conspiracy theory in Iran; they are the regime's well-understood M.O. When circumvention tools like Tor work, they hide your traffic from the regime. When they stop working, or are turned, they do exactly the opposite: they attach a statistical marker to your traffic that says "whether or not you can read these packets, the person sending them is interesting".
The people working on circumvention tools are mostly well-intentioned (many of them are friends of mine), but they are delusional about the SWOT analysis at play here. None of them have any unique skills that aren't available to an organization willing to shell out 6-7 figures to a team in a month. Money buys competence. A lot of money buys a lot of competence. Iran has a lot of money. Circumvention projects do not.
Kickstarter hasn't seen the amount of money that a world government could spend without director-level approval on a project to turn a circumvention tool against its users.
And that's before you get to the fact that many, if not most, of the computers in authoritarian regimes are probably already rootkitted.
No, I'm saying that "my software helps people who are subject to evil regimes" is approximately as irresponsible as "my homeopathic remedy solves cancer" except in this case cancer has essentially infinite computational resources, arbitrarily high numbers of very savvy domain experts, and an army. Any hacker who believes their software, or their community's software, will hold up to dedicated adversarial interest from a nation-state is dangerously delusional.
If non-government hackers were building offensive tools, vs. defensive, and only had to win periodically vs. essentially all the time, they'd be able to put up a better fight. Government doesn't have a particular monopoly on competence, and internal politics and budget issues probably would allow a relatively capital-poor non-governmental enterprise to do pretty well vs. a contractor/government team.
And while politics may attract a disproportionate level of narcissists and sociopaths, I'm guessing CS doesn't.
Each individual hacker and each individual citizen is a much smaller target. Sure, as soon as you're identified, you're toast: they break in and install malware on your computer -- if you're lucky. But there's a lot of hackers and even more normal people, all of which can be made individually harder to identify through smart software.
Don't build circumvention tools. If you're lucky, they'll just turn out to be useless.
FTFY: "Do not pick fights with people who can fly black choppers"
If you read the papers you know that that option is neither a joke nor the fevered imaginings of a paranoid conspiracy theorist.
So what cryptanalytical capabilities do they have which are considered too sensitive to expose via malware?
Combine this with the fact that we're now dating the creation of the virus to at latest summer 2008 [1], and you've got a sophisticated surveillance mechanism that has been installed on thousands of computers and evaded detection for at least 5 years.
I'm sure there's lots more tricks that advanced virus authors like this have up their sleeve, but they're only useful to someone if they actually get used, and this seems to have paid off for whoever was behind this.
[1] http://www.nation.com.pk/pakistan-news-newspaper-daily-engli...
It's a terrible title. Fine for a media site trying to sell stories based on sensationalism but I thought we were building a brave new online community here.
Access to signing keys is very relevant, and I think there is a very real chance (p>0.2) that the huge oversight MS did with the terminal server keys happened because they were ordered to do it.
There is a reason we have programming languages, and we don't all write directly in machine code. Just because it's techincally possible to do things in a more difficult way doesn't mean they would be done that way with a faster, easier option readily available.
Stop being silly. :P
I suspect a polite request (perhaps backed by a threat including the L-word) will get them far further than a virus.
Edit: Others have pointed to public documentation of this program. I believe the two cases I was aware of at the time were the governments of China and Germany.
Enterprises w/ 10k+ seats, OEMs, MVPs and governments can get access to Windows source these days. Microsoft launched the program in 2006 or so to dampen the "Linux is more secure because we can see the source!!" FUD.
Government Security Program: Addressing the unique security requirements of governments worldwide by helping government actively participate in ensuring the security of their critical systems. We help enhance system security by providing access to Microsoft Windows and Office source code, prescriptive and authoritative security guidance, technical training, security information, and Microsoft security experts.
You can view the results for the Windows 7 accreditation at [1]. The website also has comprehensive documentation on the methodology used to accredit the software (including visibility of the source code).
[1] http://www.niap-ccevs.org/st/vid10390/
US Govt: we want source code access
MS: That is propiatory information
US Govt: give us access or we wont allow MS products to be used by govt. departments
MS: how quick do you want that access?
It is a cogent reminder of the fragility of the Internet's security infrastructure.
>9. Latest research proves that Flame is indeed linked to Stuxnet....
Whats the chance that this "Resource 207" is some 3rd party module that more than 1 developer had access to? I concede that placing the same resource in the same resource location in 2 different unrelated applications is a bit of a long shot, but I dont see it as a smoking gun either.
Deleted Comment