If the creators read this, I suggest some ways of building trust. There’s no “about us”, no GitHub link, etc. It’s a random webpage that wants my personal details and sends me an “exe”. The overlap of people who understand what this tool does and people who would run that “exe” is pretty small.
Author of Cyber Scarecrow here. Thank you for your feedback, and you are 100% right. We also don't have a code signing certificate yet either; they are expensive for Windows. SmartScreen also triggers when you install it. I'd be wary of installing it myself as well, especially considering it runs as admin to be able to create the fake indicators.
I have just added a bit of info about us on the website. I'm not sure what else we can do really. It's a trust thing, same as with any software and AV vendors.
Not much software promises to fend off attackers, asks for an email address before download, and creates a bunch of processes using a closed-source DLL whose existence can easily be checked.
Then again, not much malware that targets consumers at random checks for security software. You are more likely to see malware stop working if you fake the amount of RAM and CPU and your network driver vendor than if you have CrowdStrike, etc. running.
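To make the comment above concrete, here is a minimal Python sketch of that style of environment check: malware bailing out when the hardware profile looks like a throwaway analysis VM rather than a real consumer PC. The thresholds are made-up illustrations, not taken from any real sample.

```python
import os

# Illustrative thresholds (assumptions): consumer machines rarely have
# fewer than 2 cores or less than 4 GB of RAM, while quickly spun-up
# analysis VMs often do.
MIN_CORES = 2
MIN_RAM_GB = 4.0

def looks_like_analysis_vm(cpu_count: int, ram_gb: float) -> bool:
    """True if the hardware profile resembles a sandbox rather than a real PC."""
    return cpu_count < MIN_CORES or ram_gb < MIN_RAM_GB

# On a live machine you would feed in real values, e.g.:
#   looks_like_analysis_vm(os.cpu_count(), total_ram_bytes / 2**30)
```

Faking these values system-wide, as the comment suggests, is much harder than spawning decoy processes, which is presumably why the tool does not attempt it.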
Concerning code signing: Azure has a somewhat new offering that allows you to sign code for Windows (SmartScreen compatible) without having an EV cert. It is called "Trusted Signing" [1], non-marketing docs [2]. The major gotcha is that currently you need to have a company or similar entity 3 years or older to get public trust. I tried it with a company younger than 3 years and was denied. You might have a company that fits that criteria or you might get lucky.
The major upside is the pricing: currently "free" [3] during testing, later about 10 USD/month. As there doesn't seem to be a revocation mechanism based on some docs I read, signed binaries might be valid even after a canceled subscription.
[3] You need a CC and they will likely charge you at some point. Also I had to use some kind of business Azure/MS 365 account, which costs about 5 USD/month. Not sure about the exact lingo; not an Azure/MS expert. The docs in [2] were enough for me to get through the process.
One more thing you could do is put the real name of any human being with any track record of professionalism, anywhere on the website. Currently you're:
- commenting under a pseudonymous profile
- asking for emails by saying "please email me. contact at cyberscarecrow.com"
- describing yourself in your FAQ entry for "Who are you?" by writing "We are cyber security researchers, living in the UK. We built cyber scarecrow to run on our own computers and decided to share it for others to use it too."
I frequently use pseudonymous profiles for various things but they are NOT a good way to establish trust.
It's a neat concept, although I imagine this'll be a cat and mouse endeavor that escalates very quickly. So, a suggestion - apply to the Open Technology Fund's Rapid Response Fund. I'd probably request the following in your position:
* code signing certificate funding
* consulting/assessment to harden the application or concept itself as well as to make it more robust (they'll probably route through Cure53)
* consulting/engineering to solve for the "malware detects this executable and decides that the other indicators can be ignored" problem, or consulting more generally on how to do this in a way that's more resilient.
If you wanted to fund this in some way without necessarily doing the typical founder slog, might make sense to 501c3 in the US and then get funded by or license this to security tooling manufacturers so that it can be embedded into security tools, or to research the model with funding from across the security industry so that the allergic reaction by malware groups to security tooling can be exploited more systemically.
I imagine the final state of this effort might be that security companies could be willing to license decoy versions of their toolkits to everyone that are bitwise identical to actual running versions but then activate production functionality with the right key.
Obviously this should be an open source tool that people can build for themselves. If you want to sell premium services or upgrades for it later, you need to have an open/free tier as well.
Also are you aware of the (very awesome) EDR evasion toolkit called scarecrow? Naming stuff is hard, I get that, but this collision is a bit much IMO.
> We also don't have a code signing certificate yet either; they are expensive for Windows.
When someone is offering you a certificate and the only thing you have to do in order to get it is pay them a significant amount of money, that's a major red flag that it's either a scam or you're being extorted. Or both. In any case you should not pay them and neither should anyone else.
Where is that additional info? It just says you're a group of security researchers, but there are no names, no verifiable credentials, nothing. You haven't really added any info that would contribute to any real trust.
Something that would have built trust with me that I didn't find on the site was any mention of success rate. Surely CyberScarecrow has been tested against known malware to see if the process successfully thwarts an attack.
I'd suggest putting down the actual authors. If you're UK based, there should really be no issue in listing each of the people involved and their background in the industry. Otherwise this just looks like a v1 to get people interested, and v2 could include malware. Tbh it'd be quite a clever ploy if it is malware. Trust isn't built blindly; most smaller software creators have their details known. If you want it to pick up traction, I'd suggest a full "about us" page.
You're collecting personal info and claiming to be in the UK: identifying the data controller would be a start, both for building trust and complying with GDPR.
How are you planning on preventing bad actors from identifying scarecrow itself? Are you going to randomize the names/processes etc., like anti-malware software does to install in stealth mode?
It is a cat and mouse game, and a security-by-obscurity practice. Not saying it won't work, but if it is open sourced, how long before the malware catches on?
I'd be willing to bet good money that 99% of malware authors won't adapt, since 99% (more like 99.999%) of the billions of worldwide windows users will not have this installed.
For the cat to care about the mouse it needs to at least be a good appetizer.
Author of scarecrow here. Our thinking is that if malware starts to adapt and check whether scarecrow is installed, we are doing something right. We can then look to update the app to make it more difficult to spot - but it's then a cat and mouse game.
It's not a cat and mouse game; it's a diver and shark game. In SCUBA training we joked that you had the "buddy system" where you always dive in pairs, because that way if you encounter a shark you don't have to outswim the shark, you only have to outswim your buddy.
A low-effort activity that makes you not be the low-hanging fruit can often be worth it. For example, back in the '90s I moved my SSH port from 22 to ... not telling you! It's pretty easy to scan for SSH servers on alternate ports, but basically none of the worms do that.
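For anyone curious, the low-effort change described above is a single line in the SSH daemon's config (the port number here is an arbitrary example, deliberately not the author's):

```
# /etc/ssh/sshd_config
Port 50022    # any unused high port; 22 is what the worms scan
```

Targeted scanners will still find it, but the point stands: the drive-by worms mostly won't bother.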
Some malware will catch on, some will not. It's a cost vs profit problem. Statistically, this will always decrease the number of possible malware samples that can be installed on the machine, but by what margin? Impossible to say.
A lot of security stuff is a bit ironic like that. "Give this antivirus software super-root access to your machine".. it depends on that software being trustworthy.
That's a problem with a lot of software and developers these days. An "About Me" section with a real face and presence is important and I don't mean anime characters and aliases either. Tell me who you are, put yourself out there.
I don't understand why the software is built how it's built. Why would you want to implement licensing in the future for a software product that only creates fake processes and registry keys from a list? https://pastebin.com/JVZy4U5i
The limitation to 3 processes and the license dialog make me feel uncomfortable using the software. All the processes are 14.1 MB in size (and basically the scarecrow_process.dll - https://www.virustotal.com/gui/file/83ea1c039f031aa2b05a082c...). I just don't understand why you would create such a complex piece of software when a PowerShell script could do exactly the same using fewer resources.

The science behind it only kinda makes sense. There is some malware that uses techniques to check whether those processes are running, but by no means is this a good way to keep yourself protected. Most common malware like credential stealers (RedLine, Vidar, blah blah) don't care about that, and they are by far the most common type of malware deployed. Even ransomware like LockBit doesn't care, even if a debugger is attached.

I think this mostly creates a false sense of security, and if you plan to grow a business out of this, it would probably only take hours until there was an open source option available. Don't get me wrong - I like the idea of creating new ways of defending against malware; what I don't like is the way you try to "sell" it.
They know that if this idea catches on, a dozen completely free imitations will crop up, so ... the time to grab whatever cash can be squeezed out of this is now.
Are you telling me this thing spawned 50 new processes on your computer? Could you zip up all the executable files and whatever it installed and upload it somewhere so we can analyze the assembly?
To your point, I made this a few years ago using powershell. I just created a stub .exe using csc on install and renamed it to match a similar list of binary names. Maybe I will dig it up...
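The approach described above (do-nothing stubs renamed to look like analysis tools) can be sketched in a few lines of Python. The decoy names here are illustrative guesses, not the commenter's actual list or the tool's.

```python
import shutil
import subprocess
from pathlib import Path

# Illustrative decoy names (assumption); the real lists differ.
DECOY_NAMES = ["vmtoolsd.exe", "procmon64.exe", "wireshark.exe"]

def make_decoys(stub: Path, outdir: Path, names=tuple(DECOY_NAMES)) -> list[Path]:
    """Copy one do-nothing stub binary under each scary-looking name."""
    copies = []
    for name in names:
        target = outdir / name
        shutil.copy(stub, target)  # identical bytes, different name
        copies.append(target)
    return copies

def launch(copies: list[Path]) -> list[subprocess.Popen]:
    """Start each renamed stub so it shows up in the process list
    (assumes the stub is a real executable that just sleeps)."""
    return [subprocess.Popen([str(c)]) for c in copies]
```

The copies being byte-identical is exactly the weakness discussed elsewhere in the thread: anything that hashes the binaries, or notices they consume no resources, can tell them apart from the real tools.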
But this literally comes off as probably being malware itself.
If you're going to ship something like this, it needs to be open source, preferably with a GitHub pipeline so I can see the full build process.
You also run into the elephant repellent problem. The best defense to malware will always be regular backups and a willingness to wipe your computer if things go wrong.
Better known as the Elephant Repellant Fallacy — a claim that a preventative is working when, in fact, the thing it prevents rarely or never happens anyway.
"Hey you better buy my elephant repellant so you don't get attacked!"
'Okay.'
...
"So were you attacked?"
'No, I live in San Francisco and there are no wild elephants.'
"Well, I guess the repellant is working!"
I would assume there would be a small intersection of people that would download and install a windows program from an unknown web page and those that are worried about malware.
Author of Cyber Scarecrow here. You are right, it's a trust thing. Completely understand if people wouldn't want to install it, and that's fine. It's the same for any software really. We just haven't built up the confidence or trust that a big established company has.
Lol, this website is registered to someone in Iceland, despite the assurance that it is a "security researcher living in the UK". I'm sure the results from this experiment will make a cool blog post about pwning tech savvy folks.
Hmm my Namecheap domains keep the location details even with WHOIS privacy enabled. To be fair they are 7+ years old so maybe something has changed in that time?
I guess if this gets enough attention, malware will just add more sophisticated checks and not just look at the exe name.
But on that note, I wondered the same thing at my last workplace where we'd only run windows in virtual machines. Sometimes these were quite outdated regarding system and browser updates, and some non-tech staff used them to browse random websites. They were never hit by any crypto malware and whatnot, which surprised me a lot at first, but at some point I realized the first thing you do as even a halfway decent malware author is checking whether you run in a virtualized environment.
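One of the cheapest "am I in a VM" checks of the kind described above is the NIC's MAC address prefix: the first three octets (the OUI) give the hypervisor vendor away. The prefixes below are the well-known VirtualBox/VMware/Hyper-V assignments, though real malware checks many more signals (registry keys, device names, BIOS strings).

```python
# Well-known hypervisor OUI prefixes (first three octets of the MAC).
VM_MAC_PREFIXES = {
    "08:00:27",                           # VirtualBox
    "00:05:69", "00:0c:29", "00:50:56",   # VMware
    "00:15:5d",                           # Hyper-V
}

def mac_suggests_vm(mac: str) -> bool:
    """True if the MAC's vendor prefix belongs to a known hypervisor."""
    return mac.lower().replace("-", ":")[:8] in VM_MAC_PREFIXES
```

This is also why "look like a VM" is a plausible defense at all: the signals are cheap for the defender to fake and cheap for the malware to check.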
> I guess if this gets enough attention, malware will just add more sophisticated checks and not just look at the exe name.
But more sophisticated detection means bigger payload (making the malware easier to detect) and more complexity (making the malware harder to make / maintain), so mission accomplished.
“Sophisticated” detection can be as simple as checking rss and pcpu, the bullshit decoy processes probably aren’t wasting a lot of CPU and RAM, otherwise might as well run the real things; if they are, well, just avoid, who cares. So no, it’s not going to meaningfully complicate anything.
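The countercheck described above really does fit in a few lines. The names and thresholds here are made-up illustrations, not from any real sample; the point is only how little "sophistication" it takes.

```python
# Hypothetical list of names a decoy tool might use.
SCARY_NAMES = {"vmtoolsd.exe", "procmon64.exe", "wireshark.exe"}

def looks_like_decoy(name: str, rss_mb: float, cpu_percent: float) -> bool:
    """A 'security tool' sitting at near-zero memory and CPU is more
    likely a scarecrow than the real thing. Thresholds are arbitrary."""
    return name.lower() in SCARY_NAMES and rss_mb < 20 and cpu_percent < 0.5
```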
That's where I wonder about a tool like this interfering with legitimate software.
For example, I believe the anti-cheat software used by games like Fortnite looks for similar things -- my understanding is that it, too, will refuse to start when it is executing in a VM[0]. As a teenager (90s), I remember several applications/games refusing to start when I'd attached a tracing process to them. They did this to stop exactly what I was doing: trying to figure out how to defeat the software licensing code. I haven't had a need to do that since the turn of the century but I'd put $10 on that still being a thing.
So you end up with a "false positive", and like anti-virus software, it results in "denial of service." But does anti-virus's solution of "white list it" apply here? At least with their specific implementation, it's "on or off", but I wonder if it's even possible to alter the application in a way that could "white list a process so it doesn't see the 'malware defeat tricks' this exposes." If not, you'd just have to "turn off protection" when you were using that program. That might not be practical depending on the program. It's also not likely the vendor of that program will care that "an application which pretends it's doing things we don't like" breaks their application unless it represents a lot of their install base.
[0] I looked into it a few years ago b/c I run Tumbleweed and it's a game the kids enjoy (I'm not a huge fan but my gaming days have been behind me for a while, now) ... I had hoped to be able to expose my GPU to the VM enough to be able to play it but didn't bother trying after reading others' experiences.
Why does malware “stop” if it sees AV? Sounds as if it wanted to live, which is absurd. A shady concept overall, because if you occasionally run malware on your PC, it’s already over.
Downloading a random exe from a noname site/author to scare malware sounds like another crazy security recipe from your layman tech friend who installs registry cleaners and toggles random settings for “speed up”.
Take malware that is part of a botnet. Its initial payload is not necessarily damaging to the host, but is awaiting instructions to e.g. DDOS some future victim.
The authors will want the malware to spread as far and wide as it can on e.g. a corporate network. So they need to make a risk assessment; if the malware stays on the current computer, is the risk of detection (over time, as the AV software gets updates) higher than the opportunity to use this host for nefarious purposes later?
The list[1] of processes simulated by cyber scarecrow are mostly related to being in a virtual machine though. Utilities like procmon/regmon might indicate the system is being used by a techie. I guess the malware author's assumption is that these machines will be better managed and monitored than the desktop/laptop systems used by office workers.
Many pieces of malware are encrypted and obfuscated to prevent analysis. Often, they'll detect virtual machines to make it harder for people to analyse the malware. Plenty of malware hides the juicy bits in a second or third stage download that won't trigger if the dropper is loaded inside of a VM (or with a debugger attached, etc.).
Similarly, there have also been malware that will deactivate itself when it detects signs of the computer being Russian; Russia doesn't really care about Russian hackers attacking foreign countries (but they'll crack down on malware spreading within Russia, when detected) so for Russian malware authors (and malware authors pretending to be Russian) it's a good idea not to spread to Russian computers. This has the funny side effect of simply adding a Russian keyboard layout being enough to prevent infection from some specific strains of malware.
This is less common among the "download trustedsteam.exe to update your whatsapp today" malware and random attack scripts and more likely to happen in targeted attacks at specific targets.
This tactic probably won't do anything against the kind of malware that's in pirated games and drive-by downloads (which is probably what most infections are) as I don't think the VM evasion tactics are necessary for those. It may help protect against the kind of malware human rights activists and journalists can face, though. I don't know if I'd trust this particular piece of software to do it, but it'll work in theory. I'm sure malware authors will update their code to detect this software if this approach ever takes off.
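The keyboard-layout kill switch mentioned above reduces to a set check once the installed layouts are enumerated (on Windows, via `GetKeyboardLayoutList` through ctypes). The LANGIDs below are standard Windows values for Russian, Ukrainian, Belarusian, and Kazakh; exactly which locales a given strain exempts is an assumption.

```python
# Standard Windows language IDs: Russian, Ukrainian, Belarusian, Kazakh.
CIS_LANGIDS = {0x0419, 0x0422, 0x0423, 0x043F}

def should_bail_out(installed_layouts: set[int]) -> bool:
    """Mimic malware that refuses to run when a CIS keyboard layout is
    installed. A Windows HKL's low 16 bits are the language ID."""
    return any((hkl & 0xFFFF) in CIS_LANGIDS for hkl in installed_layouts)
```

This is the check behind the "add a Russian keyboard layout as a vaccine" trick: the layout shows up in the enumerated list even if you never type with it.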
> Why does malware “stop” if it sees AV? Sounds as if it wanted to live, which is absurd.
Malware authors add in this feature so that it’s harder for researchers to figure out how it works. They want to make reverse engineering their code more difficult.
If these were laypeople that would then give up, sure.
But I'm surprised that it's even worth malware authors' time to put in these checks. I can't imagine there's even a single case of where it stopped malware researchers in the end. What, so it takes the researchers a few hours or a couple of days longer? Why would malware authors even bother?
(What I can understand is malware that will spread through as many types of systems as possible, but only "activate" the bad behavior on a specific type of system. But that's totally different -- a whitelist related to its intended purpose, not a blacklist to avoid security researchers.)
It's not about the usual AV software, but about "fake" systems used to detect and analyse malware. AV vendors and malware researchers in general use such honeypots to find malware that hasn't been identified yet.
This software seems to fake some indicators that malware uses to detect whether it's running on a "real system" or a honeypot.
It's not really about "normal" antivirus programs, but tools used by security researchers. It's well known that more sophisticated malware often tries to avoid scrutiny by not running, or by masking its intended purpose, if the environment looks "suspicious".
A paranoid online game like e.g. Test Drive Unlimited might not launch because the OS says it's Windows Server 2008 (ask me how I know). A script in a Word document might not deliver its payload if there are no "recently opened documents".
The idea with this thing is to make the environment look suspicious by making it look like an environment where the malware is being deliberately executed in order to study its behaviour.
Even back in my script kiddy days, 10 years ago, I remember RATs and cryptors would all have a kill switch option if it detected it was running on a VM.
[1] https://azure.microsoft.com/en-us/products/trusted-signing
[2] https://learn.microsoft.com/en-us/azure/trusted-signing/quic...
https://github.com/Tylous/ScareCrow
Here is one on github:
https://github.com/NavyTitanium/Fake-Sandbox-Artifacts
There is plenty of dumb malware.
Security folks seem to get overly focused at times on the most sophisticated attackers and forget about the unwashed hordes.
Unfortunately (at least outside of HN) "people who understand what this tool does" probably isn't a subset of "people who would run that "exe"."
No different from McAfee, Trend Micro, Symantec. Oh, but those are brand names you can trust, like Coca-Cola and Kellogg's Corn Flakes.
This is literally the first occurrence of that string on the internet.
We need a chatGPT version of LMGTFY...
But perhaps I'm wrong
There are ways to establish trust, you aren’t doing any of them.
I have been recommending this for over 10 years now.
[1] https://pastebin.com/JVZy4U5i
I agree with everything else you said.