If you have seen the electric wiring inside houses from the 1920s, that's the state of IoT and internet-connected devices today. Bad things have to happen first before standardization and regulation follow.
Consumer devices and gadgets are not my main concern. Internet-connected building automation is in a similarly sorry state. Someone will pull off a large-scale attack on apartment automation systems; maybe just a single manufacturer is targeted and, as a result, 5-15% of apartments go nuts at once. Just messing with the air conditioning can kill old and sick people before things get fixed.
Messing with the air conditioning can affect a lot more than just the people who have the vulnerable IoT air conditioner, if it manages to bring the grid down.
The grid is able to cope with fluctuations in demand, but that's a totally different ballgame than switching massive loads on and off, synchronized to within something like 20 milliseconds (a single 50 Hz cycle), in a controlled, intentional and malicious way - and potentially worse, the attacker could observe the grid and react to the countermeasures (e.g. to detect how quickly the grid reacts, and trigger oscillations in some system never designed to deal with something like that).
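As a toy illustration of why synchronized, reactive load switching is a different beast from ordinary demand fluctuation, here is a minimal sketch. It is a vastly simplified single-bus model with assumed, order-of-magnitude constants, not a real power-system study: frequency deviation follows a simplified swing equation, a governor with droop control ramps generation back with a time lag, and the attacker watches the frequency and always toggles the botnet's load the "wrong" way, fighting the governor's correction.

```python
H = 4.0       # inertia constant, seconds (assumed)
F0 = 50.0     # nominal frequency, Hz
DROOP = 20.0  # governor gain: p.u. power per p.u. frequency deviation (assumed)
TG = 2.0      # governor time constant, seconds (assumed)
DT = 0.02     # simulation step = one 50 Hz cycle

def simulate(attack_pu: float, seconds: float = 60.0) -> float:
    """Worst frequency deviation (Hz) when an attacker can toggle
    `attack_pu` per-unit of total system load in sync with the grid."""
    df = p_gov = worst = 0.0
    for _ in range(int(seconds / DT)):
        # Attacker strategy: shed load while frequency is high, reconnect
        # while it is low -- always pushing against the governor.
        load = -attack_pu if df >= 0 else attack_pu
        p_gov += DT * (-DROOP * df / F0 - p_gov) / TG  # lagging droop response
        df += DT * (F0 / (2 * H)) * (p_gov - load)     # simplified swing equation
        worst = max(worst, abs(df))
    return worst

if __name__ == "__main__":
    for pu in (0.01, 0.05, 0.10):
        print(f"attacker toggles {pu:.0%} of load -> "
              f"worst deviation ~{simulate(pu):.2f} Hz")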
I personally see Brickerbot as the IoT version of Shodan. Port scanning and probing are considered illegal and immoral in many countries around the world, yet Shodan has made the Internet more secure by opening access to port scanning and making it impossible for companies to just "hide" from new tools.
I'd prefer to see ISPs/governments taking action against dangerous IoT devices in their network (sending probes to vulnerable devices and blocking Internet access until the owners of the devices have secured their shit or provide proof of running a honeypot). We can't really expect such measures from companies right now, but a tool like Brickerbot might kickstart a movement.
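The kind of probe an ISP could send can be sketched roughly as below. The host list is a hypothetical placeholder (TEST-NET addresses), and this only tests reachability of telnet, the classic IoT botnet entry point; a real program would need legal authority, rate limiting, and actual vulnerability checks rather than a bare connect test.

```python
import socket

def telnet_exposed(host: str, timeout: float = 2.0) -> bool:
    """Return True if the host accepts TCP connections on port 23 (telnet)."""
    try:
        with socket.create_connection((host, 23), timeout=timeout):
            return True  # port open: device may well accept default credentials
    except OSError:
        return False  # closed, filtered, or unreachable

if __name__ == "__main__":
    # 192.0.2.0/24 is TEST-NET-1: placeholder addresses, never routed.
    for host in ("192.0.2.10", "192.0.2.11"):
        print(host, "telnet open:", telnet_exposed(host, timeout=0.5))
```

A positive hit would then trigger the policy step described above: notify the subscriber and quarantine the connection until the device is secured.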
I'd prefer the bot to just change the password and disable vulnerable services, though (which might still brick devices if their web servers are vulnerable). Still, I believe any device that fell to Brickerbot would have fallen to some other botnet within days anyway, so the bot is not inherently bad in my opinion.
This problem should start disappearing as soon as governments introduce legislation making the parties producing software or hardware responsible for the stuff they dump on the open market. Until then, steps have to be taken to stop DDoS attacks, as they are getting worse and worse.
> Port scanning and probing is considered illegal and immoral in many countries
Do you know of a country where this is true? I know there are some broadly worded laws in England against the use of "hacking tools" but there are so many legitimate uses of port scanning that it'd be hard to explain why a port scanner is any more of a hacker tool than traceroute is.
In the UK it's not illegal per se, but if you go by the letter of the law you could be convicted under the Computer Misuse Act for it. Then again, you could also be convicted for sending an HTTP GET request to a server you don't have permission to access - the law is rather broad.
I'm not sure there's even been a case to test it, though, and as the general population becomes more tech-savvy it seems unlikely such a conviction would be made for port scanning on its own.
There are a bunch of cases documented over at nmap.org [0]. Also, regardless of the law, if you try a full probing port scan of certain military IP ranges in many countries, you might get a visit from a couple of not-so-friendly people.
[0] https://nmap.org/book/legal-issues.html
>as soon as legislation is introduced by governments to make the parties producing software or hardware responsible for the stuff they dump on the open market.
AKA the plausible end of open source. Once writing any program and sharing it can get you sued, people will stop doing that.
That depends on how the legislation is written, of course. If you provide an open source package for free, there's no real transaction, so there's no one to hold responsible. The same goes for a free closed source product. But if you sell software and neglect security issues within the warranty period of said product, you should be held accountable, open source or not.
Such legislation should not be there to allow (class action) lawsuits but should be upheld by a government body, responding to complaints from the general public.
Problems with open source can also be solved by requiring companies who do not wish to take responsibility to let users either sign a waiver (explicit, no TOS bullshit) or return the product immediately for their money back. With open source software, no money changes hands, so there's no problem. With closed source software, this highlights the vendor's behaviour regarding security support and might make consumers think twice before going with certain vendors.
Another way to do this would be to require vendors to put a clearly visible, standardised sticker/tag/image on their products detailing the support life cycle (warranty / software updates / security updates), similar to the nutrition information found on many food products. That way, consumers can shop around, or hold a company responsible if their smart thermostat suddenly stops working because the company behind it got bought out by Google.
There are tons of variations of bases for legislation, but I don't see why physical and digital goods are that different.
If my CCTV system short-circuits and causes a fire, the company behind it can be held responsible for not recalling the devices if the flaw was well known. If my CCTV camera has a known flaw that lets hackers in without authentication to record my alarm code so they can break in, suddenly we're in the wild west of software support, where you're on your own. Why is there such a difference?
If they can be bricked, perhaps they should be bricked. Maybe this is something we should promote going forward to encourage security for IoT? Maybe we should hold a Brickcon where security researchers try to develop ways to brick insecure devices before they become a threat.
No vigilante justice required. Just make the companies liable for the damage they cause when their products turn into a botnet.
Why can't products have a "declaration of security", like they have for EMI compatibility, safety standards and other such things? Declare that the manufacturer has taken reasonable steps to make the device secure and is liable for damage if that turns out to be untrue.
We're moving in that direction in the UK. There's no legislation around it yet, but the government recently published the snappily named Code of Practice for Consumer IoT Security [1], which, if rumours I've heard are correct, was basically published with the message that manufacturers can either comply voluntarily or deal with it becoming legislation in the future.
When I first heard about it I was pretty dubious given the government's track record on regulating technology, but it's actually a really solid document, covering 13 guidelines which are specific enough to be useful while not going into technical detail that will go out of date:
1. No default passwords
2. Implement a vulnerability disclosure policy
3. Keep software updated
4. Securely store credentials and security-sensitive data
5. Communicate securely
6. Minimise exposed attack surfaces
7. Ensure software integrity (this is probably my least favourite guideline, as it basically says you should check signatures on all firmware, by extension shutting down people's ability to control their own hardware with custom firmware)
8. Ensure that personal data is protected
9. Make systems resilient to outages
10. Monitor system telemetry data
11. Make it easy for consumers to delete personal data
12. Make installation and maintenance of devices easy
13. Validate input data
[1] (PDF) https://assets.publishing.service.gov.uk/government/uploads/...
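Guideline 1 ("no default passwords") in particular has a well-known, cheap fix: derive a unique per-device password at the factory instead of shipping every unit with "admin/admin". A minimal sketch, where `FACTORY_SECRET` and the serial-number format are hypothetical placeholders:

```python
import base64
import hashlib
import hmac

# Kept off-device, per production batch, in a real deployment (assumed value).
FACTORY_SECRET = b"replace-with-per-batch-secret-kept-off-device"

def default_password(serial: str, length: int = 12) -> str:
    """Derive a unique, reproducible default password from a device serial.

    The factory prints this on the device label; knowing one device's
    password reveals nothing about any other device's.
    """
    digest = hmac.new(FACTORY_SECRET, serial.encode(), hashlib.sha256).digest()
    # Base32 gives characters that are unambiguous and easy to print on a label.
    return base64.b32encode(digest).decode()[:length].lower()
```

The same HMAC construction means the manufacturer can regenerate a lost label from the serial number without storing any password database at all.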
> Why can't products have a "declaration of security", like they have for EMI compatibility, safety standards and other such things?
Because it's a moving target. A light switch that isn't going to burn my house down when I buy it will still be safe in 10 or 20 years. A "secure" piece of software of even minimal complexity almost certainly has many severe bugs yet to be discovered.
I worry that the effect of legislation like this would mean you could no longer buy a $25 router to hack around with or put OpenWRT on - the legal liability would make such products non-viable, leaving only expensive enterprise grade stuff for purchase. Maybe that's for the greater good in the long run, but it would still be something of a loss.
> No vigilante justice required. Just make the companies liable for the damage they cause when their products turn into a botnet.
The hope would be that the former leads to the latter. "Making companies liable for doing a shoddy job" is rarely something that happens on its own without pressure from below, and that pressure is usually a reaction to incidents that hurt people.
The problem is, most consumers won't know their "smart" lights are wreaking havoc somewhere across the Internet. If you brick the device, the consumer will notice it stopped working and will (hopefully) file a warranty claim. Without end-user pushback, secure IoT will remain a pipe dream.
Before you do this, please make a law that forces the manufacturers to replace the devices free of charge when they get bricked due to a security problem.
Having a known security bug unpatched for longer than X days should be grounds for a warranty refund. That would get companies moving, since there'd be a real-world cost to not dealing with security bugs.
Or perhaps they could be fixed? There are plenty of examples of worms released with the express intent of patching a particular vulnerability (https://en.wikipedia.org/wiki/Anti-worm)
That is awesome. Probably still illegal, but viruses and worms that patch the vulnerabilities they use to spread are a million times better than ones that use those vulnerabilities to do more damage.
The original title would be good enough for this story imho.
Worked in a bug bounty program for a spell, there are some young folks out there with borderline scary levels of talent and tenacity. Making this about the age of the person doesn't really add anything. (This is coming from a relative dinosaur, so maybe I'm just age-sensitive haha)
In high school and middle school you also have a lot more flexibility to devote a substantial amount of time to this type of stuff. You are right, it can result in really technically literate individuals.
I'm going to suggest that maybe this is a good thing. If the worm is really good at its job, it can take out all those zombie IoT devices that are being used for botnets. And maybe it will act as a wake-up call to consumers, regulators and, perhaps, companies.
Not that I condone destroying people's property to accomplish that, but at least there is a potential upside.
I recently read The Shockwave Rider [1], in which the word 'worm' was first coined, thanks to the discussion on HN [2] about Stand on Zanzibar. I can recommend both books to those into sci-fi, and would like to thank the community.
[1] https://en.wikipedia.org/wiki/The_Shockwave_Rider [2] https://news.ycombinator.com/item?id=19879830
Great article, except for the Iran smearing. Come on ZDNet, you know better. Out of the thousands of IP addresses related to this attack, including a command-and-control server, and knowing full well that an attack from a VPN in Iran in no way proves the attacker is from Iran or even Iranian, you still managed to insert a whole paragraph titled "Attacks carried out from Iranian server", even though the researcher says the IP only "appears" to be from Iran, and he describes one (1!) attack from that IP. This wasn't even the command-and-control server.
And that 14-year old is living in Europe, not Iran.
Please leave the propaganda out of your tech news, zdnet.
Like I said, the 14-year-old lives in, and comes from, Europe. Cashdollar said so. Yet you fail to mention that, and despite the whole world knowing how explosive the political situation between the US (where YOU are based) and Iran is at the moment, you selectively mention only Iran and no other country.
So yeah, factually correct maybe, but very biased by selective editing. That's textbook propaganda. If you don't see that you're part of it.
Regardless of the importance of the Iranian IP to the operation, it is of very little significance to the story. Mentioning it only really serves to act as clickbait for people who don’t understand that it’s possible to stand up a server anywhere in a few minutes, and who are predisposed to think that an Iranian IP makes everything extra ominous.
Interesting that you interrupted it in this way. Maybe that speaks more of your own biases than it does of those in the HN community? :)
Anyway, my immediate thought (before the article got to the teen) was that this attack might have been spillover from an attack by the US on Iranian systems. Or at least a hint at how exposed Iran's digital infrastructure might be to future attack from the US in light of recent events. Or maybe how this will get Iran to tighten up its digital infrastructure before such an attack from the US could happen.
I also immediately didn't think that the origin of Cashdollar's attack had anything to do with the origin of the creator or perpetrator.
Because its a moving target. A light switch that isn't going to burn my house down when I buy it will still be safe in 10 or 20 years. A "secure" piece of software of even minimal complexity almost certainly has many severe bugs yet to be discovered.
I worry that the effect of legislation like this would mean you could no longer buy a $25 router to hack around with or put OpenWRT on - the legal liability would make such products non-viable, leaving only expensive enterprise grade stuff for purchase. Maybe that's for the greater good in the long run, but it would still be something of a loss.
The hope would be that the former leads to the latter. "Making companies liable for doing a shoddy job" is rarely a thing that happens on its own, without pressure from below that's usually a reaction to incidents that hurt people.
No airplanes required. Just make flying cars.
It's a bit like guerilla pot hole repair crews: https://www.citylab.com/equity/2017/03/portland-anarchists-w...
I'm the ZDNet reporter who wrote the story.
What in God's green earth are you talking about?
The article says the hacker's server is rented from an Iranian company. And yes, despite your ignorant claims, the IP address is the C2 server.
What imagined propaganda are you talking about?
But maybe like me they turn on a VPN and are now operating out of "China" or "Turkey" or, gasp, "San Francisco".
I think SF would be a good look to potential investors.
Probably just a typo but in case not: interpreted.