The GTA SA bug was a read of an uninitialized variable. The value it contained was correct purely by chance: it had been placed there by a previous invocation of the function and happened never to be overwritten in between. Any change to the functions called between those two invocations could have altered that stack memory.
The aforementioned check, on the other hand, places a random value below the stack pointer. By design it calls no external/OS/game functions, so it is essentially isolated ("pure") from any interaction with third-party code.
Such was the case for SecuROM in its early days. It featured the CRC checks mentioned: if any single byte was changed, including by an inserted INT3 (breakpoint) instruction, it would crash. Here it's very unlikely that it won't crash, rendering the game inoperable.
That's basically the whole point of any anti-tamper product. I just think you picked a terrible example of a feature that could break due to OS changes specifically.
> Meaning that if Windows ever were to overwrite that region for whatever reason, will trigger the crash.
We're talking about random stack memory inside of a virtual machine that likely doesn't call any external code whatsoever. There should be no real way for Microsoft to accidentally corrupt this memory.
I would imagine many things from the SecuROM era live on in Denuvo.
But if you read the article you will realize that certain games will not work in the future due to Denuvo.
"This destroyed any exception-based hooking since majority of the time an exception is triggered, Windows will write an EXCEPTION_RECORD high up in unused stack space. You can probably see where this is going. Now, whenever the CPUID is hooked via an exception, that important value will become overwritten with an EXCEPTION_RECORD, causing undefined behaviour later on. I believe this can be bypassed if you attach a debugger to the process and set certain flags when it comes to exception handling, but the method of patching every hardware check is still cumbersome due to randomness anyway."
As Windows matures, behaviour can change, breaking certain stuff.
How do you expect the aforementioned tech to break the games it's on? If anything, its "breaking" will just make the anti-tamper feature ineffective.
For 98% of systems, there is probably no reason to scan every file on opening it. If people have enabled that setting, or left that default on, then that's their problem; it's not Windows Defender's fault.
My current AV dashboards are screaming at me that I'm only 35% protected. That's because I've been deliberately sparing about which paranoid settings to enable, based on my rather limited and simplistic threat modeling. Installing AV software comes with the understanding that it can steal resources, but they nearly always have plenty of settings that can be disabled to win back your system responsiveness.
I am beginning to believe that commenters giving bingo-card winnings are not the brightest bulbs in the Windows MCSE pool, honestly. I can relate: Linux and Unix admin in general is far more intuitive and comfortable for me, so I have generally stayed on that side of things, but knowing how to properly set up Windows is an indispensable life skill for anyone.
There is no such setting for Defender. The file scanning is either on, or Defender is completely off. To even access some of the better features like ASR rules (which are disabled by default) you need third-party software or have to pay for their enterprise offering.
Consumer Defender literally has like 4 toggles in total. It's a dumbed down and extremely permissive AV because it runs on every Windows machine.
We are in the “Windows users complain endlessly and refuse to switch to Linux” bingo card right now. Windows has been this way since before you bought that mini PC.
Exactly. It's the same legacy "scan every fucking thing you open" AV architecture.
Back in the days of spinning disks it probably wouldn't have been too noticeable for the AV to marshal scanning to its usermode service and have the filesystem pull the data from cache for the original request afterwards. Now that we have SSDs capable of 10 GB/s+, however, the slowdown factor is dramatically larger.
I can run ripgrep on a massive directory, make myself a cup of tea, and come back to find it still searching for matches, versus being done in under 10 seconds with Defender disabled.
Can you share how you can compile to only 10kb?
What's modern about it? Also you could have used C++ instead to remove some potential issues, and those global variables...
Use std::string and std::array or std::list, some anonymous namespaces, remove all the malloc, etc. Your code would be half the size and still compile to the same assembly language without the bugs.
What about them? In a 500 loc app there is no practical difference and there's only ~20 of them with clear purpose.
> Use std::string <...> or std::list <...> remove all the malloc, etc
> still compile to the same assembly language without the bugs.
I see you have no clue what those things actually compile down to.
Are they able to load a .so/dylib file during runtime and just call a method on it as long as they know the name of the method? How does iOS even allow that? How does an iOS app even get to load those files? Seems like that would be locked down.
Yes, usually that's the entire point of a .so/.dylib/.dll: to load it and call its functions by name.
> How does iOS even allow that? How does an iOS app even get to load those files? Seems like that would be locked down.
Because it's something that higher-level Apple interfaces might rely on. It's not a security issue in the first place; if you submit an app that obviously uses them, the message you get is:
> The use of non-public APIs is not permitted on the App Store because it can lead to a poor user experience should these APIs change.