Google has additional information about IP addresses that updates dynamically based on cell phone, wifi, and other magic usage, so maybe ask them if they have some JavaScript that queries their site for more specific city/state details. Also call Pornhub and ask how they were blocking specific states to meet legal requirements.
The law in question requires "commercially reasonable efforts"
Personally I'd say none at all, unless the government itself provides it as a free service, takes on all the liability, and makes it simple to use.
It also defines personally identifiable information as including "pseudonymous information when the information is used by a controller or processor in conjunction with additional information that reasonably links the information to an identified or identifiable individual." But it doesn't specify what it means by 'controller' or 'processor' either.
If a hobbyist just sets up a forum site, with no payment processor and no identified or identifiable information required, it would seem reasonable that the law should not apply. But I'm not a lawyer.
Clearly, however, attempting to comply with the law just in case by requiring ID would then make it applicable, since that is personally identifiable information.
The net utility of AI is far more debatable.
You can still run a train on those old tracks. And it'll be competitive. Sure you could build all new tracks, but that's a lot more expensive and difficult. So they'll need to be a whole lot better to beat the established network.
But GPUs? And with how much tech has changed in the last decade or two and might in the next?
We saw cryptocurrency mining go from CPU to GPU to FPGA to ASICs in just a few years.
We can't yet tell where this fad is going. But there's fair reason to believe that, even if AI has tons of utility, the current economics of it might be problematic.
On the one hand, software is like a living thing. Once you bring it into this world, you need to nurture it and care for it, because its needs, and the environment around it, and the people who use it, are constantly changing and evolving. This is a beautiful sentiment.
On the other hand, it's really nice to just be done with something. To have it completed, finished, move on to something else. And still be able to use the thing you built two or three decades later and have it work just fine.
The sheer drudgery of maintenance and porting and constant updates and incompatibilities sucks my will to live. I could be creating something new, building something else, improving something. Instead, I'm stuck here doing CPR on everything I have to keep alive.
I'm leaning more and more toward things that will stand on their own in the long-term. Stable. Done. Boring. Lasting. You can always come back and add or fix something if you want. But you don't have to lose sleep just keeping it alive. You can relax and go do other things.
I feel like we've put ourselves in a weird predicament with that.
I can't help but think of Super Star Trek, originally written in the 1970s on a mainframe, based on a late 1960s program (the original mainframe Star Trek), I think. It was ported to DOS in the 1990s and still runs fine today. There's not a new release every two weeks. Doesn't need to be. Just a typo or bugfix every few years. And they're not that big a deal. -- https://almy.us/sst.html
I think that's more what we should be striving for. If someone reports a rare bug after 50 years, sure, fix it and make a new release. The rest of your time, you can be doing other stuff.
If it's me running it, that's fine. But if it's someone else that's trying to use installed software, that's not OK.
However, a decade ago, a coworker and I were tasked with creating some scripts to process data in the background, on a server that customers had access to. We were free to pick any tech we wanted, so long as it added zero attack surface and zero maintenance burden (aside from routine server OS updates). Which meant decidedly not the tech we work with all day, every day, which needs constant maintenance. We picked Python because it was already on the server (even though my coworker hates it).
A decade later and those Python scripts (some of which we had all but forgotten about) are still chugging along just fine. Now in a completely different environment, on a different server on a completely different hosting setup. To my knowledge we had to make one update about 8 years ago to add handling for a new field, and that was that.
Everything else we work with had to be substantially modified just to move to the new hosting. Never mind the routine maintenance every single sprint just to keep all the dependencies and junk up to date and deal with all the security updates. But those Python scripts? Still plugging away exactly as they did in 2015. Just doing their job.
robots.txt's main purpose back in the day was curtailing penalties in the search engines when you got stuck maintaining a badly built dynamic site that had tons of dynamic links and effectively got penalized for duplicate content. It was basically a way of saying "Hey search engines, these are the canonical URLs; ignore all the other ones with query parameters or whatever that give almost the same result."
It could also help keep 'nice' crawlers from getting stuck crawling an infinite number of pages on those sites.
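For illustration, a rough sketch of what those rules looked like in practice (the paths here are made up, not from any real site): you blocked the parameterized and infinite sections and left the canonical pages crawlable.

    # Hypothetical example: keep crawlers out of the duplicate/parameterized views
    User-agent: *
    Disallow: /browse.php
    Disallow: /calendar/
    Disallow: /search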
Of course it never did anything for the 'bad' crawlers that would hammer your site! (And there were a lot of them, even back then.) That's what IP bans and such were for. You certainly wouldn't base it on something like User-Agent, which the user agent itself controlled! And you wouldn't expect the bad bots to play nicely just because you asked them.
That's about as naive as the Do-Not-Track header, which was basically kindly asking companies whose entire business is tracking people to just not do that thing that they got paid for.
Or the Evil Bit proposal, which suggested that malware should identify itself in the packet headers. "The Request for Comments recommended that the last remaining unused bit, the "Reserved Bit" in the IPv4 packet header, be used to indicate whether a packet had been sent with malicious intent, thus making computer security engineering an easy problem – simply ignore any messages with the evil bit set and trust the rest."
I put the blame solely on the management of Borland. They had the world-leading language, and went off chasing C++ and "Enterprise" instead of just riding the wave.
When Anders gave the world C#, I knew it was game over for Pascal, and also Windows native code. We'd all have to get used to waiting for compiles again.
No kid or hobbyist or person just learning was spending $1400+ on a compiler. Especially as the number of open-source languages and tools was increasing rapidly by the day, and Community Editions of professional tools were being released.
Sure, they were going for the Enterprise market money, but people there buy based on what they're familiar with and what they can easily hire lots of people to work with.
Last I looked they do have a community edition of Delphi now, but that was slamming the barn door long after the horses had all run far away and the barn had mostly collapsed.
> After some mental gymnastics weighing if I should continue with Obsidian, I found solace when asking myself "Can I see myself using this in 20 years?". I couldn't. The thought of cyclically migrating notes from one PKMS to another every 5 years, as I had done from Evernote to Notion to Obsidian, made me feel tired.
In point of fact, this is an argument IN FAVOR of Obsidian. While the editor might be proprietary, the notes themselves are just standard Markdown. If somehow all the copies of Obsidian magically disappeared off the earth tomorrow, I could easily switch over to Emacs org mode, VS Code, or literally anything else.
> Obsidian was a great tool for me personally for a long time. But I felt frustrated when I wanted to access my notes on my phone while on-the-go and saw that I had to pay for this feature.
Again, a little bit odd considering that the author is technically savvy enough to write an entire PKMS but didn't seem to consider that you can just check your Markdown notes into a Git repository and sync with the native Android/iOS Obsidian app on a mobile device. All my notes sync up to Gitea hosted on my VPS and it works relatively seamlessly.
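For what it's worth, here's a rough sketch of the desktop side of that setup; the vault path and the Gitea remote URL are placeholders, not the real ones:

    cd ~/notes-vault            # the Obsidian vault is just a folder of .md files
    git init
    git add .
    git commit -m "import notes"
    git branch -M main
    git remote add origin https://gitea.example.com/me/notes.git   # hypothetical remote
    git push -u origin main

On the phone you still need some Git client to pull and push the same folder the mobile app opens as a vault; the exact tooling varies by platform.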
I'm glad the author had fun. Personally, I'm very happy with Obsidian and the plugin architecture has made it easy for me to extend it where necessary.
But mostly I don't. My work notes are on my work laptop and my personal notes are on my PC. I might copy them onto a mobile device if I'm traveling, but I might not bother. Mobile devices don't have the good keyboard and large screen to really be useful for stuff like that. But I have copied them over before just in case I wanted to find something in them.
You can also put a shortcut to a program on your desktop and - horror of horrors! - clicking the shortcut will execute the program! How crazy is that?
I get that some people don't want the markdown functionality in Notepad (you can turn it off very easily, btw). But I don't understand why the idea of hyperlinks is suddenly being blasted as a terrible security vulnerability.
Surely there has to be more to this, in order to generate so much hubbub, than just people not understanding the basic concept of hyperlinks?