I'm curious what the problem with Firefox is. For example, the 3d-raytrace-SP benchmark is nearly three times faster on Edge than on Firefox on my i7 laptop. That benchmark's code is very simple, mostly basic math operations and array accesses. Maybe canvas operations are particularly slow on Firefox? This seems like an example the developers should take a look at.
However, I just cannot bring myself to constantly pull the transactions down manually from multiple banks.
Many suggest automating. How does this work in practice? Are there providers like Plaid you can use? Do you build web scrapers? PDF statement parsers?
I ended up just paying YNAB the $130/year or whatever they’re at now. High wife approval factor and everything just connects. They also have an API. In theory I could just constantly backup YNAB with PTA by pulling down transactions from the API.
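That "pull transactions from the API" idea can be sketched roughly like this. This is a rough sketch based on my reading of YNAB's public v1 API (Bearer-token auth, amounts in milliunits); the token, budget id, and the `to_ledger_line` formatting are placeholders of mine, so double-check the endpoint details against the actual docs before relying on it:

```python
# Sketch: pull transactions from the YNAB API into rough ledger-style lines
# for a plain-text-accounting backup. Endpoint/auth per YNAB's v1 API docs
# as I recall them; token and budget id below are placeholders.
import json
import urllib.request

API_BASE = "https://api.ynab.com/v1"

def build_request(token, budget_id, since_date=None):
    """Build the GET request for a budget's transactions."""
    url = f"{API_BASE}/budgets/{budget_id}/transactions"
    if since_date:
        url += f"?since_date={since_date}"  # ISO date, e.g. 2024-01-01
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )

def to_ledger_line(txn):
    """Convert one transaction dict to a rough ledger-style line.
    YNAB amounts are 'milliunits' (1/1000 of a currency unit)."""
    amount = txn["amount"] / 1000
    return f"{txn['date']} {txn.get('payee_name') or 'Unknown'}  {amount:.2f}"

# Usage (needs a real personal access token and budget id):
# req = build_request("MY_TOKEN", "MY_BUDGET_ID", since_date="2024-01-01")
# data = json.load(urllib.request.urlopen(req))
# for t in data["data"]["transactions"]:
#     print(to_ledger_line(t))
```

From there it's a cron job and a git commit, which is about as low-effort as "constantly backing up" gets.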
But for most people, I understand they wouldn't enjoy what I'm doing every couple of weeks. I was using YNAB before, but with how many cards I had, something got messed up in the importer all the time. Sometimes my transactions would get duplicated or even triplicated, and then I would decline one of them only for it to pop up again a few days later. This led to very messy, inaccurate tracking. I was just fighting this thing every single day.
This is probably user error, but after wiping it three times and starting over, I just gave up and went back to keeping track mentally, which worked, but I needed something better.
As mentioned in another comment, you can't do this on the server without running expensive line-of-sight checks for every single player, all the time, because it's not just your session running on a single server; each machine is hosting multiple sessions.
And let's say you did this: now you have a latency problem, because most modern games, to feel fluid, use client-side prediction with server reconciliation. That's what makes modern games feel responsive; if you put a constant server check in that path, you've lost it.
No matter what people say online, it isn't just "move all of it to the server"; there is data the client needs to know and can't be spoon-fed by the server.
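The prediction/reconciliation loop mentioned above can be sketched with a toy 1-D example. This is my own illustrative sketch, not any particular engine's code: the client applies inputs immediately, tags each with a sequence number, and when an authoritative server state arrives it snaps to it and replays the inputs the server hasn't acknowledged yet:

```python
# Toy 1-D sketch of client-side prediction with server reconciliation.
SPEED = 5  # units moved per input; arbitrary

def apply_input(pos, direction):
    return pos + direction * SPEED

class Client:
    def __init__(self):
        self.pos = 0
        self.seq = 0
        self.pending = []  # inputs the server hasn't acknowledged yet

    def press(self, direction):
        """Predict locally right away; queue the input for the server."""
        self.seq += 1
        self.pending.append((self.seq, direction))
        self.pos = apply_input(self.pos, direction)  # no waiting on the server

    def on_server_state(self, server_pos, last_acked_seq):
        """Authoritative update: snap to it, drop acked inputs,
        then replay the still-unacked ones on top (reconciliation)."""
        self.pos = server_pos
        self.pending = [(s, d) for s, d in self.pending if s > last_acked_seq]
        for _, d in self.pending:
            self.pos = apply_input(self.pos, d)

c = Client()
c.press(+1); c.press(+1); c.press(+1)   # predicted position: 15
# Server has only processed the first input (pos 5, acked seq 1);
# replaying the two unacked presses brings the client back to 15.
c.on_server_state(server_pos=5, last_acked_seq=1)
```

The point is that the client moves before the server answers; bolt a blocking server check onto every action and you've thrown that away.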
That's the point of many things in life: you just make it more difficult, and most people won't be bothered to attempt to circumvent whatever it is.
There will still be people who circumvent it, but fewer than if you just said fuck it.
* The person's time is worth $0/hr
* The person is not disabled
* The person doesn't need to carry anything that can't fit in a backpack
* The weather is clement enough
* The journey is short enough
The assumptions are always implicit.
Realistically, once you get fitter from riding the bicycle, your commute times will drop, and if you're in a city like the one in the article, your trips aren't very far anyway. If anything, driving is more annoying.
Last time I checked, going from 16GB to 32GB in a Mac Mini was more expensive than (or as expensive as) buying two 16GB Mac Minis.
Especially when your project is new, it's often not clear whether it will become something serious enough that you have to worry about things like people cloning it.
A lot of people say the GPL can prevent these situations, but from my experience working for large enterprises, a lot of the offerings they build literally don't even change the code. Thus they have nothing to contribute and no obligation to release any source code.
The GPL also doesn't prevent a corporation from building software in front of whatever GPL service it is. It's like the Linux kernel: why bother changing the kernel when you can build software on top of it, change nothing, and thus have nothing to release?