Authentication: "Prove you are you" (hash functions)
Secure Storage: "Keep this secret but let me get it back later" (encryption)
Identification: "Track who/what this is" (UUIDs/tokens)
And you should also know...
Best practices for password storage use one-way hash functions (like bcrypt, Argon2, or PBKDF2).
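To make the one-way property concrete, here is a minimal sketch using the standard library's PBKDF2 (bcrypt and Argon2 need third-party packages; the function names and iteration count here are illustrative choices, not from the source):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """One-way hash: you can verify a guess, but never recover the password."""
    salt = salt or os.urandom(16)  # random per-password salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the guess and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```

You store only the salt and digest; at login you re-hash the submitted password and compare, which is exactly why this works for authentication but not for "get it back later" storage.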
There are cons beyond performance. For example, Docker complexity: you need to learn a new file format, a new set of commands, a new architecture, and new configurations, and spend hours reading yet another set of documentation, or buy and read another 300-page O'Reilly book, to master something that again has its own pros and cons.
For me? It's not necessary and I even know some Docker Kung-Fu but choose not to use it. I do use Docker Desktop occasionally to run apps and services on my localhost - it's basically a Docker Compose UI, and I really enjoy it.
Sorry, but you've got no idea what you're talking about.
You can also run OCI images, often called Docker images, directly via systemd's nspawn, because Docker doesn't create overhead by itself; it's at its heart a wrapper around kernel features and iptables.
You don't need Docker for deployments, but let's not use completely made-up bullshit as arguments, okay?
That doesn't necessarily mean there aren't pros to Docker, but one con is that it adds overhead and complexity that simply aren't necessary.
I think one of the most powerful features of Docker, by the way, is Docker Compose. This is the real superpower of Docker in my opinion. I can literally run multiple services and apps on one VPS / dedicated server and have it manage my network interface and ports for me? Uhmmm... yes please!!!! :)
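The "multiple services on one box" setup described above boils down to a single file like this hypothetical docker-compose.yml (service names and images are made up for illustration):

```yaml
# Compose wires the services into one network and publishes the ports for you.
services:
  api:
    image: my-python-backend:latest   # illustrative image name
    ports:
      - "8000:8000"
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      - api
```

One `docker compose up -d` brings the whole stack up; Compose handles the internal networking and port mapping that you would otherwise script by hand.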
Option 2: use Poetry
How is this different from a Dockerfile that creates the venv? Just add it to the beginning, just like you would on localhost. But that is why I love coding Python in PyCharm: it manages the venv in each project on init.
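For reference, the "just add it to the beginning" idea could look like this sketch of a Dockerfile (paths and the base image are assumptions, not from the source):

```dockerfile
# Create the venv early, mirroring what PyCharm does on localhost.
FROM python:3.12-slim
WORKDIR /app
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]
```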
I currently have distilled, compact Puppet code to create a hardened VM of any size on any provider that can run one or more Docker services, run a Python backend directly, or serve static files. With this I create a service on a Hetzner VM in 5 minutes, whether the VM has 2 cores or 48, and control the configuration in source-controlled manifests while monitoring configuration compliance with a custom Naemon plugin. A perfectly reproducible process. The startup kids are meanwhile building snowflakes in the cloud, spending many KEUR per month to have something that is worse than what DevOps pioneers were able to do in 2017. And the stakeholders are paying for this ship.
I wrote a more structured opinion piece about this, called The Emperor's New Clouds:
You mentioned a Python backend, so literally just replicate the build script directly on the VPS: "pip install -r requirements.txt" > "python main.py" > "nano /etc/systemd/system/myservice.service" > "systemctl start myservice" > Tada.
You can scale instances by just throwing those commands into a bash script (build_my_app.sh) = your new Dockerfile... install on any server in xx-xxx seconds.
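A minimal sketch of such a build_my_app.sh, under the assumption of a systemd host (the service name, paths, and unit contents are illustrative; the pip and systemctl steps are left commented since they need the real app and root access). Note that "pip install requirements.txt" as written above would try to install a package literally named requirements.txt; the -r flag is needed, and the start command is systemctl, not systemd:

```shell
#!/usr/bin/env bash
# build_my_app.sh -- hypothetical replication of the deploy steps above.
set -euo pipefail

APP_DIR="${APP_DIR:-/opt/myapp}"
SERVICE_NAME="${SERVICE_NAME:-myservice}"
# In production: UNIT_FILE=/etc/systemd/system/${SERVICE_NAME}.service
UNIT_FILE="${UNIT_FILE:-./${SERVICE_NAME}.service}"

# 1) install dependencies (note the -r flag)
# pip install -r "${APP_DIR}/requirements.txt"

# 2) write the unit file
cat > "${UNIT_FILE}" <<EOF
[Unit]
Description=${SERVICE_NAME}
After=network.target

[Service]
WorkingDirectory=${APP_DIR}
ExecStart=/usr/bin/python3 ${APP_DIR}/main.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# 3) reload and start (run as root)
# systemctl daemon-reload
# systemctl enable --now "${SERVICE_NAME}"
```

Run it on any fresh box and you get the same service definition every time, which is the "reproducible in seconds" point being made.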
Being able to achieve that depends entirely on two things:
1) deploying capital in the current fund on 'sexy' ideas so they can tell LPs they are doing their job
2) paper markups, which they will get, since Ilya will most definitely be able to raise another round or two at a higher valuation, even if it eventually goes bust or gets sold at cost.
With 1) and 2), they can go back to their existing fund LPs and raise more money for their next fund and milk more fees. Getting exits and carry is just the cherry on top for these megafund VCs.
I mean, it probably depends on the LP and what their vision is. Not all apples are red; they come in many varieties, some for cider, others for pies. Am I wrong?
I'm a happy Apple user, love the OS...just saying.