There's a difference between QoL features and reliability functions; to me, at least, that means that I can't justify trying to adopt it in my OSS projects. It's too bad, too, because this looks otherwise fantastic.
It's pretty easy to do. I recommend it, if you already use Fedi for anything.
In case anyone else needs to do something similar: Log in to your Amazon account > Manage Your Content and Devices
Copy the cookie and save it to a file ('cookie.txt'): https://github.com/yihong0618/Kindle_download_helper?tab=rea...
Execute the Python utility (this example accesses amazon.co.uk):
python kindle.py --cookie-file cookie.txt --uk -o DOWNLOADS --device_sn [Your Kindle serial no.] --mode all
You can also download a JSON list containing details of all your Kindle books: python kindle.py --cookie-file cookie.txt --uk --list --device_sn [Your Kindle serial no.]
There are other methods outlined in the README, but this worked best for me. I also extracted a list of cover URLs from the JSON file using a basic Python script (with output redirected to a file 'covers.txt'):
import json

with open('book-list.json') as f:
    json_data = json.load(f)

for book in json_data:
    print(book['productImage'])
And then I used wget to download them all too: wget --wait=3 --random-wait --input-file=covers.txt
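For anyone who prefers to stay in Python, here's a rough stand-in for the wget step using only the standard library. The file and directory names are placeholders, and the sleep range just approximates wget's --wait=3 --random-wait politeness delay:

```python
import os
import random
import time
import urllib.request

def cover_filename(url):
    """Derive a local filename from a cover URL (drops any query string)."""
    return os.path.basename(url.split("?")[0])

def download_covers(url_file="covers.txt", dest="covers"):
    """Download every URL listed in url_file into dest, one per line."""
    os.makedirs(dest, exist_ok=True)
    with open(url_file) as f:
        urls = [line.strip() for line in f if line.strip()]
    for url in urls:
        urllib.request.urlretrieve(url, os.path.join(dest, cover_filename(url)))
        # Be polite to the image host, like wget --wait=3 --random-wait.
        time.sleep(random.uniform(1.5, 4.5))
```

Not as battle-tested as wget (no retries or resume), but handy if you want to post-process the images in the same script.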
Of course, the books are still DRM'd, but it's trivial to DeDRM them later. The crucial thing was to get the files before it's too late!

* maintained a stable version of python within google, and made sure that everything in the monorepo worked with it. in my time on the team we moved from 2.7 to 3.6, then incrementally to 3.11, each update taking months to over a year because the rule at google is if you check any code in, you are responsible for every single breakage it causes
* maintained tools to keep thousands of third party packages constantly updated from their open source versions, with patch queues for the ones that needed google-specific changes
* had highly customised versions of tools like pylint and black, targeted to google's style guide and overall codebase
* contributed to pybind11, and maintained tools for c++ integration
* developed and maintained build system rules for python, including a large effort to move python rules to pure starlark code rather than having them entangled in the blaze/bazel core engine
* developed and maintained a typechecker (pytype) that would do inference on code without type annotations, and work over very large projects with a one-file-at-a-time architecture (this was my primary job at google, ama)
* performed automated refactorings across hundreds of millions of lines of code
and that was just the dev portion of our jobs. we also acted as a help desk of sorts for python users at google, helping troubleshoot tricky issues and pointing newcomers in the right direction. plus we worked with a lot of other teams: the machine learning and AI teams, the colaboratory and IDE teams, teams like protobuf that integrated with and generated python bindings, teams like google cloud who wanted to offer python runtimes to their customers, and teams like youtube, who had an unusually large system built in python and needed to do extraordinary things to keep it performant and maintainable.
and we did all this for years with fewer than 10 people, most of whom loved the work and the team so much that we just stayed on it for years. also, despite the understaffing, we had managers who were extremely good about maintaining work/life balance and the "marathon, not sprint" approach to work. as i said in another comment, it's the best job i've ever had, and i'll miss it deeply.
I feel for ya, zem; if you ever turn up at a PyCon in person, lemme buy you a drink.
I think the reason that Amazon leadership isn't bringing data to support RTO is that not only are they aware of the "Urban Doom Loop" that this article is referring to, but I'd bet you a lot of money that the C-suite (S-team) has a significant investment in commercial real estate.
At least the inspector in Safari (and maybe other WebKit browsers) does something similar for CSS.
I’m all for it. It’s a pain when writing config systems (no longer just key/value, but key+value+meta), but very helpful. It can be a pain for things like JSON where libraries don’t give you that type of diagnostic information easily, however.
I mean, I believe you, but ... what on earth IS all of that?