It is no small feat to create compatibility for modern Python features like type hints and async in a library that has its roots in Python 2. It has absolutely exceeded expectations in that regard.
I couldn't disagree more. Identity-map-based ORMs are _awful_ to use, in almost every way.
Probably my favourite "y'all don't understand how databases work" moment was when they "reserved" space for MySQL enums; for example, for the "active" column it would be something like:
enum('active', 'deleted',
     '_futureval1', '_futureval2', '_futureval3', '_futureval4', '_futureval5',
     '_futureval6', '_futureval7', '_futureval8', '_futureval9')
Enums don't work like that at all; an enum is just a mapping of an int to a string value for readability, and you don't need to "reserve" space for future values any more than you do for ints. Adding a new enum value is easy and cheap. Removing one is not, as it requires a full scan of all rows to verify the value is no longer used. Even worse, you couldn't easily rename enum labels (at the time; I don't know if you can now), making the whole scheme worse than useless.

Since this was all on large tables and the effort to fix it was relatively large without adding much business value, we never fixed it. We were basically stuck with it. It sure as hell annoyed me every time I looked at it.
I'm not a DBA either, but spending about 5 seconds on the documentation for "enum" would have prevented this. It really doesn't require a PhD in SQL.
The issue is that MySQL doesn't use a full int to store enums: an enum with up to 255 values is stored in 1 byte; past that it takes 2 bytes. Adding the value that crosses that boundary requires rewriting the entire table. So yes - it can make sense to "reserve space" to avoid a future table rewrite.
You also had to be careful to include `ALGORITHM=INPLACE, LOCK=NONE` in your `ALTER TABLE` statement when changing the enum, or it would lock the table and rewrite it.
Hardware addition is filled with edge cases that cause it not to work as expected; I don't see that as much different from the memory-safety edge cases in most programming models. So, by the same logic, is there no way to reason about any program that uses hardware addition?
The switch to a tag-based architecture in 1.0 completely broke the database for our use case: it could no longer handle high metric cardinality. Things improved a bit around 1.2, but never got back to something usable for us.
We ultimately moved to ClickHouse for time-series data and haven't had to think about it since.
Where is Influx at now? Can it handle millions of metrics again? What would bring us back?
Here are my gripes:
1) For me, one of the biggest selling points is client code gen (https://github.com/OpenAPITools/openapi-generator). Basically it sucks, or at least it sucks in enough languages to spoil it. The value prop is: define the API once, then code-gen the client for Ruby, Python, and Scala (or insert your languages here). In practice there are often a half dozen clients for each language, and often they are simply broken (the generated code just straight up doesn't compile). Of the ones that do work, you get random PRs accepted that impose a completely different ideological approach to how the client works. It really seems like any PR is accepted with no overarching guidance.
2) JSONSchema is too limited. We use it for a lot of things, but it makes some things incredibly hard. This is compounded by the seemingly limitless number of versions, or drafts, of the spec. If your goal is interop (which it probably is if you are using JSON), you have to go out and research the lowest-common-denominator draft of JSONSchema supported across the various languages you want to use, and limit yourself to that (probably draft 4 or draft 7).
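To make the draft problem concrete, here is a hypothetical schema written to stay within draft 4: `const` only arrived in draft 6 and `if`/`then`/`else` in draft 7, so a portable schema spells a constant as a one-element `enum` and skips conditional keywords entirely.

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "properties": {
    "status": { "enum": ["active"] },
    "amount": { "type": "integer", "minimum": 0 }
  },
  "required": ["status", "amount"]
}
```

A validator that only speaks draft 4 will happily consume this, while a schema leaning on newer keywords silently validates nothing (unknown keywords are ignored, not rejected).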
On the pros side:
It does make pretty docs. I kinda wish it would just focus on this and, in the process, not be as strict; I think it would be a better project.
Our test harness didn't catch it (weird combination of reasons, too long ago for me to remember the details) & it rolled out.
Shortly thereafter I got an anxious customer call: we'd charged their debit card $2500.00 instead of $25.00 and they'd gotten an overdraft notice. At first I was incredulous ("how is that even possible!?"), then I remembered that we'd just version-bumped ActiveMerchant.
My endocrine response as I realized what must have happened was amazing to experience - the sinking feeling in my gut, hairs standing up, sweaty palms, dread, pupils dilating, and my internal video camera pulling back poltergeist-style in a brief out-of-body experience.
Fun times. Live and learn.
This is why I'm pretty dogmatic about avoiding variable-to-variable comparison in tests.
This is dangerous, and stuff like this has caused a lot of bugs to slide through in my experience (and maybe ops):
expect(account1.balance).to eq account2.balance
This is safe and specific:
expect(account1.balance).to eq 2500
expect(account2.balance).to eq 2500
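A minimal sketch of why the first style is dangerous, in plain Ruby with made-up balance values standing in for the real account objects: if a bug scales both values by the same factor (like the cents-as-dollars ActiveMerchant bug above), the variable-vs-variable comparison still passes.

```ruby
# Simulate the bug: both balances are wrong by the same factor of 100
# (cents charged as dollars), so they are still equal to each other.
account1_balance = 250_000 # should have been 2500
account2_balance = 250_000 # should have been 2500

# Variable-vs-variable check: passes despite the bug.
raise "mismatch" unless account1_balance == account2_balance

# Literal expectation: actually catches the bug.
if account1_balance == 2500
  puts "literal check passed"
else
  puts "literal check caught the bug"
end
# → literal check caught the bug
```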
Unfortunately I've run into a lot of folks who take major issue with the latter because of 'magic numbers' or some similar argument. In tests I want the values being checked to be as specific as possible.

6.5 mpg - I just Googled the average MPG for a fully loaded semi; could be wrong.
$5.313 - Most recent highway diesel price (https://www.eia.gov/petroleum/gasdiesel/). I'd expect this to be high due to fleet discounts and such.
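Plugging those two rough numbers together (both are the estimates above, not authoritative figures), the implied fuel cost per mile works out like this:

```ruby
mpg = 6.5                 # rough average for a fully loaded semi (Googled)
price_per_gallon = 5.313  # EIA highway diesel price; likely above fleet rates

cost_per_mile = price_per_gallon / mpg
puts format("$%.3f per mile", cost_per_mile)
# → $0.817 per mile
```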
Not my experience at all; it was incredibly easy to get this working smoothly with Hotwire and no JavaScript at all (outside the Hotwire lib).
We have a Rails app with thousands of users streaming agentic chat interfaces, and we've had no issues at all with this aspect of things.