Are SELECT queries really most companies' bottleneck? While we find the occasional SELECT that's missing an index or joining something it doesn't need, my biggest struggle is optimizing UPDATEs.
At our busiest times we need to update up to 100,000 rows, ~50 columns, every ~5 seconds, so ideally each update finishes before the next one starts, to avoid deadlocks.
MySQL probably isn't the ideal DB for our use case, but it's what the startup had been using for years before I joined.
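One pattern that can help with big periodic updates like that (a sketch only; the `positions` table and column names are made up, and I'm using sqlite3 here just so it runs anywhere, but the chunking idea applies the same to MySQL/InnoDB) is splitting the one giant UPDATE into primary-key-ordered chunks, so each transaction holds fewer row locks and commits quickly:

```python
import sqlite3

def chunked_update(conn, chunk_size=10_000, max_id=100_000):
    """Update rows in PK-ordered chunks instead of one huge transaction.

    Keeping every batch ordered by primary key means concurrent writers
    acquire row locks in the same order, which reduces deadlock risk.
    """
    for lo in range(1, max_id + 1, chunk_size):
        hi = lo + chunk_size - 1
        with conn:  # one short transaction per chunk, committed on exit
            conn.execute(
                "UPDATE positions SET price = price * 1.01 "
                "WHERE id BETWEEN ? AND ?",
                (lo, hi),
            )

# Tiny demo table so the sketch is runnable end-to-end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE positions (id INTEGER PRIMARY KEY, price REAL)")
conn.executemany(
    "INSERT INTO positions (id, price) VALUES (?, ?)",
    [(i, 100.0) for i in range(1, 101)],
)
chunked_update(conn, chunk_size=25, max_id=100)
```

The trade-off is that the whole update is no longer atomic, so readers can see a half-updated table between chunks; whether that's acceptable depends on the workload.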
Totally depends on the use case, I suppose; we found that in our environment we perform _a lot_ more SELECTs than UPDATEs/DELETEs/INSERTs.
And with some badly optimized SELECTs, the time MySQL had to spend sorting results and reading from disk inefficiently made all our _write_ queries suffer.
By optimizing our SELECTs first, we freed up some CPU bandwidth (it seems?) that can now be spent doing all the other work.
That's good to hear. We have found some suspect SELECTs used in our client-facing API recently. Might be good to double-check that those are running efficiently and not hamstringing the writes.
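For double-checking those suspect SELECTs, running them through EXPLAIN is usually the quickest sanity check. A minimal sketch below, using sqlite3's `EXPLAIN QUERY PLAN` so it's runnable anywhere (MySQL's `EXPLAIN` output looks different, and the `orders` table is hypothetical), but the workflow is the same: look for full table scans where you expected an index lookup:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)

def plan(sql):
    """Return the query plan as one string (last column holds the detail)."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " | ".join(r[-1] for r in rows)

# Without an index, the lookup is a full table scan.
before = plan("SELECT * FROM orders WHERE customer_id = 42")

# Add the index that a client-facing lookup like this would need...
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan("SELECT * FROM orders WHERE customer_id = 42")

print(before)  # e.g. "SCAN orders" -- a full table scan
print(after)   # e.g. "SEARCH orders USING INDEX idx_orders_customer ..."
```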
It's tricky to validate improvements in real time, but I guess validating in prod is good too.
If it's for local testing, you could try to cripple InnoDB as much as possible by setting some absurdly low values; that would almost certainly mean no InnoDB caching is happening: https://gist.github.com/mattiasgeniar/87cd4a10bfcc788d81b51f...
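For the general idea (I haven't reproduced the linked gist; the values below are illustrative, and minimums vary by MySQL version), a my.cnf fragment along these lines starves InnoDB of memory so nearly every read hits disk:

```ini
# Illustrative worst-case-testing values only -- never use near production.
[mysqld]
innodb_buffer_pool_size = 5M         # near the minimum; default is 128M+
innodb_log_buffer_size  = 256K       # tiny log buffer forces frequent flushes
innodb_flush_log_at_trx_commit = 1   # flush the log to disk on every commit
innodb_flush_method = O_DIRECT       # bypass the OS page cache as well
```

The `O_DIRECT` flush method matters here: without it, the OS page cache can quietly serve reads even when InnoDB's own buffer pool is starved, which would defeat the point of the test.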