Just recently I was trying to optimize a 12s index scan. It turned out I didn't need to change anything about the query; I just had to update the table statistics. 12s down to 100ms just from running ANALYZE (no VACUUM needed).
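For anyone unfamiliar, the whole fix is a one-liner; the table and query below are illustrative, not the actual schema:

    -- Refresh the planner's statistics for the table (no VACUUM needed):
    ANALYZE orders;

    -- Confirm the plan and timing actually changed:
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

Stale statistics after a bulk load or a large delete are a common way the planner ends up choosing a bad plan; autovacuum's analyze normally keeps them fresh, but it can lag.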
I wrote a simple CLI tool (bash wrapper around kubectl) to automate diffing kubectl top metrics against the declared requests in the deployment YAML.
I ran it across ~500 pods in production. The average "waste" (allocated vs. actually used) by language was interesting:
Python: ~60% waste (mostly sized for startup spikes, then idling nearly empty).
Java: ~48% waste (Devs seem terrified to give the JVM less than 4Gi).
Go: ~18% waste.
The tool is called Wozz. It runs locally, installs no agents, and just uses your current kubecontext to find the gap between what you pay for (Requests) and what you use (Usage).
It's open source. Feedback welcome.
(Note: The install is curl | bash for convenience, but the script is readable if you want to audit it first).
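For anyone curious about the mechanics, the core comparison per pod is roughly the following. This is a hand-rolled sketch, not the actual Wozz script, and the namespace/pod names are placeholders:

    #!/usr/bin/env bash
    # Rough sketch: declared requests vs. live usage for one pod.
    NS=default
    POD=my-app-5f7c9d

    # What each container asks the scheduler to reserve:
    kubectl -n "$NS" get pod "$POD" \
      -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.resources.requests.cpu}{"\t"}{.resources.requests.memory}{"\n"}{end}'

    # What each container is actually using right now (needs metrics-server):
    kubectl -n "$NS" top pod "$POD" --containers --no-headers

The gap between those two outputs, aggregated across a namespace, is the "waste" figure.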
I understand we're talking about CPU in the case of Python and memory in the case of Java and Go. While anxious overprovisioning of memory is understandable, doing the same for CPU probably indicates a lack of understanding of the difference between CPU limits and CPU requests.
Since I've been out of DevOps for a few years, is there ever a reason not to give each container the ability to spike up to 100% of 1 core? Scheduling of mass container startup should be a solved problem by now.
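For reference, the requests/limits distinction looks like this in a container spec (values purely illustrative): the request is what the scheduler reserves for bin-packing, while the CPU limit is only a throttling ceiling on bursts.

    resources:
      requests:
        cpu: 100m        # reserved at scheduling time
        memory: 256Mi
      limits:
        cpu: "1"         # burst ceiling; CPU above the request is borrowed when the node has slack
        memory: 256Mi    # the memory limit, by contrast, is a hard OOM-kill boundary

Letting CPU burst to a full core (or omitting the CPU limit entirely) is cheap precisely because CPU above the request was never reserved in the first place.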
Before Thompson’s experiment, many researchers tried to evolve circuit behaviors on simulators. The problem was that simulated components are idealized, i.e. they ignore noise, parasitics, temperature drift, leakage paths, cross-talk, etc. Evolved circuits would therefore fail in the real world because the simulation behaved too cleanly.
Thompson instead let evolution operate on a real FPGA device itself, so evolution could take advantage of real-world physics. This was called “intrinsic evolution” (i.e., evolution in the real substrate).
The task was to evolve a circuit that could distinguish between a 1 kHz and a 10 kHz square-wave input, outputting high for one and low for the other.
The final evolved solution:
- Used fewer than 40 logic cells
- Had no recognisable structure, no pattern resembling filters or counters
- Worked only on that exact FPGA and that exact silicon patch.
Most astonishingly:
The circuit depended critically on five logic elements that were not logically connected to the main path. In a clean digital design, removing them should have changed nothing, since they were not wired to the output, yet in practice the circuit stopped functioning when they were removed.
Thompson determined via experiments that evolution had exploited:
- Parasitic capacitive coupling
- Propagation delay differences
- Analogue behaviours of the silicon substrate
- Electromagnetic interference from neighbouring cells
In short: the evolved solution used the FPGA as an analog medium, even though engineers normally treat it as a clean digital one.
Evolution had tuned the circuit to the physical quirks of the specific chip. It demonstrated that hardware evolution could produce solutions that humans would never invent.
lol we are so cooked
Sanity is available immediately if you are willing to be paid less. There are tons of simple, non-SPA, non-stack-on-stack projects out there; they just usually pay a tenth of what the complex stuff does.
Meanwhile the client is telling me it is virtually impossible to find frontend devs willing to write HTML.
I learned Java only after Python, and I remember not being familiar with types yet, so even 'void' and 'String[]' were a bit mysterious. After learning the basic types, that part made sense. Then you start learning about classes and objects, and you understand that main is a static method of this class Main that must get called somehow. As you dive deeper you start learning when and how that call happens. In a weird way, what started as complete, unknowable boilerplate slowly evolved into something sensible as I understood the language better. I have no doubt that seasoned Java devs see a lot more in that invocation than I do.
Good riddance though!
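For readers who never wrote Java, this is the invocation being discussed, annotated with what each piece eventually turns out to mean:

    // The classic Java entry point, piece by piece.
    public class Main {                            // everything has to live inside a class
        public static void main(String[] args) {   // static: the JVM calls it without creating a Main instance
            // void: nothing is returned to the JVM (exit codes go through System.exit)
            // String[] args: the command-line arguments, e.g. {"--verbose", "input.txt"}
            System.out.println("Hello, world");
        }
    }

(The "good riddance" presumably refers to newer Java versions letting simple programs drop most of this boilerplate.)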
1. Programmer A creates a class because they need to create an entry point, a callback, an interface... basically anything, since everything requires a class. Result: we have a class.
2. Programmer B sees a class and carelessly adds instance variables, turning the whole thing mutable. Result: we have an imperative ball of mud.
3. Another programmer adds implementation inheritance for code reuse (because the instance variables made factoring common code out into a plain function impossible without first refactoring the instance variables from step 2 into arguments). Result: we have an imperative ball of mud and a nightmare of arbitrary dynamic dispatch.
At some point reference cycles arise and grandchild objects hold references to their grandparents in order to produce... some flat dictionary later sent over the wire.
4. As more work is done on that bit of code, the situation only worsens. Refactoring is costly and tedious, so it doesn't happen. The misery continues until the code is removed, typically because it tends to accumulate inefficiencies around itself, forcing a rewrite.
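To make the progression concrete, here is a hypothetical sketch of steps 1-3 (all names invented):

    import java.util.ArrayList;
    import java.util.List;

    // Step 1: a class exists only because the callback API demands one.
    class ReportJob implements Runnable {
        public void run() { /* build and send a report */ }
    }

    // Step 2: state gets parked in instance fields; the object is now mutable shared state.
    class ReportJobV2 implements Runnable {
        private final List<String> rows = new ArrayList<>();  // mutated from several places
        private boolean sent;
        public void run() { rows.clear(); sent = false; /* fill rows, send, set sent */ }
    }

    // Step 3: common logic is "reused" via inheritance rather than a plain function,
    // because it now depends on those instance fields instead of taking arguments.
    abstract class AbstractJob implements Runnable {
        protected final List<String> rows = new ArrayList<>();
        protected abstract void fill();               // behaviour chosen by dynamic dispatch
        public void run() { fill(); /* serialize rows and send them over the wire */ }
    }

    class DailyReportJob extends AbstractJob {
        protected void fill() { rows.add("..."); }
    }

Each step is locally reasonable; the mess is only visible in aggregate.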
* VACUUM does not compact your indexes (much).
* VACUUM FULL does. It's slow though.
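A quick illustration of the difference (table and index names made up):

    -- Plain VACUUM marks dead space as reusable inside the table and index files,
    -- but rarely shrinks the indexes themselves.
    VACUUM orders;

    -- VACUUM FULL rewrites the table and rebuilds its indexes compactly,
    -- at the cost of an ACCESS EXCLUSIVE lock for the duration.
    VACUUM FULL orders;

    -- Check whether it was worth it:
    SELECT pg_size_pretty(pg_relation_size('orders_customer_id_idx'));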