I can't say I've ever experienced this. Are you sure it's not related to other things in the script?
I wrote a single-file Python script that's a few thousand lines long. It can process a 10,000-line CSV file and do a lot of calculations, to the point where I wrote an entire CLI income/expense tracker with it[0].
The end-to-end run takes 100ms to process those 10k lines, measured with `time`. That's on hardware from 2014 using Python 3.13, too. It takes ~550ms to fully process 100k lines. I spent zero time optimizing the script, but I did try to avoid common pitfalls (deeply nested loops, etc.).
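For reference, a stripped-down version of that kind of workload looks something like this. This is a minimal sketch, not the actual tracker; the script name `tally.py` and the `expenses.csv`/`amount` arguments are made up for illustration:

```python
# tally.py - minimal sketch: sum one numeric column of a CSV file.
# Measure end-to-end latency with the shell's `time`, e.g.:
#   time python3 tally.py expenses.csv amount
import csv
import sys

def total(path: str, column: str) -> float:
    with open(path, newline="") as f:
        return sum(float(row[column]) for row in csv.DictReader(f))

if __name__ == "__main__":
    print(f"{total(sys.argv[1], sys.argv[2]):.2f}")
```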
This benchmark is a little outdated, but the problem remains the same.
Interpreter initialization: Python builds and initializes its entire virtual machine and built-in object structures at startup. Native programs already have their machine code ready and need very little runtime scaffolding.
Dynamic import system: Python’s module import machinery dynamically locates, loads, parses, compiles, and executes modules at runtime. A compiled binary has already linked its dependencies.
Heavy standard library usage: Many Python programs import large parts of the standard library or third-party packages at startup, each of which runs top-level initialization code.
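You can see that import cost directly with CPython's built-in import profiler (the `-X importtime` flag, available since Python 3.7); `json` here is just an arbitrary stdlib module:

```
python3 -X importtime -c 'import json'
```

Each output line reports the self and cumulative microseconds spent importing a module, so heavy startup imports stand out immediately.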
This is especially noticeable if you run not on an M1 Ultra but on slower hardware. Here are the results on a Raspberry Pi 3:
C: 2.19 ms
Go: 4.10 ms
Python3: 197.79 ms
That's about 200ms of startup latency for a `print("Hello World!")` in Python 3.
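If you want to reproduce this on your own hardware, the measurement is roughly the following (timings will vary by machine; a compiled no-op like `/bin/true` gives a baseline for process launch alone):

```
time python3 -c 'print("Hello World!")'
time /bin/true
```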
Is this a push to override the meaning and erase the hallucination critique?
There are other terms that are similarly controversial, such as "thinking models". When you describe an LLM as "thinking", it often triggers debate because people interpret the term differently and bring their own expectations and assumptions into the discussion.
In contrast, a poorly designed microservice can be replaced much more easily. You can identify the worst-performing and most problematic microservices and replace them selectively.
Starting point: In 1965, the most advanced chips contained roughly 50 to 100 transistors (e.g., early integrated logic).
Let's take 1965 -> 2025, which is 60 years.
Number of doubling intervals: 60 years / 2 years per doubling = 30 doublings
So the theoretical prediction is:
Transistors in 2025 (predicted) = 100 × 2^30 ≈ 107 billion transistors
The Apple M1 Ultra has 114 billion transistors.
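The arithmetic checks out; here's a quick sanity check in Python, using the same starting values as above:

```python
# Moore's law sanity check: 100 transistors in 1965, doubling every 2 years.
start = 100
doublings = (2025 - 1965) // 2   # 30 doublings
predicted = start * 2 ** doublings
print(f"{predicted:,}")          # 107,374,182,400, i.e. ~107 billion
```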
You should also buy the sealing lids with silicone gaskets.
Wilhoit was right about everything: America is in-groups who are protected by the law but not bound by it, alongside out-groups who are bound by the law but not protected by it.
You and I? An out-group. And although we make a lot of jokes about leopards ripping faces off, MAGA know they're an out-group too; it's just that as long as somebody else is getting it worse, they're fine with that.
All Congress member trades are public. There are even ETFs now that track those trades, so you can just buy such an ETF and get a one-to-one replication of the Congress member portfolio.