I run LLMs against a 500k LoC poker engine and they do well because the engine is modularized into many small parts with a focus on good naming schemes and DRY.
If it doesn't take much context for an LLM to figure out where to direct effort, then codebase size is irrelevant -- what matters instead is module size and the number of modules implicated in any change or problem-solving. With good naming and structure, LLM codebase 'navigation' becomes near-free. If you code in a style that lets an LLM navigate the codebase from just an `ls` output, it can handle things deftly.
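As a toy illustration of navigable-from-`ls` (directory names invented for this example, not from the actual engine):

```shell
# Hypothetical layout: each concern is its own small module, and the
# names alone tell a reader (or an LLM) where a change belongs.
mkdir -p engine/hand_eval engine/pot_split engine/betting_round engine/seat_state
ls engine
```

A model shown only that listing can already guess that a side-pot bug lives in `pot_split` before loading any file contents.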
The LLMification of things has definitely made me embrace the concept of program-as-plugin-loader more so than ever before.
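A minimal sketch of that plugin-loader shape in Python (package and hook names are invented for illustration; a real loader would add error handling and load ordering):

```python
import importlib
import pkgutil

def load_plugins(package):
    """Import every module in `package` and collect its `register` hook.

    The core program stays a thin dispatcher; each plugin module stays
    small enough for an LLM to reason about in isolation.
    """
    plugins = {}
    for info in pkgutil.iter_modules(package.__path__):
        module = importlib.import_module(f"{package.__name__}.{info.name}")
        if hasattr(module, "register"):
            plugins[info.name] = module.register
    return plugins
```

The design point is that the loader never needs to change when behavior is added; only a new, small module appears.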
If you want the same kind of DIY-er box-for-batteries style, I suggest the Trampa offerings: a similar focus on safety and novice-level DIY capability, but much larger capacities and arrangements.
What I do care about is being met with something cutesy in the face of a technical failure anywhere on the net.
I hate Amazon's failure pets, I hate Google's failure mini-games -- it strikes me as an organizational effort to get really good at failing rather than spending that same effort to avoid failures altogether.
It's like everyone collectively decided the standard old Apache 404 Not Found page was too feature-rich and that customers couldn't handle a three-digit error code. So instead we now get "Whoops! There appears to be an error! :) :eggplant: :heart: :heart: <pet image.png>", and no one knows what the hell is going on even though the user just misplaced a number in the URL.
Not for me; I have nothing but a hard time solving CAPTCHAs. About 50% of the time I give up after two tries.
Here's a prompt I'd make for FizzBuzz, for instance. Notice the mixing of English, Python, and Rust. I just write what makes sense to me, and I have a very high degree of confidence that the LLM will produce what I want.
fn fizz_buzz(count):
loop count and match i:
% 3 => "fizz"
% 5 => "buzz"
both => "fizz buzz"
The results are good because, as another replier mentioned, LLMs are good at style transfer when given a rigid ruleset -- but this technique sometimes just means extra work at the operator level to needlessly define something the model is already very aware of.
"write a fizzbuzz fn" will create a function with the same output. "write a fizzbuzz function using modulo" will get you closer to verbatim -- but my point here is that in the grand scheme of "will this get me closer to alleviating typing-caused-RSI-pain" the pseudocode usually only needs to get whipped out when the LLM does something braindead at the function level.
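For reference, here's one plausible rendering of the pseudocode prompt above into runnable Python (my sketch, not actual model output); note the "both" case has to be checked first:

```python
def fizz_buzz(count):
    """Return the FizzBuzz sequence for 1..count as a list of strings."""
    out = []
    for i in range(1, count + 1):
        if i % 15 == 0:        # "both" => divisible by 3 and 5
            out.append("fizz buzz")
        elif i % 3 == 0:
            out.append("fizz")
        elif i % 5 == 0:
            out.append("buzz")
        else:
            out.append(str(i))
    return out

print(fizz_buzz(5))  # → ['1', '2', 'fizz', '4', 'buzz']
```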
The flap size itself keeps the lens in place -- or rather, the elasticity of the underlying tissue does -- until it heals into an encapsulation.
The surgery videos of that procedure make me squeamish in a way other surgery videos don't. Watching an eyeball get deflated and re-inflated with liquid pressure from the surgeon is just unnerving to me; not as bad as watching a glaucoma surgery -- but up there.
There's been an absolute explosion in communication. In the early years of the internet it was pretty exciting and novel to be able to talk to people from other countries. Now it's completely unremarkable.
All this of course has a huge effect on how language develops and is used, and really we're still in the early years of it all (I guess The Smartphone Era starts around 2010 or so).
I've been on my phone/social media/etc. through the entire trend and this is the only time I've ever read the word 'delulu'; I had to look it up.
Might I suggest that tribe matters a lot in this context?
I don't listen to k-pop, I don't watch machinima, and I only knew 'tradwife' from the bullshit politics associated with the concept.
I think Cambridge called these too early. Maybe I'm old, and maybe I'm sheltered, but I never hear these words used in real life aside from a young nephew who was into the toilet thing, and he didn't so much use the word as just scream SKIBIDI while dancing around the room.
I'm fine with being old. Some trends you prefer to see sail away from you.
I would imagine any GPGPU-capable pre-CUDA part probably won't cut it.