I've been on HN since 2010 (lost the password to my first account, alexc04) and I recall a time when it felt like every second article on the front page was a bold, directive pronouncement, or something just aggressively certain of its own correctness.
Like "STOP USING BASH" or "JQUERY IS STUPID" - not in all caps, of course, but it created an unpleasant air and tone (IMO; again, this was like 16 years ago now, so I may have some memory degradation).
Things like Donglegate got real traction here among the anti-woke crew. There have been times when the Venn diagram of 4chan and Hacker News felt like it had a lot more overlap. I've even bowed out of discussion for years at a time, or developed an avoidance reaction to HN's toxic discussion culture.
IMO it has been a LOT better in more recent years, but I also don't dive as deep as I used to.
ANYWAYS - my point is I would be really interested to see a sentiment analysis of HN headlines over the years to try and map out cultural epochs of the community.
When has HN swayed more into the toxic and how has it swayed back and forth as a pendulum over time? (or even has it?)
I wonder what other people's perspective is of how the culture here has changed over time. I truly think it feels a lot more supportive than it used to.
I'm writing a language spec for an LLM runner that has the ability to chain prompts and hooks into workflows.
https://github.com/AlexChesser/ail
I'm writing the tool as a proof of the spec. Still very much pre-alpha, but I do have a working POC: I can specify a series of prompts in my YAML language and execute the chain of commands in a local agent.
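To give a feel for what I mean by chaining, here's a rough sketch of the kind of workflow file I have in mind. This is illustrative only - the field names (`workflow`, `steps`, `name`, `prompt`) are placeholders, and the real AIL syntax may differ:

```yaml
# Illustrative sketch of a two-step prompt chain.
# Field names are placeholders, not the actual AIL spec.
workflow:
  steps:
    - name: refine
      prompt: |
        Clean up the following prompt into a structured,
        LLM-optimized version of the user's intent.
    - name: execute
      # the second step consumes the first step's output
      prompt: "{{ refine.output }}"
```

The idea is that each step's output can feed the next step's prompt, so a whole workflow is just a declared chain of invocations.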
One of the "key steps" I plan on designing is an invocation interceptor. My underlying theory is that we take whatever rough prose our human minds come up with and pass it through a prompt refinement engine:
> Clean up the following prompt in order to convert the user's intent
> into a structured prompt optimized for working with an LLM.
> Be sure to follow appropriate modern standards based on current
> prompt engineering research. For example, limit the use of persona
> assignment in order to reduce hallucinations.
> If the user is asking for multiple actions, break the prompt
> into appropriate steps (etc...)
That interceptor would then forward the well-structured, intent-parsed prompt to the LLM. I could really see a step where we say "take the crap I just said and turn it into CodeSpeak".
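The interceptor pattern above can be sketched in a few lines. This is a minimal illustration, not the actual AIL implementation - the names `refine_prompt`, `intercepted_invoke`, and `REFINEMENT_TEMPLATE` are mine, and `call_llm` stands in for whatever LLM client the runner uses:

```python
# Hypothetical sketch of an invocation interceptor.
# Names here are illustrative, not part of the AIL spec.

REFINEMENT_TEMPLATE = (
    "Clean up the following prompt in order to convert the user's intent "
    "into a structured prompt optimized for working with an LLM. "
    "If the user is asking for multiple actions, break the prompt "
    "into appropriate steps.\n\nUser prompt:\n{raw}"
)

def refine_prompt(raw: str, call_llm) -> str:
    """Run the user's rough prose through a refinement invocation first."""
    return call_llm(REFINEMENT_TEMPLATE.format(raw=raw))

def intercepted_invoke(raw: str, call_llm) -> str:
    """Chain: refine the raw prompt, then execute the refined prompt."""
    refined = refine_prompt(raw, call_llm)
    return call_llm(refined)
```

The point of keeping the interceptor as a separate step is that the refinement prompt itself becomes configurable data in the workflow, rather than logic baked into the runner.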
What a fantastic tool. I'll definitely do a deep dive into this.