In architectural design I think it’s rather pronounced. We already know how to design great buildings for the human environment. There ain’t anything new to learn here, so in order to stand out in the field you have to invent some bullshit.
Well, so you do that: you create Brutalism or something similarly nonsensical, and in order to defend your creation you have to convince a lot of other academics that no, in fact, buildings that look like bunkers, or “clean lines” with “modern materials”, really are the pinnacle of architecture and design.
And as time has gone on we still go and visit Monet’s Gardens while the rest of the design and art world continues circle jerking to ever more abstract and psychotic designs that measurably make people unhappy.
Not all “experts” in various fields are weighted the same. And in some cases being an expert can show you don’t really know too much.
There are a lot of ugly brutalist buildings, but there are a lot of ugly buildings in every style. A lot of them look cheap because they were supposed to be cheap; to a certain extent, looking inexpensive was the intent. In some cases the hostile nature of an institutional building was part of the point, conveying strength instead of offering a pleasant experience, but there are also some quite pleasant brutalist buildings that integrate a lot of nature into the design.
And so, even if Google were still the same thing it was back in 2010, there's no longer anything for "search" to find. And I hope you all downvote me to -50 and scream at me for being an idiot with some snarky-assed abuse detailing how and why I am wrong, because I don't want to be correct about this.
There was an era when there were a lot of completely free sites, because they were mostly academic or passion projects, both of which were subsidized by other means.
Then there were ads. Banner ads, Google's less obtrusive text ads, etc. There were a number of sites completely supported by ads, including a lot of blogs.
And forums. Google+ managed to kill a lot of niche communities by offering them a much easier way to create a community and then shutting the whole thing down.
Now forums have been replaced by Discord and Reddit. Deep project sites still exist but are rarer. Social media has consolidated. Most people don't have personal home pages. There's a bunch of stuff that's paywalled behind Patreon.
And all of that was happening before anyone threw AI into the mix.
So yeah, simply filtering by year published could be a start.
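As an illustration only (and assuming you can get dated results out of some search API or index dump in the first place, which is itself a big assumption), the filtering step is the easy part:

    from datetime import date

    # Hypothetical result records with publication dates; the URLs are made up.
    results = [
        {"url": "https://example.org/2014-forum-thread", "published": date(2014, 6, 1)},
        {"url": "https://example.org/2024-seo-slop", "published": date(2024, 2, 1)},
    ]

    # Cutoff is whatever point you decide the AI-generated flood started,
    # e.g. the ChatGPT launch date.
    AI_CUTOFF = date(2022, 11, 30)

    pre_ai = [r for r in results if r["published"] < AI_CUTOFF]
    print(pre_ai)

The hard part, of course, is getting trustworthy publication dates at all.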
I'm including things like RL metrics as "data" here, for lack of a better umbrella term. Even so, the number of proposed projects I've seen that decided ongoing evaluation of actual effectiveness was a distraction from the more important task of having expensive engineers turn expensive servers into expensive heatsinks is maddening.
Why do we believe that what is now Saudi Arabia was a desert in 11,000 BCE?
The Arabian desert is technically considered to be part of the Sahara, climate-wise, and participates in the same cycle [2].
This article is about evidence for what those transitions looked like, focusing on finds that date to around the end of that particular dry period, pre-Holocene.
> Prior to the onset of the Holocene humid period, little is known about the relatively arid period spanning the end of the Pleistocene and the earliest Holocene in Arabia. An absence of dated archaeological sites has led to a presumed absence of human occupation of the Arabian interior. However, superimpositions in the rock art record appear to show earlier phases of human activity, prior to the arrival of domesticated livestock.
[1]: https://en.wikipedia.org/wiki/African_humid_period
[2]: https://www.nationalgeographic.com/environment/article/green...
> Instead, we found that Claude plans ahead. Before starting the second line, it began "thinking" of potential on-topic words that would rhyme with "grab it". Then, with these plans in mind, it writes a line to end with the planned word.
They described the mechanism that it uses internally for planning [2]:
> Language models are trained to predict the next word, one word at a time. Given this, one might think the model would rely on pure improvisation. However, we find compelling evidence for a planning mechanism.
> Specifically, the model often activates features corresponding to candidate end-of-next-line words prior to writing the line, and makes use of these features to decide how to compose the line.
[1]: https://www.anthropic.com/research/tracing-thoughts-language...
[2]: https://transformer-circuits.pub/2025/attribution-graphs/bio...
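To make the claim a bit more concrete, here's a minimal sketch of the general idea, not Anthropic's attribution-graph method from [2]: use a "logit lens" style probe that projects each layer's hidden state at the end of the first line through the model's own unembedding and checks whether candidate rhyme words are already getting probability mass before any of the second line is written. gpt2 via Hugging Face transformers is a stand-in (Claude's internals aren't publicly accessible), and the couplet and candidate words are made up for illustration.

    # Crude logit-lens-style probe: does the model already "favor" rhyme candidates
    # at the newline, before the second line has been written? Illustration only.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in model
    model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
    model.eval()

    prompt = "He saw a carrot and had to grab it,\n"  # second line not yet written
    candidates = [" rabbit", " habit"]                # plausible "grab it" rhymes
    cand_ids = [tok.encode(w)[0] for w in candidates] # first sub-token of each word

    with torch.no_grad():
        out = model(**tok(prompt, return_tensors="pt"))

    # Project each layer's hidden state at the final (newline) position through the
    # model's own final norm and unembedding, then check how much probability the
    # candidate end-of-next-line words already receive.
    for layer, h in enumerate(out.hidden_states[:-1]):
        probs = torch.softmax(model.lm_head(model.transformer.ln_f(h[0, -1])), dim=-1)
        print(layer, {w: float(probs[i]) for w, i in zip(candidates, cand_ids)})

If a model plans the way the paper describes, you'd expect those candidates to start getting boosted well before the last layer; a model that purely improvised word by word wouldn't show that.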