So is piping more functional programming?
Here's a comparison:
* Method chaining: `df.pipe(f1, a=1, b=2).pipe(f2, c=1)`
* Pipe syntax: `df |> f1(a=1, b=2) |> f2(c=1)`
Piping generally chains functions by passing the result of one call into the next (e.g. the result becomes the first argument of the next call).
Method chaining, as in Python, can't do this through syntax alone: methods live on an object, so a chain can only continue into other methods of that object. A pipe works with any function whose first argument can accept the piped value, not just the object's own methods.
For example, accessing the .style property on a Polars DataFrame returns a great_tables.GT object. But in a piping world, we wouldn't have had to add a style property that just calls GT() on the data; people would simply pipe their DataFrame into GT().
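A minimal sketch of the difference, assuming polars and great_tables are installed (`GT()` and `tab_header()` are real great_tables calls; the `|>` line is hypothetical syntax, shown only as a comment):

```python
import polars as pl
from great_tables import GT

df = pl.DataFrame({"name": ["a", "b"], "value": [1, 2]})

# Today: a `.style` property lives on the DataFrame purely so that
# method chaining can reach great_tables from the object itself.
df.style.tab_header(title="Values")

# With a hypothetical pipe operator, no such property would be needed;
# the DataFrame would flow straight into the GT constructor:
#   df |> GT() |> tab_header(title="Values")
# In current Python, the closest spelling is a plain function call,
# after which great_tables' own method chain takes over:
GT(df).tab_header(title="Values")
```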
"The space between us: stereotype threat and distance in interracial contexts"
It mentions being run at Stanford, and it was pretty popular (Claude Steele discussed it in his book Whistling Vivaldi).
https://psycnet.apa.org/doiLanding?doi=10.1037%2F0022-3514.9...
There's something really inspiring from realizing how far back tables go.
[1]: https://posit-dev.github.io/great-tables/blog/design-philoso...
But if the shape-drawing process isn't random, I think the author's experience of feeling unable to articulate the rules AND gravitating to a set of behaviors is a good example of procedural memory (implicit rather than explicit).
Explicit rules would probably help speed things up, though!
By this I mean that to make confident predictions you need some serious statistics, but psych is one of the least math-heavy sciences (thankfully they recently learned about Bayes and there's a revolution going on). Unlike in physics or chemistry, you have very little control over your experiments.
There's also the problem of measurement. In experimental physics we stress that you can only measure things by proxy: when you measure distance with a ruler, you're not really measuring "a meter" but the ruler's approximation of a meter. This is why we care so much about calibration and uncertainty, taking multiple measurements with different measuring devices (to get statistics on that class of device) and with different techniques (e.g. ruler, laser range finder, etc.). But psych? What the fuck does it even mean "to measure attention"?! It's hard enough dealing with the fact that "a meter" is itself "a construct", but in psych the concepts are far less well defined (i.e. higher uncertainty). And then everything is just empirical?! No causal system even (barely) attempted?! (In case you've ever wondered, this is a glimpse of why physicists struggle in ML: not because of the work, but because of accepting the results. See also Dyson and von Neumann's elephant.)
I've jokingly likened psych to alchemy, meaning proto-chemistry -- chemistry before the atomic model (chemistry is "the study of electrons") -- or to astrology, meaning pre-Kepler astronomy, not the astrology we see today. I do think that's where the field is at, because there are no fundamental laws. That doesn't mean it isn't useful. Copernicus, Brahe, Galileo (a contemporary of Kepler; they fought), and many others did amazing work and are essential figures to astronomy and astrophysics today. But psych is in an interesting boat. There are many tools at its disposal that could really help it make major strides towards determining these "laws". But it'll take a serious revolution and a major push towards some extremely tough math to get there. It likely won't come from ML (which suffers similar issues of rigor), but maybe from neuroscience or plain old stats (econ surprisingly contributes, though more to sociology).

My worry is that the slop has too much momentum and that criticism will be dismissed as calling the researchers lazy, dumb, or incompetent rather than as pointing to the monumental difficulties that are natural to the field (though both may be true, and one can cause the other). But I do hope to see it. Especially as someone in ML. We can really see the need to pin down concepts such as cognition, consciousness, intelligence, reasoning, emotions, desire, thinking, will, and so on. These are not remotely easy problems to solve. But it is easy to convince yourself that you do understand, as long as you stop asking why after a certain point.
And I do hope these conversations continue. Light is the best disinfectant. Science is about seeking truth, not answers. That often requires a lot of nuance, unfortunately. I know it will cause some to distrust science more, but I have the feeling they were already looking for reasons to.
1. Many of the early pioneers in statistics were psychologists.
2. The econ x psych connection is strong (eg econometrics and psychometrics share a lot in common and know of each other)
3. Many of the people I see with math chops trying to do psychology are bad at the philosophy side (eg what is a construct; how do constructs like intelligence get established)
I come from a similar cog psych background as the Bjork Lab, so I'm a big fan of their research, but books like 10 Steps come from instructional design, which is a bit more focused on the big picture (designing a whole course vs individual mechanisms).
I just wanted to say that Rich is the only software developer I know who, when asked to lay out the philosophy of his package, would give you 5,000 years of history on the display of tables. :)
This article explains it pretty well: https://dynomight.net/numpy/
Take two examples of dataframe APIs, dplyr and ibis. Both can run on a range of SQL backends, because dataframe APIs are very similar to SQL DML.
Moreover, the SQL translation of tools like pivot_longer in R is a good illustration of the dynamic behavior dataframe APIs can support, which you'd otherwise reach for something like dbt to implement in your SQL models. duckdb allows dynamic column selection in UNPIVOT, but in some SQL dialects that's impossible; dataframe-API-to-SQL tools (or dbt) enable it in those dialects.
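As a minimal sketch of that duckdb feature, using the duckdb Python package (the table and column names here are made up for illustration):

```python
import duckdb

# A wide table: one column per month.
duckdb.sql("""
    CREATE TABLE sales AS
    SELECT * FROM (VALUES
        ('west', 10, 20, 30),
        ('east',  5, 15, 25)
    ) AS t(region, jan, feb, mar)
""")

# Dynamic column selection: unpivot every column except `region`,
# without listing the month columns by name.
duckdb.sql("""
    UNPIVOT sales
    ON COLUMNS(* EXCLUDE (region))
    INTO NAME month VALUE amount
""").show()
```

In dialects without a `COLUMNS(...)`-style expression you'd have to enumerate (or template) the month columns yourself, which is exactly the gap a dataframe API or dbt macro papers over.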