What does it mean to say that we humans act with intent? It means that we have some expectation or prediction about how our actions will affect what happens next, and we choose our actions based on how much we like that effect. The ability to predict is fundamental to our ability to act intentionally.
So in my mind: even if you grant all the AI naysayers' complaints about how LLMs aren't "actually" thinking, you can still believe that they will end up being a component in a system which actually "does" think.
I didn't test it and I'm far from an expert; maybe someone can challenge it?
It kinda works, but it's not very reliable and is quite sensitive to which model generated the text.
This page has nice explanations:
https://www.pangram.com/blog/why-perplexity-and-burstiness-f...
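If you want to poke at the perplexity side of this yourself, here's a rough sketch (the model choice and the text are my own placeholders, not what any real detector uses) of scoring a passage's perplexity under GPT-2 with Hugging Face transformers:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    text = "Paste the passage you want to score here."
    enc = tokenizer(text, return_tensors="pt")

    with torch.no_grad():
        # The model's loss is the mean cross-entropy over tokens; exp() gives perplexity.
        loss = model(**enc, labels=enc["input_ids"]).loss

    print(f"perplexity: {torch.exp(loss).item():.1f}")
    # Very low perplexity is often read as a sign of machine-generated text,
    # but as the linked post argues, the signal is weak and model-dependent.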
So a link would be much appreciated, in order to judge the quality of the info. As it is, I'm skeptical that the info is accurate, precisely because mutual funds are so wildly popular among the middle-class people I know (none of whom are in the top 10%, though most of them would likely be in the top 50%).
Mistral Large 3 is ranked 28th, behind all the other major SOTA models. The gap between Mistral and the leader is only 1418 vs. 1491, though. I *think* that means the difference is relatively small.
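If those numbers are Elo-style ratings (an assumption on my part about how the leaderboard is scored), the gap translates directly into a head-to-head preference rate:

    # Sketch, assuming an Elo-style rating scale (not confirmed for this leaderboard).
    def elo_expected_score(r_a: float, r_b: float) -> float:
        """Probability that the model rated r_a is preferred over the one rated r_b."""
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    leader, mistral = 1491, 1418
    print(f"{elo_expected_score(leader, mistral):.2f}")
    # ~0.60: a 73-point gap means the leader is preferred in roughly 60% of
    # pairwise comparisons -- noticeable, but not a blowout.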
Have you tried Polars? It really discourages the inefficient creation of intermediate boolean arrays such as in the code that you are showing.
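For example (a minimal sketch; the DataFrame and column names here are made up), Polars expressions let the filter condition be evaluated inside the engine instead of materializing a separate boolean mask the way a pandas-style df[mask] does:

    import polars as pl

    df = pl.DataFrame({"price": [1.0, 5.0, 9.0], "qty": [10, 3, 7]})

    # pandas-style: build a full boolean array first, then index with it
    # mask = (df["price"] > 2.0) & (df["qty"] < 8)   # intermediate boolean array
    # result = df[mask]

    # Polars: the condition is an expression, evaluated by the query engine
    result = df.filter((pl.col("price") > 2.0) & (pl.col("qty") < 8))
    print(result)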
> There's Julia -- it has serious drawbacks, like slow cold start if you launch a Julia script from the shell, which makes it unsuitable for CLI workflows.
Julia has gotten significantly better over time with regard to startup, especially for plotting. There is definitely a preference for REPL- or notebook-based development, to spread the cost of compilation over many executions. Compilation is increasingly modular, with package-based precompilation as well as ahead-of-time compilation modes. I do appreciate that typical compilation is an implicit step, making the workflow much more similar to a scripting language than a traditionally compiled one.
I also appreciate that traditional ahead-of-time static compilation to a binary executable is now available for deployment.
After a day of development in R or Python, I usually start regretting that I am not using Julia, because I know yesterday's code could be executing much faster if I were. The question really becomes: do I want to pay with time today, or over the lifetime of the project?
The problem is not usually inefficiency, but syntactic noise. Polars does remove that in some cases, but in general it gets even more verbose (apparently by design), which gets annoying fast when doing exploratory data analysis.
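A rough illustration of the verbosity point (column names invented for the example): where pandas lets you lean on terse bracket indexing, Polars wants an explicit pl.col(...) expression for each column reference, which is more regular but noisier to type interactively:

    import pandas as pd
    import polars as pl

    pdf = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
    pldf = pl.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

    # pandas: terse, relies on bracket indexing and quiet in-place mutation
    pdf["ratio"] = pdf["a"] / pdf["b"]

    # Polars: every column goes through pl.col(), and the assignment becomes
    # an explicit with_columns() call with an alias
    pldf = pldf.with_columns((pl.col("a") / pl.col("b")).alias("ratio"))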
If your data is already in a table, and you’re using Python, you’re doing it because you want to learn Python for your next job. Not because it’s the best tool for your current job. The one thing Python has on all those other options is $$$. You will be far more employable than if you stick to R.
And the reason for that is that Python is one of the best languages for data and ML engineering, which is about 80% of what a data science job actually entails.
I'd say dplyr/tidyverse is much more of a separate programming language from R than pandas is from Python.
Most of this is not about Python, it's about matplotlib. If you want the admittedly very thoughtful design of ggplot in Python, use plotnine.
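A quick sketch of what that looks like (data frame and columns invented for the example); plotnine mirrors ggplot2's grammar of aes(), geoms, and + composition:

    import pandas as pd
    from plotnine import ggplot, aes, geom_point, labs

    df = pd.DataFrame({"wt": [2.6, 2.9, 3.2, 3.4], "mpg": [21.0, 22.8, 21.4, 18.7]})

    # ggplot2-style grammar: map aesthetics, add a geom, label the axes
    plot = (
        ggplot(df, aes(x="wt", y="mpg"))
        + geom_point()
        + labs(x="Weight", y="Miles per gallon")
    )
    plot.save("mpg_vs_wt.png")  # or just evaluate `plot` in a notebook to render it

Note that the aesthetic mappings take column names as strings, which ties into the quoting point below.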
> I would consider the R code to be slightly easier to read (notice how many quotes and brackets the Python code needs)
This isn’t about Python, it’s about the tidyverse. The reason you can use this simpler syntax in R is that its non-standard evaluation allows packages to extend the syntax in a way Python does not expose: http://adv-r.had.co.nz/Computing-on-the-language.html
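To make the contrast concrete (hypothetical column names): because Python evaluates arguments eagerly, column references have to be passed as strings or wrapped in lambdas, whereas dplyr can capture bare column names as unevaluated expressions:

    import pandas as pd

    df = pd.DataFrame({"cyl": [4, 6, 8], "mpg": [26.0, 19.7, 15.0]})

    # dplyr can write roughly: filter(df, cyl > 4) %>% mutate(kpl = mpg * 0.425)
    # with bare column names, because R captures the expressions unevaluated.
    # In pandas, every column reference needs quotes/brackets or a lambda:
    result = (
        df[df["cyl"] > 4]
        .assign(kpl=lambda d: d["mpg"] * 0.425)
    )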
In R it's often the case that things for which there are ready-made libraries and recipes are easy, but when those don't exist, things become extremely hard. And the usual approach is that if something is not easy with a library recipe, it just doesn't get done.