R, especially dplyr/tidyverse, is so underrated. Working in ML engineering, I see a lot of my coworkers suffering through pandas (or occasionally polars, or even base Python without dataframes) to do basic analytics or debugging; it takes eons and gets complex so quickly that only the most rudimentary checks get done. Anyone working in data-adjacent engineering would benefit from having R/dplyr in their toolkit.
Why not mix R and Python in interactive analysis workflows:
1) Download Positron: https://github.com/posit-dev/positron
2) Set up a Quarto (.qmd) notebook
3) Set up R and Python code chunks in your Quarto document
4a) Use reticulate to spawn a Python session inside R and exchange objects between both languages (https://github.com/posit-dev/positron/pull/4603)
4b) Write a few helper functions that pass objects between R and Python by reading/writing a temporary file (see the sketch below).
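A minimal sketch of option 4b, with hypothetical helper names (in practice reticulate from 4a converts objects directly, but the temp-file route works anywhere):

    # Hypothetical helpers for option 4b: hand a data frame to a Python chunk
    # by writing it to a temporary CSV and passing the path across.
    df_to_tmp <- function(df) {
      path <- tempfile(fileext = ".csv")
      write.csv(df, path, row.names = FALSE)
      path                          # give this path to the Python side
    }

    tmp_to_df <- function(path) {
      read.csv(path)                # read back whatever the Python side wrote
    }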
This is exactly what I do for the vast majority of my academic papers. It combines the power and flexibility of R for statistics, which I agree with the upstream poster is incredibly underrated (especially with the tidyverse), with Python.
Is this what tools like Nextflow or Snakemake aim to do? I don't know, and I'm genuinely curious, because I'm starting to work in bioinformatics, where doing different parts of an analysis pipeline in R and Python seems common, and really necessary if you want to use certain packages.
I'm wondering if I should devote time to learning Nextflow/Snakemake, or whether the solution that you outlined is "sufficient" (I say "sufficient" in quotes because of course, depends on the use case).
As someone who is learning probability and statistics for recreation, I wholeheartedly agree. I wish I had come across R and dplyr/tidyverse/ggplot2 back in college while learning probability and stats. They felt boring and like drudgery to study because I wasn't aware of R to play around with data.
I love R and dplyr. It is very readable and easy to explain to non-programmers. I use it almost every day.
Not exactly on topic, but I am having difficulty debugging it. Maybe I need to brush up on debugging R. Not sure if there is an easy way to add a breakpoint when using VS Code.
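For what it's worth, base R ships an interactive debugger that works in any terminal, independent of the editor; a small sketch:

    f <- function(x) {
      browser()       # execution pauses here; inspect variables, `n` to step, `c` to continue
      sum(x) / length(x)
    }
    f(1:10)

    debug(f)          # alternatively, flag a function so every call drops into the debugger
    undebug(f)        # and turn it off again when done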
What's the story for integrating R code into larger software systems (say, a SaaS product)?
I'm sure part of Python's success is sheer mindshare momentum from being a common computing denominator, but I'd guess the integration story accounts for part of the margin. Your back end may well already be in Python or have interop, reducing stack investment and systems tax.
There are so many options to embed R in any kind of system. Thanks to the C API, there are connectors for any of the traditional languages. There are also RServe and plumber for inter-process interaction. Managing dependencies is also super easy.
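As an illustration, plumber turns annotated R functions into HTTP endpoints; a minimal sketch with hypothetical endpoint and field names:

    # plumber.R
    #* Return summary statistics for a numeric vector posted as JSON
    #* @post /summarise
    function(values) {
      values <- as.numeric(values)
      list(mean = mean(values), sd = sd(values), n = length(values))
    }

    # run it with: plumber::plumb("plumber.R")$run(port = 8000)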
My employer is using R to crunch numbers, embedded in a large system based on microservices.
The only thing to keep in mind is that most people writing R are not programmers by trade so it is good to have one person on the project who can refactor their code from time to time.
I am working on a system at present where the data scientist has done the calculations in an R script. We agreed upon an input data.frame and an output csv as our 'interface'.
I added the SQL query to the top of the R script to generate the input data.frame and my Python code reads the output CSV to do subsequent processing and storage into Django models.
I use a subprocess running Rscript to run the script.
It's not elegant but it is simple. This part of the system only has to run daily so efficiency isn't a big deal.
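For what it's worth, the R side of that kind of interface can stay tiny; a sketch with hypothetical connection details, table, and function names:

    # run_daily.R -- invoked via `Rscript run_daily.R` by the Python side
    library(DBI)

    con   <- DBI::dbConnect(RPostgres::Postgres(), dbname = "analytics")  # hypothetical connection
    input <- DBI::dbGetQuery(con, "SELECT * FROM daily_readings")         # the agreed input data.frame
    DBI::dbDisconnect(con)

    result <- run_calculations(input)    # the data scientist's existing logic

    write.csv(result, "output.csv", row.names = FALSE)   # the agreed output CSV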
It's getting a lot better, but 10 years ago, companies running R in production would describe it as "so we figured out a way".
The problem is pinning dependencies. So while an R analysis written using base R 20 or 30 years ago works fine, something using dplyr is probably really difficult to get up and running.
At my old work we took a copy of CRAN when we started a new project and added dependencies from that snapshot.
So instead of asking for dplyr version x.y, as you'd do ... anywhere else, we added dplyr as it and its dependencies were stored on CRAN on that specific date.
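A sketch of how that date-pinning can look today, assuming Posit Package Manager's dated snapshot URLs (the date is illustrative):

    # Point R at CRAN as it existed on a given date, then install as usual;
    # dplyr and all of its dependencies resolve to that day's versions.
    options(repos = c(CRAN = "https://packagemanager.posit.co/cran/2023-06-01"))
    install.packages("dplyr")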
We also did a lot of systems programming in R, which I thought of as weird, but for the exact same reason as you are saying for Python.
But R is really easy to install, so I don't see why you can't set up a step in your pipeline that runs R - or even both R and Python. They can read dataframes from each other's memory.
This is, I think, the main reason R has lost a lot of market share to Pandas. As far as I know, there's no way to write even a rudimentary web interface (for example) in R, and if there is, I think the language doesn't suit the task very well. Pandas might be less ergonomic for statistical tasks, but when you want to do anything with the statistical results, you've got the entire Python ecosystem at your fingertips. I'd love to see some way of embedding R in Python (or some other language).
Tangentially, R can help produce living Markdown documents (.Rmd files). A couple of ways include pandoc with knitr[0] or my FOSS text editor, KeenWrite[1]. I've kept the R syntax in KeenWrite compatible with knitr. Living documents as part of a build process can produce PDFs that are always up-to-date with respect to external data sources[2], which includes source code.
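The idea in miniature, as a hypothetical .Rmd fragment: inline R expressions and code chunks are re-evaluated on every build, so the prose can't drift from the data:

    ---
    title: "Living report"
    output: pdf_document
    ---

    As of `r Sys.Date()` the dataset has `r nrow(mtcars)` rows.

    ```{r}
    summary(mtcars$mpg)
    ```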
Last time I was working on something complex, I was able to knit from Rmd to md, and then use my usual pandoc defaults, which was quite neat. Big recommendation on that workflow.
I will say, after 15 years of messing with this: with LLMs I now just do it all in Python. But I still miss the elegance and simplicity of R for data manipulation and analysis, especially the dplyr semantics. They really nailed it. I think they got crushed by the namespace / import system. There's something about R that makes you so fluid and intuitive. But the engineering, the efficiency, I get with Python now; I can't go back.
Funny you mention namespacing: R 4.5.0 was just released today with the new `use()` function, which allows you to import just what you need instead of clobbering your global namespace, equivalent to Python's `from x import y` syntax.
I agree with all of your comment… except the very last bit. Do you really find Python to be more efficient at engineering stuff than R? And especially speed, which in my experience at least is broadly the same, if not faster, with R, because it integrates more easily with Rust and C++?
Not OP, but I think Python is very far above R for engineering stuff. I built my early career on R and ran R user groups. R is great for one-off analyses, or low-volume controlled repetition like running the same report with new inputs.
For engineering stuff I want strong static analysis (type hints, pydantic, mypy), observability (logfire, structlog), and support (can I upload a package to my cloud package registry?).
For ML stuff, I want the libraries everyone else uses (PyTorch, Hugging Face), because popularity brings a lot of development, documentation, and obscure GitHub issues that the R clones lack.
Userbase matters. In R, hardly any users are doing engineering; most R code only needs to run successfully one time. The ecosystem reflects that. The Python-based ML world has the same problem, but the broader sea of Python engineers helps counterbalance it.
On further reflection, I think the sweet spot for R for me has always been prototyping and exploration: where you don't exactly know what the logic needs to be, or how the data needs to be cut to get at what you want. R is really, really good at that rapid kind of exploration. It's closer to math for me than software engineering, and if I had a job where I could just do that all day I'd be pretty happy at this point in my life. It's the case where you can't use a pivot table in Google Sheets or Excel to get the cut you want, or the logic is too complex to do in Google Sheets. For that sweet spot, which is still a broad niche, R is excellent and shines.
Everything I need can get done in Python, so I don't even need to deal with Rust and C++. Adding language interop between R and C++ would just be another thing on my plate, so I stick to Python and pay the cost of less elegant code for data manipulation, which I am okay with because now I just need to read it, not write it.
There's a ton more Python code out there, so LLM reliability on Python code just makes my life easier. R was great and still is, but my world is now more than just data eng, model fitting, and viz. I have to deal with operationalizing and working with people who aren't just data scientists, and most orgs don't have the luxury of an easy production R system. So I can get my Python code over the line and trust a good engineer will be okay meshing it into the production stack, which is likely heavy on Python (instead of saying "oh, we don't work with R, we do Python and Java, so it will take 3-5x longer").
Another sad truth is that the cool ML kids all want to do PyTorch deep learning training / post-training / RLHF / PPO / gdpr gtfo, so you are not real hardcore ML if you only do R. I know it's stupid, but the world is kind of like that.
You want to hire people who want to build their careers on the cool stack. I know it's not the cool stuff the hackers here play with, but for real-world applications I have a lot of other considerations.
Having seen Julia proposed as the nemesis of R (not Python, which is too political and non-lispy):
>the creator of the R programming language, Ross Ihaka, who provided benchmarks demonstrating that Lisp’s optional type declaration and machine-code compiler allow for code that is 380 times faster than R and 150 times faster than Python
(Would especially love an overview of the controversies in graphics/rendering)
In my opinion, Julia has the best alternative to dplyr in its DataFrames.jl package [1]. The syntax is slightly more verbose than dplyr because it's more explicit, but in exchange you get data transformations that you can leave for 6 months and when you come back you can read and understand very quickly. When I used R, if I hadn't commented a pipeline properly I would have to focus for a few minutes to understand it.
In terms of performance, DF.jl seems to outperform dplyr in benchmarks, but for day to day use I haven't noticed much difference since switching to Julia.
There are also APIs built on top of DF.jl, but I prefer using the functions directly. The most promising seems to be Tidier.jl [2] which is a recreation of the Tidyverse in Julia.
In Python, Pandas is still the leader, but its API is a mess. I think most data scientists haven't used R, and so they don't know what they're missing out on. There was the Redframes project [3] to give Pandas a dplyr-esque API which I liked, but it's not being actively developed. I hope Polars can keep making progress in replacing Pandas, but it's still not quite as good as dplyr or even DF.jl.
For plotting, Julia's time to first plot has gotten a lot better in recent versions; from memory it has gone from something like 20 seconds a few years ago down to 3 seconds now. It'll never be as fast as matplotlib, but if you leave your terminal window open you only pay that price once.
I actually think the best thing to come out of Julia recently is AlgebraOfGraphics.jl [4]. To me it's genuinely the biggest improvement to plotting since ggplot which is a high bar. It takes the ggplot concept of layers applied with the + operator and turns it into an equation, where + adds a layer on top of another, and the * operator has the distributive property, so you can write an expression like data * (layer_1 + layer_2) to visualise the same data with two visualisations. It's very powerful, but because it re-uses concepts from maths that you're already familiar with, it doesn't take a lot of brain space compared to other packages I've used.
Thanks for the links. FWIW, the link for 4 (aog) is currently 404'd, which is amusing because the site is still up. They just seem to have deleted their own top level index.html file. Anyway, this works:
The comment you linked is a response to my comment where I tried (and failed) to articulate the world in which R is situated. I finally "RTFA", and the benchmark, I think, perfectly demonstrates why conversations about R tend not to be very productive. The benchmark is of a hypothetical "sum" function. In R, if you pass a vector of numbers to the sum function, it will call a C function, sum. That's it. In R, when you want to do lispy, tricky metaprogramming stuff, you do that in R; when you want stuff to go fast, you write C/C++/Rust extensions. These extensions are easy to write in a really performant way because R objects are often thinly wrapped contiguous arrays. I think in other programming language communities, the existence of library code written in another language is seen as some kind of sign of failure. R programmers just do not see the world that way.
Julia is what I mostly use. I used R in the past, but I was constantly puzzled by the documentation. It did not work for me. Sometimes I fire up the REPL for some interpolation, but I limit myself to what I understand.
Totally agree. R is pure pirate energy. Half the functions are hidden on purpose, the other half only work if you chant the right incantation while facing the CRAN mirror at dawn.
Thanks! Paid books do note (above the link) that they're paid, but I agree, a better visual might help. I'm thinking of removing the paid books where many free alternatives are available.
One of my students codes exclusively in Python, but in most cases newer econometrics methods are implemented in R first, so he just uses rpy2 to call R from his Python code. It works great. For example, he recently performed Bayesian synthetic control using the R code shared by the authors. It required a Stan backend, but everything worked.
There is also https://www.rplumber.io/, which lets you turn R functions into REST APIs. Calling R from Python this way will not be as flexible as using rpy2, but it keeps R in its own process, which can be advantageous if you have certain concerns relating to threading or stability. Also, if you're running on Windows, rpy2 is not officially supported and can be hard to get working.
Not sure what you mean by "python backend". If you mean calling R from Python, rpy2 mentioned in the other comment works well. If you mean the other direction, RStudio has this all built in. This is probably the best place to start: https://rstudio.github.io/reticulate/articles/calling_python...
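A minimal sketch of calling Python from R with reticulate (assumes a Python installation with numpy available):

    library(reticulate)
    np <- import("numpy")
    x  <- np$linspace(0, 1, 5)    # call Python; the result converts to an R vector
    mean(x)

    py_run_string("greeting = 'hello from python'")
    py$greeting                   # objects created in Python are visible from R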
Here's the GitHub repo for the package: https://github.com/b-rodrigues/rixpress/tree/master
and here's an example pipeline https://github.com/b-rodrigues/rixpress_demos/tree/master/py...
Well, better late than never I guess.
the ease of doing `model <- lm(speed~dist, cars)` and then `predict(model, data.frame(dist = c(42)))` is unparalleled.
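A self-contained version of that, using the built-in cars dataset:

    model <- lm(speed ~ dist, data = cars)                    # fit on the built-in cars data
    summary(model)$r.squared                                  # quick look at fit quality
    predict(model, newdata = data.frame(dist = c(42, 120)))   # predictions for new distances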
[0]: https://yihui.org/knitr/
[1]: https://keenwrite.com/
[2]: https://youtu.be/XSbTF3E5p7Q?list=PLB-WIt1cZYLm1MMx2FBG9KWzP...
https://dave.autonoma.ca/blog/2019/07/11/typesetting-markdow...
However, most workflows and nearly all editors don't support interpolated variables. To address this, first I developed a YAML preprocessor:
https://repo.autonoma.ca/yamlp.git
Then I grew tired of editing YAML files, piping files together, and maintaining bash scripts. So next, I developed KeenWrite to allow use of interpolated variables directly within documents from a single program. The screenshots show how it works:
https://keenwrite.com/screenshots.html
e.g. to avoid dplyr overriding base::filter:

    use("dplyr", c("mutate", "summarize"))
https://news.ycombinator.com/item?id=42785785
[1] https://dataframes.juliadata.org/
[2] https://github.com/TidierOrg/Tidier.jl
[3] https://github.com/maxhumber/redframes
[4] https://aog.makie.org/
https://aog.makie.org/v0.10.3/
BTW I am a senior Java / Python developer
https://www.burns-stat.com/pages/Tutor/R_inferno.pdf
The invention of the Tidyverse freed new R programmers from 126 pages of gotchas.
Tell them to learn to use the tidyverse instead. For most of them, that will be all they ever need.
https://bookdown.org/ndphillips/YaRrr/
One comment: it would be good to distinguish between books that are free and books that you have to pay for.
I've been tempted to port to Python, but some of the stats libraries have no good counterparts, so is there an ergonomic way to do this?