Data engineering was software engineering from the very beginning. Then a bunch of business analysts who didn't know anything about writing software got jealous and decided that if you knew SQL/dbt you were a data engineer. I've had to explain too many times that yes, I can set up a CI/CD pipeline, stand up Kafka, or deploy Dagster on ECS, to the point where I think I need to change my title just so it isn't cheapened.
Yep, I specifically asked my company to make sure my job title was not “data engineer” when I was working on data infrastructure, because there was a growing trend of using it to mean “can write some SQL”.
Likewise, we had to steer HR away from “data engineer” because we got very mixed results with candidates.
"Data Engineering" being considered a different role from "regular" SWE predates DBT by... at least one decade? If not two? Probably folks working with Hadoop vs RDMS DBA jobs.
I think even before dbt turned DE into "just write SQL & YAML", there was an appreciable difference between DE and SE. There were definitely some DEs writing a lot of Java/Scala at Spark-heavy companies, but in my experience DEs were doing a lot more platform engineering (similar to what you suggest), SQL, and point-and-click work (just because that was the nature of the tooling). I wasn't really seeing many DEs spending a lot of time in an IDE.
But I think what's interesting from the post is looking at SEs adopting data infra into their workflow, as opposed to DEs writing more software.
Yeah, I've seen large Fortune 100 data and analytics orgs where the majority of folks with data engineering titles are uncomfortable with even the basics of git.
We have these at my company. They refuse to do any infrastructure work, so you have to spoon-feed them ready-to-go databases. It's pretty annoying.
Part of the problem is that a BA/BSA who writes Python, SQL, etc. as part of their day-to-day work gets lumped in with those who don't, and their salary doesn't reflect their skills and work product.
Obviously the same can apply to any given title, and does with data engineers like you pointed out, but it's not as simple as just title inflation.
Titles in software engineering have never mattered less than they do today. Energy spent worrying about titles, or on jealousy over ownership of specific tech, is better channeled into focusing on the customer, on the problem to solve, and on finding the best way to solve it as a team.
Agreed. It's a weird distinction to pay people less for doing certain kinds of work, and you could see high variance between "data engineers": some had only done a course, while others had extensive knowledge of software engineering practices, yet they were considered the same.
The comments here are... interesting, as they indicate a strong split between analysts and those engineers that can operationalize things. I see another dimension to it all.
My title is senior data engineer at GAMMA/FAANG/whatever we're calling them. I have a CS degree and am firmly in the engineering camp. My passion, though, is in using software engineering and computer science principles to make very large-scale data processing as stupid fast as we can. To the extent I can ignore it, I don't personally care much about the tooling and frameworks and such (CI/CD, Airflow, Kafka, whatever). I care about how we're affinitizing our data, how we index it, whether and when we can use data sketches to achieve a good tradeoff between accuracy and compute/memory, and so on.
While there are plenty of folks in this thread bashing analysts, one could also bash other "proper" engineers that can do the CI/CD but don't know shit about how to be efficient with petabyte-scale processing.
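To make that tradeoff concrete, here's a minimal sketch of a K-Minimum-Values distinct-count estimator; the function name and constants are mine, not from any particular production system:

    import hashlib

    def kmv_distinct_estimate(items, k=256):
        """Estimate the distinct count of `items` from only the k smallest
        normalized hash values, i.e. O(k) memory for any input size."""
        smallest = set()
        for item in items:
            # Map each item to a stable pseudo-uniform float in [0, 1).
            h = int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8], "big")
            smallest.add(h / 2**64)
            if len(smallest) > k:
                smallest.remove(max(smallest))
        if len(smallest) < k:
            return len(smallest)  # saw fewer than k distinct items: exact
        # E[k-th smallest of n uniform draws] ~ k/(n+1), so n ~ (k-1)/x_k.
        return int((k - 1) / max(smallest))

    # Larger k costs more memory but tightens the estimate (true answer: 10000).
    print(kmv_distinct_estimate((i % 10_000 for i in range(200_000)), k=1024))

Production systems would usually reach for HyperLogLog (e.g. via Apache DataSketches), but the memory-for-accuracy dial is the same idea.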
People who can utilize the tooling to process petabytes of data efficiently aren't the ones catching flak. The people I'm thinking of basically run massive, inefficient SQL queries and then throw their hands up when one runs slowly or hits an OOM error. They don't even know how to read an explain plan. And if you try to explain to them things like partitioning, indexes, sketches, etc., they are not able to comprehend and argue that it's not their job to learn, and that it's the "proper engineers'" job to scale the processing.
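For anyone who hasn't seen one, pulling a plan is a one-liner; a toy illustration with Python's built-in sqlite3 (table and index names invented):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE events (user_id INTEGER, ts TEXT)")
    con.execute("CREATE INDEX idx_events_user ON events(user_id)")

    # EXPLAIN QUERY PLAN reports whether the engine will scan or use an index.
    for row in con.execute(
        "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM events WHERE user_id = 42"
    ):
        print(row)  # e.g. '... USING COVERING INDEX idx_events_user (user_id=?)'

Every serious engine has an equivalent (e.g. EXPLAIN in Postgres or Redshift).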
My boss at a large company years ago wrote a query for daily stats and then proceeded to run it against the entire event history, every day, for the life of the company, just to get DAU, etc. The solution was to just keep paying more for Redshift until the bill was a few million a year. Suggestions to fix his crap were met with disdain.
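The standard fix is incremental aggregation: compute only the new day's slice and upsert it, instead of rescanning history. A rough sketch with sqlite3 standing in for the warehouse and hypothetical table names:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE events (user_id INTEGER, event_date TEXT);
        CREATE TABLE daily_active_users (event_date TEXT PRIMARY KEY, dau INTEGER);
    """)

    def refresh_dau(con, day):
        # Touch only one day's partition; history is never rescanned.
        con.execute(
            """INSERT OR REPLACE INTO daily_active_users
               SELECT event_date, COUNT(DISTINCT user_id)
               FROM events
               WHERE event_date = ?
               GROUP BY event_date""",
            (day,),
        )

    refresh_dau(con, "2024-01-01")  # run once per day in the scheduler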
>And if you try to explain to them things like partitioning, indexes, sketches, etc then they are not able to comprehend and argue that it’s not their job to learn, and that it’s the “proper engineers” job to scale the processing.
Make up a person and attack him: a literal strawman. You sound pleasant to work with.
>one could also bash other "proper" engineers that can do the CI/CD but don't know shit about how to be efficient with petabyte-scale processing.
But that would be SWEs, no?
I was a 'data engineer' (until they changed the terrible title) at a startup and I ended up having to fight with Spark and Apache Beam at times, eventually contributing back to improve throughput for our use cases.
That's not the same thing as a Business Analyst who can run a little PySpark query.
I mean this very sincerely, but I'm a little lost as to how data engineering is distinct from software engineering. It seems like just a subset of it; my title was software engineer and I've done what sounds like very similar work.
I'm pretty sure the term came from Google (at least that's where I first heard it described) and just referred to a backend engineer with a specialty in this area. Now these roles usually have "distributed systems" in the title, even if you aren't really on the inside of the systems. That, or "systems and infrastructure", "data infrastructure", or "AI/ML infrastructure", or sometimes "MLE" for those kinds of orgs. Or back to good ole "big data", now that it's no longer tacked onto everything.
I've never really seen the distinction between data and software engineering. It's more like front-end vs. back-end. If you're a data engineer and it's all no-code tooling, then you're just an analyst or something.
When I worked at BigCo, it was a totally different world. Data engineers used data-platform tools to do data work, usually for data's sake. Software teams trying to build stuff with data had to finagle their way onto roadmaps.
The difference in titles is more or less about where most of the time is spent. A developer could be doing front-end, back-end, embedded, high-performance computing, systems, game, data analysis, or any other niche work. All of those have different designs, tooling, and ways of thinking that you gain through actually doing them.
I've been in interviews where, after reading my resume, they say, "Oh, you're an embedded developer." Another said front-end; no, back-end; no, a systems developer; another, a desktop developer. In reality, I did all of those to get the job done and create a viable product.
"Data engineering and software engineering are converging" says firm selling analytics products/services. I think the perspective here may be a bit skewed.
One thing that I don't see mentioned but that does bug me: data engineers often use a lot of Python and SQL, even the ones that have heavily adopted software engineering best practices. Yet neither language is great for this.
Python is dynamically typed, which you can patch a bit with type hints, but it's still easy to go to production with incompatible types, leading to crashes in prod. Its interpreted nature also makes it very slow.
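A toy example of that failure mode (the function is made up): the annotation documents intent but enforces nothing at runtime, so upstream schema drift only surfaces in production:

    def to_cents(amount: float) -> int:
        return round(amount * 100)

    # Fine in every test that feeds it floats, but if an upstream column
    # unexpectedly yields a string, the hint does nothing to stop this:
    to_cents("3.50")  # "3.50" * 100 repeats the string; round() then raises TypeError

A static checker like mypy would flag the literal call above, but not values whose types are only known at runtime.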
SQL is pretty much impossible to unit test, yet you will often end up with logic that you want to test, e.g. when optimizing a query.
For SQL I don't have a solution. It's a 50-year-old language that lacks a lot of features you would expect, yet it's also the de facto standard for database access.
For Python, I would say that we should start adopting statically typed, compiled languages instead. Rust has Polars as a dataframe package, but the language itself isn't that easy to pick up. Go is very easy to learn but has no serious dataframe package, so you end up doing a lot of that work yourself with goroutines. Maybe there are better options out there.
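For what it's worth, Polars' Python bindings already get you partway there: schemas are checked when a frame is built, so mismatches fail fast. A small sketch, assuming a recent Polars version and invented column names:

    import polars as pl

    schema = {"user_id": pl.Int64, "amount": pl.Float64}
    df = pl.DataFrame({"user_id": [1, 2], "amount": [9.50, 3.25]}, schema=schema)

    # Feeding strings for user_id here would raise at construction time,
    # rather than producing a silently mistyped column downstream.
    print(df.group_by("user_id").agg(pl.col("amount").sum()))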
When I was most recently at Google (2021-ish) my team owned a bunch of SQL Pipelines that had fairly effective SQL tests. Not my favorite thing to work on, but it was a productive way to transform data. There are lots of open source versions of the same idea, but I have yet to see them accompanied with ergonomic testing. Any recommendations or pointers to open source SQL testing frameworks?
Could you describe what made those tests effective? I just wrote some tools to write concise tests for some analytics queries, and some principles I stumbled on are:
- input data should be pseudorandom, so the chance of a test being “accidentally correct” is minimized
- you need a way to verify only part of the result set. Or, at the very least, a way to write tests so that if you add a column to the result set, your test doesn’t automatically break
In addition, I added CSV exports so you can verify the results by hand, and hot-reload for queries with CTEs: if you change a .sql file it will immediately rerun each CTE incrementally and show you which ones' output changed. A rough sketch of the first two principles is below.
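Something in the spirit of those first two principles, sketched with Python's stdlib (table, column, and test names are invented): seed the fixture pseudorandomly but reproducibly, then assert on just the figure under test via an independent computation:

    import random
    import sqlite3

    def test_daily_revenue():
        rng = random.Random(42)  # pseudorandom but reproducible fixture
        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE orders (day TEXT, amount_cents INTEGER)")
        rows = [("2024-01-01", rng.randint(1, 10_000)) for _ in range(100)]
        con.executemany("INSERT INTO orders VALUES (?, ?)", rows)

        (total,) = con.execute(
            "SELECT SUM(amount_cents) FROM orders WHERE day = '2024-01-01'"
        ).fetchone()

        # Assert on one column via an independent computation; adding columns
        # to the query's result set won't break this test.
        assert total == sum(amount for _, amount in rows)

    test_daily_revenue()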
In your experience, how are folks doing (1)? The post is talking about a framework to add e.g. type safety, schema-as-code, etc. over assets in data infra, in a way that's familiar from Postgres; I'm not familiar with much else out there for that?
Python, R, and Julia all have at least one package that defines a tabular data type. That means we can pass tables to functions, use them in classes, write tests for them, etc.
In all of these packages, the base tabular object you get is a local in-memory table. For manipulating remote SQL database tables, the best full-featured object API is provided by R's dbplyr package, IMHO.
I think Apache Spark, Ibis, and some other big-data packages can be configured to do this too, but IMHO their APIs are not nearly as helpful. For those who (understandably) don't want to use R and need an alternative to dbplyr, Ibis is probably the best one to look at.
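For a flavor of that deferred-expression style, Ibis in Python looks roughly like the following (database path and table name are hypothetical):

    import ibis

    con = ibis.sqlite.connect("warehouse.db")  # hypothetical local database
    events = con.table("events")

    # Build the query as a typed expression; nothing runs until execute().
    daily = (
        events.group_by("event_date")
        .aggregate(dau=events.user_id.nunique())
        .order_by("event_date")
    )
    print(daily.execute())  # compiles to SQL and runs on the backend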
For a foundation in data engineering, I'd recommend the book by Joe Reis and Matt Housley. They did a good job of providing a framework that covers the data engineering lifecycle, software engineering, data management, data architecture, etc. You can check the proposed framework here: [1], [2].
I think they've been fully converged in most strong practitioners for a long time.
There's a specific type of "data engineer" (quotes to indicate this is what they're called by the business, not to contest their legitimacy) that just writes lots of SQL, but they're usually a bad hire for businesses. They're approximately as expensive as what people call platform engineers, but platform engineers in the data space can usually do modelling as well.
When organizations split teams up between the most SWE-type DEs and the pure-SQL ones, the latter all jockey to join the former team, which causes a lot of drama too.
Ridiculous.
There it is! I thought the post title was strange. Thanks for setting the record straight so succinctly.
In general, the choice of language isn't that important; again, if you're using Spark, your DataFrame schema defines the structure whether you're writing Python or not.
Most folks confuse pandas with “data engineering”. It's not. Most data engineering is Spark.
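To the parent's point about the schema traveling with the DataFrame, a minimal PySpark sketch (column names invented); the same plan runs whether the pipeline is authored in Python, Scala, or SQL:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dau_example").getOrCreate()

    # The schema is part of the DataFrame, so downstream code sees the same
    # structure regardless of the source language.
    df = spark.createDataFrame(
        [(1, "2024-01-01"), (2, "2024-01-01"), (1, "2024-01-02")],
        ["user_id", "event_date"],
    )
    df.groupBy("event_date").agg(F.countDistinct("user_id").alias("dau")).show()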
SQL is the most beautiful, expressive, get stuff done language I've used.
It is perfect for whatever data engineering is defined as.
In my experience, testing data pipelines works well when two things hold:
1. You use a real programming language that supports all the abstractions software engineers rely on, not (just) SQL.
2. The data is not too big, so the feedback cycle is not too horrendously slow.
#2 can't ever be fully solved, but testing a data pipeline on randomly subsampled data can help a lot in my experience.
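One hedged sketch of that trick (function name and fraction are illustrative): sample by a hash of the entity key rather than with per-row coin flips, so the same entities survive in every table and joins in the subsampled run still line up:

    import hashlib

    def in_sample(key, fraction=0.01):
        """Deterministic hash-based sampling: True for ~fraction of keys."""
        h = int.from_bytes(hashlib.md5(str(key).encode()).digest()[:8], "big")
        return h / 2**64 < fraction

    # Apply the same predicate to every table keyed by user_id, and the
    # 1% sample of users stays consistent across the whole pipeline.
    rows = [{"user_id": i, "amount": i * 2} for i in range(100_000)]
    sample = [r for r in rows if in_sample(r["user_id"])]
    print(len(sample))  # roughly 1,000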
[1] Fundamentals of Data Engineering: https://www.oreilly.com/library/view/fundamentals-of-data/97...

[2] Fundamentals of Data Engineering Review: https://maninekkalapudi.medium.com/fundamentals-of-data-engi...