CalRobert · 3 days ago
Data engineering was software engineering from the very beginning. Then a bunch of business analysts who didn't know anything about writing software got jealous and said that if you knew SQL/DBT you were a data engineer. I've had to explain too many times that yes, indeed, I can set up a CI/CD pipeline, stand up Kafka, or deploy Dagster on ECS, to the point where I think I need to change my title just to not be cheapened.
kentm · 3 days ago
Yep, I specifically asked my company to make sure my job title was not “data engineer” when working on data infrastructure, because there was a growing trend of using it to mean “can write some sql”.

Likewise, we had to steer HR away from “data engineer” because we got very mixed results with candidates.

itsoktocry · 3 days ago
Ironic, since "Data Engineers" are probably far more in demand right now than "Software Engineers".
majormajor · 3 days ago
"Data Engineering" being considered a different role from "regular" SWE predates DBT by... at least one decade? If not two? Probably folks working with Hadoop vs RDMS DBA jobs.
snthpy · 3 days ago
In work yes, but as a title? I only started seeing it called that around dbt origin.
sdairs · 3 days ago
I think even before dbt turned DE into "just write sql & yaml", there was an appreciable difference in DE vs SE. There were definitely some DEs writing a lot of Java/Scala if they were in Spark-heavy companies, but in my experience DEs were doing a lot more platform engineering (similar to what you suggest), SQL, and point-and-click (just because that was the nature of the tooling). I wasn't really seeing many DEs spending a lot of time in an IDE.

But I think what's interesting from the post is looking at SEs adopting data infra into their workflow, as opposed to DEs writing more software.

craneca0 · 3 days ago
yeah, I've seen large Fortune 100 data and analytics orgs where the majority of folks with data engineering titles are uncomfortable with even the basics of git.
vjvjvjvjghv · 3 days ago
We have these at my company. They refuse to do any infrastructure work so you have to spoon feed the databases to them ready to go. It’s pretty annoying.
Foobar8568 · 3 days ago
Or the basics of SQL...
omgwtfbyobbq · 2 days ago
Part of the problem is that a BA/BSA who writes Python, SQL, etc. as part of their day-to-day work will get lumped in with those who don't, and their salary doesn't reflect their skills and work product.

Obviously the same can apply in any given title, and does with data engineers like you pointed out, but it's not as simple as just title inflation.

mrugge · 3 days ago
Titles in software engineering have never mattered less than they do today. Energy spent worrying about titles, or on jealousy over ownership of specific tech, is better channeled into focusing on the customer, on the problem to solve, and on finding the best way to solve it as a team.
isaacremuant · 3 days ago
Agreed. It was a weird distinction to pay people less for doing certain things, and you could see high variance among "data engineers": some had only done a course, others had extensive knowledge of software engineering practices, yet they were all considered the same.

Ridiculous.

mandeepj · 3 days ago
> Data engineering was software engineering from the very beginning.

There it is! I found the post title strange. Thanks for setting the record straight so succinctly.

mynameisash · 3 days ago
The comments here are... interesting, as they indicate a strong split between analysts and those engineers that can operationalize things. I see another dimension to it all.

My title is senior data engineer at GAMMA/FAANG/whatever we're calling them. I have a CS degree and am firmly in the engineering camp. My passion, though, is in using software engineering and computer science principles to make very large-scale data processing as stupid fast as we can. To the extent I can ignore it, I don't personally care much about the tooling and frameworks and such (CI/CD, Airflow, Kafka, whatever). I care about how we're affinitizing our data, how we index it, whether and when we can use data sketches to achieve a good tradeoff between accuracy and compute/memory, and so on.

While there are plenty of folks in this thread bashing analysts, one could also bash other "proper" engineers that can do the CI/CD but don't know shit about how to be efficient with petabyte-scale processing.

kentm · 3 days ago
People who can utilize the tooling to process petabytes of data efficiently aren’t the ones that are catching flack. The people I’m thinking of basically run massive inefficient SQL queries and then throw their hands up when it runs slowly or gets an oom error. They don’t even know how to do an explain plan. And if you try to explain to them things like partitioning, indexes, sketches, etc then they are not able to comprehend and argue that it’s not their job to learn, and that it’s the “proper engineers” job to scale the processing.
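The explain-plan point is easy to demonstrate even on SQLite (standing in here for whatever warehouse is actually in play): the same filter goes from a full scan to an index search once an index exists. A minimal sketch, with hypothetical table and index names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INTEGER, ts TEXT)")

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN is SQLite's flavor of EXPLAIN; every engine has one
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM events WHERE user_id = 42"
before = plan(query)   # full table scan, e.g. "SCAN events"
con.execute("CREATE INDEX idx_events_user ON events (user_id)")
after = plan(query)    # e.g. "SEARCH events USING INDEX idx_events_user (user_id=?)"

print(before)
print(after)
```

The same two-line check (read the plan, add the obvious index, read it again) is exactly what the people in question never do.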
CalRobert · 3 days ago
My boss at a large company years ago wrote a query for daily stats and then proceeded to run it on the entire event history, every day, for the life of the company, just to get DAU, etc. The solution was to just keep paying more for Redshift until the bill was a few million a year. Suggestions to fix his crap were met with disdain.

That job taught me a lot.
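For DAU specifically, the fix is the classic incremental rollup: scan only the new day's slice and upsert it into a small aggregate table, instead of re-scanning all history every day. A sketch on SQLite, with invented table names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE events (user_id INTEGER, day TEXT);
CREATE TABLE daily_active (day TEXT PRIMARY KEY, dau INTEGER);
""")
con.executemany("INSERT INTO events VALUES (?, ?)", [
    (1, "2024-01-01"), (2, "2024-01-01"), (1, "2024-01-01"),
    (1, "2024-01-02"), (3, "2024-01-02"),
])

def rollup_day(con, day):
    # touches only one day's slice of events, not the whole history
    con.execute(
        "INSERT OR REPLACE INTO daily_active "
        "SELECT day, COUNT(DISTINCT user_id) FROM events WHERE day = ? GROUP BY day",
        (day,),
    )

rollup_day(con, "2024-01-01")
rollup_day(con, "2024-01-02")
dau = con.execute("SELECT day, dau FROM daily_active ORDER BY day").fetchall()
print(dau)  # [('2024-01-01', 2), ('2024-01-02', 2)]
```

On a partitioned warehouse, the `WHERE day = ?` predicate is what turns a multi-million-dollar full scan into a single-partition read.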

itsoktocry · 3 days ago
>And if you try to explain to them things like partitioning, indexes, sketches, etc then they are not able to comprehend and argue that it’s not their job to learn, and that it’s the “proper engineers” job to scale the processing.

Make up a person and attack him, literal strawman. You sound pleasant to work with.

VirusNewbie · 3 days ago
>one could also bash other "proper" engineers that can do the CI/CD but don't know shit about how to be efficient with petabyte-scale processing.

But that would be SWEs no?

I was a 'data engineer' (until they changed the terrible title) at a startup and I ended up having to fight with Spark and Apache Beam at times, eventually contributing back to improve throughput for our use cases.

That's not the same thing as a Business Analyst who can run a little pyspark query.

tdb7893 · 3 days ago
I mean this very sincerely, but I'm a little lost as to how data engineering is distinct from software engineering. It seems like just a subset of it; my title was software engineer and I've done what sounds like very similar work.
briankelly · 3 days ago
I’m pretty sure the term came from Google (at least that is where I heard it first described) and just referred to a backend engineer with speciality in this area. Now usually these roles have “distributed systems” in the title, even if you aren’t really on the inside of the systems. That or “systems and infrastructure”, “data infrastructure”, or “AI/ML infrastructure” or sometimes “MLE” for those kinds of orgs. Or back to good ole “big data” now that it’s no longer tacked on everything.

giantg2 · 3 days ago
I've never really seen the distinction between data and software engineering. It's more like front-end vs backend. If you're a data engineer and it's all no code tooling, then you're just an analyst or something.
flexiflex · 3 days ago
When I worked at bigCo, it was a totally different world. Data engineers used data-platform tools to do data work, usually for data's sake. Software teams trying to build stuff with data had to finagle their way onto roadmaps.
sdairs · 3 days ago
this has been my experience too
yndoendo · 2 days ago
The difference in titles is more or less where most of the time is spent. A developer could be doing front-end, back-end, embedded, high-performance computing, systems, game, data analysis, or any other niche work. All of those have different design, tooling, and ways of thinking that you gain through actually doing them.

I've been in interviews where, after reading my resume, they'd say: oh, you're an embedded developer. Another said front-end, no, back-end, no, a systems developer, and another said desktop developer. In reality, I did all of those to get the job done and create a viable product.

SrslyJosh · 3 days ago
"Data engineering and software engineering are converging" says firm selling analytics products/services. I think the perspective here may be a bit skewed.
jochem9 · 3 days ago
One thing that I don't see mentioned but that does bug me: data engineers often use a lot of Python and SQL, even the ones that have heavily adopted software engineering best practices. Yet neither language is great for this.

Python is dynamically typed, which you can patch a bit with type hints, but it's still easy to go to production with incompatible types, leading to runtime crashes in prod. Its interpreted nature also makes it very slow.
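One common mitigation is to validate and coerce at the pipeline boundary, so type mismatches surface at ingest rather than mid-run, and type hints let mypy check everything downstream. A minimal sketch (the `Event` record and `parse_row` helper are invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    user_id: int
    amount: float

def parse_row(raw: dict) -> Event:
    # fail fast at the boundary instead of deep inside the pipeline;
    # int()/float() raise immediately on malformed input
    return Event(user_id=int(raw["user_id"]), amount=float(raw["amount"]))

# strings straight out of a CSV get coerced into a well-typed record
rec = parse_row({"user_id": "42", "amount": "3.5"})
print(rec)  # Event(user_id=42, amount=3.5)
```

Everything past the boundary then operates on `Event`, which static checkers can actually verify.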

SQL is pretty much impossible to unit test, yet often you will end up with logic that you want to test, e.g. to optimize a query.

For SQL I don't have a solution. It's a 50-year-old language that lacks a lot of features you would expect. It's also the de facto standard for database access.

For Python, I would say we should start adopting statically typed, compiled languages instead. Rust has Polars as a dataframe package, but the language itself isn't that easy to pick up. Go is very easy to learn but has no serious dataframe package, so you end up doing a lot of that work yourself in goroutines. Maybe there are better options out there.

orochimaaru · 3 days ago
If you’re using some variety of Spark for your data engineering, then Scala is an option too.

In general, the choice of language isn’t that important: if you’re using Spark, your dataframe schema defines the structure, Python or not.

Most folks confuse pandas with “data engineering”. It’s not. Most data engineering is Spark.

rovr138 · 3 days ago
in Spark, don't pyspark and sql both still get translated to Scala?
sbrother · 3 days ago
When I was most recently at Google (2021-ish) my team owned a bunch of SQL Pipelines that had fairly effective SQL tests. Not my favorite thing to work on, but it was a productive way to transform data. There are lots of open source versions of the same idea, but I have yet to see them accompanied with ergonomic testing. Any recommendations or pointers to open source SQL testing frameworks?
physicles · 3 days ago
Could you describe what made those tests effective? I just wrote some tools to write concise tests for some analytics queries, and some principles I stumbled on are:

- input data should be pseudorandom, so the chance of a test being “accidentally correct” is minimized

- you need a way to verify only part of the result set. Or, at the very least, a way to write tests so that if you add a column to the result set, your test doesn’t automatically break

In addition, I added CSV exports so you can verify the results by hand, and hot-reload for queries with CTEs — if you change a .sql file then it will immediately rerun each CTE incrementally and show you which ones’ output changed.
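Those principles can be sketched as a self-contained SQL unit test: seeded pseudorandom input so the fixture can't be accidentally aligned with the query, plus an independent oracle computed in plain Python. SQLite stands in for the real engine, and the table and column names are invented:

```python
import random
import sqlite3

rng = random.Random(1234)   # seeded: deterministic, but not hand-picked values
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (region TEXT, amount INTEGER)")
rows = [(rng.choice(["eu", "us", "apac"]), rng.randint(1, 100)) for _ in range(50)]
con.executemany("INSERT INTO orders VALUES (?, ?)", rows)

# the query under test
got = dict(con.execute("SELECT region, SUM(amount) FROM orders GROUP BY region"))

# independent oracle computed in plain Python, not via SQL
want: dict[str, int] = {}
for region, amount in rows:
    want[region] = want.get(region, 0) + amount

# compare only the keys/columns under test, so adding a column to the
# real result set later would not break this check
assert got == want
print("ok")
```

The same shape works against a real warehouse if you can point the fixture inserts at a scratch schema.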

greekorich · 3 days ago
I've been a professional java dev for a decade. I've written a little python, clojure, lots of JS/TS/Node.

SQL is the most beautiful, expressive, get stuff done language I've used.

It is perfect for whatever data engineering is defined as.

antupis · 3 days ago
SQL is beautiful when it works, but when it doesn’t you end up with some abomination, e.g. if you need some kind of dynamic query.
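Dynamic SQL doesn't have to become string-concatenation soup: keep values in placeholders and restrict column names to an allow-list, since identifiers can't be bound as parameters. A sketch in Python (the `users` table and its columns are hypothetical):

```python
import sqlite3

# identifiers cannot be bound as parameters, so column names
# must come from a trusted allow-list
ALLOWED = {"region", "plan"}

def build_query(filters: dict) -> tuple[str, list]:
    bad = set(filters) - ALLOWED
    if bad:
        raise ValueError(f"unknown columns: {bad}")
    clauses = " AND ".join(f"{col} = ?" for col in filters)
    sql = "SELECT COUNT(*) FROM users"
    if clauses:
        sql += " WHERE " + clauses
    return sql, list(filters.values())

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (region TEXT, plan TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [("eu", "pro"), ("eu", "free"), ("us", "pro")])

sql, params = build_query({"region": "eu", "plan": "pro"})
count = con.execute(sql, params).fetchone()[0]
print(count)  # 1
```

It's still an abomination compared to a static query, but at least the values never touch the SQL string.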
getnormality · 3 days ago
It's not hard to do data engineering to the standards of software engineering, and many people do it already, provided that

1. You use a real programming language that supports all the abstractions software engineers rely on, not (just) SQL.

2. The data is not too big, so the feedback cycle is not too horrendously slow.

#2 can't ever be fully solved, but testing a data pipeline on randomly subsampled data can help a lot in my experience.
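One way to make that subsample trustworthy is to sample by hashing a stable key rather than picking random rows: the same user is always in or out, so joins across tables still line up, and the sample is reproducible across runs. A sketch, with `in_sample` invented for illustration:

```python
import hashlib

def in_sample(key: str, percent: int) -> bool:
    # hash-based sampling: a given key is deterministically in or out,
    # so all rows for a sampled user stay together across every table
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % 100 < percent

users = [f"user_{i}" for i in range(1000)]
sample = [u for u in users if in_sample(u, 10)]
print(len(sample))  # roughly 100 of 1000
```

Row-level random sampling, by contrast, silently breaks referential integrity between fact and dimension tables.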

sdairs · 3 days ago
In your experience, how are folks doing (1)? The post is talking about a framework to add e.g. type safety, schema-as-code, etc. over assets in data infra in a familiar way as to what is common with Postgres; I'm not familiar with much else out there for that?
getnormality · 3 days ago
Python, R, and Julia all have at least one package that defines a tabular data type. That means we can pass tables to functions, use them in classes, write tests for them, etc.

In all of these packages, the base tabular object you get is a local in-memory table. For manipulating remote SQL database tables, the best full-featured object API is provided by R's dbplyr package, IMHO.

I think Apache Spark, Ibis, and some other big data packages can be configured to do this too, but IMHO their APIs are not nearly as helpful. For those who (understandably) don't want to use R and need an alternative to dbplyr, Ibis is probably the best one to look at.
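The core idea, tables as first-class values that you can pass to functions and unit-test, doesn't even require one of those packages; a plain list of dicts demonstrates it (the `Table` alias and `with_column` helper are invented here for illustration):

```python
from typing import Callable

Table = list[dict]   # a stand-in for a DataFrame-like tabular type

def with_column(t: Table, name: str, fn: Callable[[dict], object]) -> Table:
    # pure transform: returns a new table, leaves the input untouched,
    # which is what makes it trivially unit-testable
    return [{**row, name: fn(row)} for row in t]

orders: Table = [{"qty": 2, "price": 3.0}, {"qty": 1, "price": 5.0}]
out = with_column(orders, "total", lambda r: r["qty"] * r["price"])
print(out[0]["total"])  # 6.0
```

The dedicated packages add lazy evaluation, columnar storage, and SQL pushdown on top, but the programming model is the same.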

teleforce · 3 days ago
For a foundation in data engineering, I'd recommend this book by Joe Reis and Matt Housley. They did a good job of providing a framework that covers the data engineering lifecycle, software engineering, data management, data architecture, etc. You can check the proposed framework here [1], [2].

[1] Fundamentals of Data Engineering:

https://www.oreilly.com/library/view/fundamentals-of-data/97...

[2] Fundamentals of Data Engineering Review:

https://maninekkalapudi.medium.com/fundamentals-of-data-engi...

ludicity · 3 days ago
I think they've been fully converged in most strong practitioners for a long time.

There's a specific type of "data engineer" (quotes to indicate this is what they're called by the business, not to contest their legitimacy) that just writes lots of SQL, but they're usually a bad hire for businesses. They're approximately as expensive as what people call platform engineers, but platform engineers in the data space can usually do modelling as well.

When organizations split teams up by the most SWE-type DEs and the pure SQL ones, the latter all jockey to join the former team which causes a lot of drama too.