This article seems written by someone who never had to work with diverse data pipelines.
I work with large volumes of data from many different sources. I’m lucky to get them to send csv. Of course there are better formats, but all these sources aren’t able to agree on some successful format.
Csv that’s zipped is producible and readable by everyone. And that makes it more efficient.
I’ve been reading these “everyone is stupid, why don’t they just do the simple, right thing and I don’t understand the real reason for success” articles for so long it just makes me think the author doesn’t have a mentor or an editor with deep experience.
It’s like arguing how much mp3 sucks and how we should all just use flac.
The author means well, I’m sure. Maybe his next article will be about how airlines should speak Esperanto because English is such a flawed language. That’s a clever and unique observation.
Totally agree. His arguments are basically "performance!" (which is honestly not important to 99% of CSV export users) and "It's underspecified!" And while I can agree with the second, at least partly, in the real world the spec is essentially "Can you import it to Excel?". I'm amazed at how much programmers can discount "It already works pretty much everywhere" for the sake of more esoteric improvements.
All that said (and perhaps counter to what I said), I do hope "Unicode Separated Values" takes off. It's essentially just a slight tweak to CSV where the delimiters are special unicode characters, so you don't have to have complicated quoting/escaping logic, and it also supports multiple sheets (i.e. a workbook) in a single file.
> in the real world the spec is essentially "Can you import it to Excel?"
And the answer to that is always no. You will think it's yes because it works for you, but when you send it to someone who has a different Excel version or simply different regional settings, it won't work. The recipient will first have to figure out what dialect you used to export.
The IANA standard for TSV already disallows tabs inside fields, so you can skip writing any quoting logic (in principle). The MIME type is `text/tab-separated-values`.
So true. Working with imports/exports in CSV from ERP software. One can't imagine how often "Oh, this import doesn't work. I'll just fix the CSV file" occurs. Try that with some compressed, "esoteric" file, or even XML, and users will break it.
Besides all the downsides CSV has, as soon as it's not only machine-machine communication and a human is involved, CSV is just simple enough.
Check out Polars in Python if you want some CSV performance lol. I recently got a 5 million row CSV from a 3rd party and could run column operations (filtering, sorting, grouping) that each took less than a second. It's an incredible tool.
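For anyone curious what that looks like, here's a minimal sketch with Polars; the file and column names are made up, and newer Polars versions spell the grouping method `group_by` (older ones use `groupby`):

```python
import polars as pl

# Lazily scan the CSV so Polars only materializes what the query needs.
# "orders.csv", "amount" and "customer_id" are placeholder names.
result = (
    pl.scan_csv("orders.csv")
    .filter(pl.col("amount") > 0)
    .group_by("customer_id")
    .agg(pl.col("amount").sum().alias("total"))
    .sort("total")
    .collect()
)
print(result)
```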
To me this criticism feels excessive. It feels like the author is describing their frustrations with internal usage of CSVs - there's no mention of customers and non-technical stakeholders at all. I think it goes without saying that Parquet files and other non-human-readable formats are a nonstarter when working with external stakeholders and the last paragraph makes that clear - if the end-user wants CSV, give them CSV.
I also think we shouldn't blindly dismiss the performance drawbacks of CSV when working with data pipelines. At even modest scales it becomes hard to work with very large CSVs because the data often doesn't fit into memory, a problem easily solved by Parquet and other formats assuming you only need a subset.
I deal with gig size csvs all the time and don’t have any performance issues. These aren’t huge files, but decent sized. And most are just a few megs and only thousands to millions of records.
Csv is not very performant, but it doesn’t matter for these use cases.
I’ll also add that I’m not working with the csvs, they are just I/o. So any memory issues are handled by the load process. I certainly don’t use csvs for my internal processes. Just for when someone sends me data or I have to send it back to them.
That being said my workstation is pretty big and can handle 10s of gigs of csv before I care. But that’s usually just for dev or debugging and anything that sticks around will be working with data in some proper store (usually parquet distributed across nodes).
I very much agree with this. For an integration where you have control over both ends of the pipeline, CSV is not optimal unless there's existing work to build on, and even then it's a legacy choice.
Parquet and Avro are widely supported in backend languages and also in data analysis. I don't think the article is talking about exported-like-a-jpeg, but instead exported-like-a-daily-report-run: the data scientist doing the exporting is probably using R or Pandas instead of Excel, and can reasonably be expected to read https://arrow.apache.org/docs/r/reference/read_parquet.html.
Why is this getting downvoted? They're right that the criticism is pretty excessive:
"Maybe his next article will be about how airlines should speak Esperanto because English is such a flawed language. That’s a clever and unique observation."
Not saying csv doesn’t have its issues, but I don’t think the author made a convincing argument.
A lot of the issues the author brought up didn’t sound that bad and/or it sounds like he never looked at the source data first.
If you’re doing work with large datasets, I think it’s a good practice to at least go and look at the source data briefly to see what to expect.
This will give you a good idea of the format it outputs, data types, some domain context, etc. or some combination thereof and I don’t think it even takes that long.
Also, it reminds me of the arguments against excel in a way. Most people know what a csv is, more or less how to open it, and don’t need too much context when discussing the file. Someone will quickly understand if you tell them the file isn’t delimited properly right away. These are pros that shouldn’t be taken for granted.
Again, I’m not saying csv doesn't have issues or that there aren't better alternatives, simply that I didn't find this particular argument convincing.
IME most people don't know that using Excel to open and save a csv will silently mangle data. In our application leading zeros are significant, so we constantly get screwed by people trying to do quick manual edits and breaking the data. If we're lucky it breaks so badly the import fails. It's worse when the mangling results in structurally valid but wrong data.
> all these sources aren’t able to agree on some successful format.
But the same is true for csv, and they are not readable by everyone, since you don't always know how to read them; there is not enough info for that.
Also, it's not a good reflection on "deep experience" if it leads to reflexive defense of common stupid things people do, with wrong analogies (e.g., flac is less efficient, so it's more like csv).
My entire experience with software development has been me bellyaching about how stupidly things are set up, why don't we do it this way instead, etc... only to actually set about working on fixing these things and realizing it's either way harder than I thought, it makes more sense than I thought, or it just plumb isn't worth the effort.
The effervescent suggestion of brighter, more logical, even obvious solutions is often a clear indicator of domain inexperience or ignorance.
And it's surprisingly hard for etl departments to export to csv correctly. I mean, if they can't do csv they can't do anything more complicated for sure.
> This article seems written by someone who never had to work with diverse data pipelines
I think that's a little unfair, it sounds like the author does have a decent amount of experience working with real-world CSV files:
> I remember spending hours trying to identify an issue that caused columns to "shift" around 80% into a 40GB CSV file, and let me tell you, that just isn't fun.
> Csv that’s zipped is producible and readable by everyone. And that makes it more efficient.
If only CSV were CSV, as opposed to some form that's 80-90% CSV by line count with enough oddities to really make the parser ugly and hard to read.
See, the sweet spot isn't something completely unstructured, because then you feel justified in throwing up (your hands) and declaring defeat. The sweet spot is a file that's sufficiently close to being structured you can almost parse it nicely, but has enough bad lines you can't go in and fix them all by hand in a reasonable timeframe, and you can't tell upstream to get their shit in order because it's only a few lines.
Did you actually read the conclusion at the end of the article?
"Of course, we can't conclude that you should never export to CSV. If your users are just going to try to find the quickest way to turn your data into CSV anyway, there's no reason why you shouldn't deliver that. But it's a super fragile file format to use for anything serious like data integration between systems, so stick with something that at the very least has a schema and is more efficient to work with."
It's a premature optimization issue. If you don't have special requirements like IO throughput, or mission critical data accuracy guarantees, be biased towards picking the format that anyone can easily open in a spreadsheet.
"Of course there are better formats, but all these sources aren’t able to agree on some successful format."
It's the same with csv. They come in all kinds of formats because nobody agreed on the standard. Comma separated, semicolon separated, pipe separated, escaped, not escaped.
Every time I have to deal with csv I first have to figure out how to parse it in code.
So I think the author is right, we must agree on a better format because that is what friends do.
You are also right because it's an illusion to think that this is going to change anytime soon. But who knows...
Every integration I’ve ever worked on has started off with high ideas of APIs and nice data standards. And has eventually devolved into “can we just put a CSV file on an FTP site…”. With the inevitable, “it’s not really CSV…”
"You give up human readable files, but what you gain in return is..."
Stop right there. You lose more than you gain.
Plus, taking the data out of [proprietary software app my client's data is in] in csv is usually easy. Taking the data out in Apache Parquet is...usually impossible, but if it is possible at all you'll need to write the code for it.
Loading the data into [proprietary software app my client wants data put into] using a csv is usually already a feature it has. If it doesn't, I can manipulate csv to put it into their import format with any language's basic tools.
And if it doesn't work, I can look at the csv myself, because it's human readable, to see what the problem is.
90% of real world coding is taking data from a source you don't control, and somehow getting it to a destination you don't control, possibly doing things with it along the way. Your choices are usually csv, xlsx, json, or [shudder] xml. Looking at the pros and cons of those is a reasonable discussion to have.
I think his arguments apply more closely to SQLite databases. They're not directly human readable, but boy are there a lot of tools for working with them.
We have a use case where we effectively need to have a relational database, but in git. The database doesn't change much, but when it does, references between tables may need to be updated. But we need to easily be able to see diffs between different versions. We're trying an SQLite DB, with exports to CSV as part of CI - the CSV files are human-readable and diff'able.
It's also worth noting that SQLite can ingest CSV files into memory and perform queries on them directly - if the files are not too large, it's possible to bypass the sqlite format entirely.
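The sqlite3 CLI's `.import` does this natively; from Python the same effect is only a few lines. A sketch, with hypothetical file and column names:

```python
import csv
import sqlite3

# Load a CSV into an in-memory SQLite table and query it directly.
with open("people.csv", newline="") as fh:
    rows = list(csv.reader(fh))
header, data = rows[0], rows[1:]

con = sqlite3.connect(":memory:")
cols = ", ".join(f'"{name}"' for name in header)
marks = ", ".join("?" for _ in header)
con.execute(f"CREATE TABLE people ({cols})")
con.executemany(f"INSERT INTO people VALUES ({marks})", data)

for row in con.execute('SELECT "city", COUNT(*) FROM people GROUP BY "city"'):
    print(row)
```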
Real world example of this that we just experienced:
I work with a provider who offers CSV exports as the only way to access data. Recently, we found they were including unsanitized user input directly in fields. They weren't even quoting these fields.
The platform "notified their quality assurance team ASAP" (like every other issue, we never heard back), but we had a deadline. This, of course, was a mess, but being able to quickly open the file and fix quotes was all it took. I shudder at the thought of trying to salvage a corrupted binary with zero help from the holder of the data.
This sounds like a problem that wouldn’t have existed in the first place if they had followed a binary protocol with a standard format and used a proper serialization library.
The issue comes from CSV files looking easy to generate by hand, when in fact they are not.
In my experience, human readable file formats are a mistake. As soon as people can read a single file they think that that's the entire format and that it's okay to write it by hand or write their own code for it. And when everyone writes code based on just what they've personally seen about a format, everyone is sad.
This is why not a single piece of software on earth uses the CSV RFC. This is why people hand you CSVs that don't quote string fields with commas in them. This is why you find software that can't handle non-comma delimiters. This is why you find software that assumes that any field made of digits is an integer and then crashes when it tries to do string operations on it. This is why you find software that can't be used unless you change your computer's locale because the front end understands locales and uses commas for numbers but the backend is running on a server and doesn't know what locales are and now everything is broken.
This has happened for every single "human-readable" format in existence: html, markdown, CSV, rtf, json, everything. I consider human readability to be a death knell for a format's chances of broad, easy interoperability. There are certainly other things that can doom a format - parquet is almost too complex to implement and so only just barely works, for example - but I'll take a sqlite database over a csv every single time.
I think a takeaway could also be not to give people options when making a human-readable format. "you always need quotes, they're not optional" solves the comma problem. "the delimiter is always a comma" solves the delimiter problem. json has also fared better than csv, I'd say.
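A sketch of what the "always quote" convention looks like with Python's csv module (the row values are made up):

```python
import csv
import sys

# QUOTE_ALL wraps every field in quotes, so embedded commas are never ambiguous.
writer = csv.writer(sys.stdout, quoting=csv.QUOTE_ALL)
writer.writerow(["1", "Doe, Jane", "note with a comma, and a \"quote\""])
# "1","Doe, Jane","note with a comma, and a ""quote"""
```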
In the French locale, the decimal point is the comma, so "121.5" is written "121,5". It means, of course, that the comma can't be used as a separator, so the semicolon is used instead.
It means that depending on whether or not the tool that exports the CSV is localized, you get commas or you get semicolons. If you are lucky, the tool that imports it speaks the same language. If you are unlucky, it doesn't, but you can still convert it. If you are really unlucky, then you get commas for both decimal numbers and separators, making the file completely unusable.
There is a CSV standard, RFC 4180, but no one seems to care.
There are tools to convert between the formats. Either you have a defined data pipeline where you know what you get at each step and apply the necessary transformations. Or you get random files and, yes, have to inspect them and see how to convert them if necessary.
It’s unfortunate that there isn’t a single CSV format, but for historical reasons it is what it is. It’s effectively more like a family of formats that share the same file extension.
Excel actually has a convention where it understands when there is a line
sep=;
at the start of the file.
By the way, in addition to differing separators, you can also get different character encodings.
Excel does understand a BOM to indicate UTF-8, but some versions of Excel unfortunately ignore it when the “sep=“ line is present…
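For reference, a sketch of writing such a file from Python (file name and values are made up; as noted above, some Excel versions ignore a UTF-8 BOM when the sep= line is present):

```python
import csv

# Write a semicolon-delimited file with the "sep=;" hint Excel recognizes.
with open("report.csv", "w", newline="", encoding="utf-8") as fh:
    fh.write("sep=;\n")
    writer = csv.writer(fh, delimiter=";")
    writer.writerow(["id", "amount"])
    writer.writerow(["1", "121,5"])
```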
Thank you! This is a game changer. As I was reading through these comments I was thinking how much better it would be if the separator could be specified in the file, but it would only be useful for my own stuff, if I was to do that on my own.
I’ll be trying this first thing tomorrow at work. I don’t do as much with CSVs as I used to, but am currently working with them a lot, and have a single entry that always throws the resulting Excel file off, and I can’t be bothered to figure out how to escape it (I spent too much time in the past messing with that and got nowhere). This sep= line will hopefully solve my issues.
What kind of voodoo is this? I've always wanted something like this for non technical coworkers across countries but I didn't know it existed. I always just exported TSV and provided steps for how to import it (although I think most excel versions have native tsv support).
You are mistaken. More countries overall probably use a decimal comma, but the decimal point is the convention in many countries, including China, India, Nigeria and the Philippines.
Any serious CSV tool has the option to pick a delimiter. Usually semicolon or comma; some offer additional options. The only impact it has is on which fields need quoting. When using a comma, all decimal numbers in many locales need to be quoted. When using a semicolon, those don't need to be quoted.
Overall, semicolon feels like the superior delimiter.
Most sensible people don't export formatted numbers (e.g. 100.000,00), but even those are pretty trivial to import.
i.e.:
```
"1,20","2,3",hello
"2,40","4,6",goodbye
```
In my experience, this is only a problem when you are using Excel. It's ridiculous how bad Excel is at handling CSVs, I really cannot comprehend it. If you use LibreOffice all your problems magically disappear.
I found out that Excel follows the Windows locale: you can change the decimal and thousands separators in Windows, and it will affect how Excel exports and reads CSVs.
If your tool reads CSV by doing `string_split(',', line);`, your tool is doing it wrong. There's a bunch of nuance and shit, which can make CSVs interesting to work with, but storing a comma in a field is a pretty solved issue if the tool in question has more than 5 minutes thought put into it.
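A quick illustration of the difference, using Python's csv module on a made-up line:

```python
import csv
import io

line = '1,"Hello, world",ok'

# Naive splitting breaks the quoted field apart...
print(line.split(","))                       # ['1', '"Hello', ' world"', 'ok']

# ...while a real CSV reader respects the quoting.
print(next(csv.reader(io.StringIO(line))))   # ['1', 'Hello, world', 'ok']
```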
If only a totally separate data field separator character had been invented early on and been given its own key on the keyboard, coloured: "only use for field delimiting". You know as well as I do that it would have been co-opted into another rôle within weeks, probably as a currency indicator.
You should probably use PSV - Point Separated Variable! Obviously we would need to adjust PSV to account for correct French word order (and actually use French correctly). Britain and USA would use something called PVS2 instead as a data interchange format with France which involves variable @@@ symbols as delimiters, unless it is a Friday which is undefined. The rest of the world would roll its eyes and use PSV with varying success. A few years later France would announce VSP on the back page of Le Monde, enshrine its use in law (in Aquitaine, during the fifteenth century) but not actually release the standard for fifteen years.
The world is odd. We have to work within an odd world.
Interestingly enough, you and I could look at a CSV encoded data set with commas as decimal separators and work out what is going on. You'll need a better parser!
My company has been using DSV for a bit: Dagger Separated Values. Unicode dagger (†) to separate values and double-dagger (‡) to indicate end of row. It allows us to easily handle both commas and newlines.
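A sketch of how trivially such a file parses (the sample record is invented):

```python
# Dagger-separated values: "\u2020" is the dagger, "\u2021" the double dagger.
def parse_dsv(text):
    return [row.split("\u2020") for row in text.split("\u2021") if row]

sample = "a\u2020b, with a comma\u2020line one\nline two\u2021"
print(parse_dsv(sample))
# [['a', 'b, with a comma', 'line one\nline two']]
```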
> In the French locale, the decimal point is the comma, so "121.5" is written "121,5". It means, of course, that the comma can't be used as a separator
Yeah, hon hon hon and all, but one of my (US) bank statements exports a CSV which uses commas in numbers in the US fashion, so $1,500 and the like. Writing a custom CSV munger to intake that into ledger-csv was... fun, but then again, only had to do it once.
Of course if you only consider the disadvantages, something looks bad.
The advantages of CSV are pretty massive though - if you support CSV you support import and export into a massive variety of business tools, and there is probably some form of OOTB support.
You have a (usually) portable transport format that can get the information into and out of an enormous variety of tools that do not necessarily require a software engineer in the middle.
I'm also struggling with such a quick dismissal of human readable formats. It's a huge feature.
What happens when there's a problem with a single CSV file in some pipeline that's been happily running fine for years? You can edit the thing and move on with your day. If the format isn't human readable, now you may have to make and push a software update to handle it.
Of course, CSV is a terrible format that can be horribly painful. No argument there.
But despite the pain, it's still far better than many alternatives. In many situations.
You can do tab-separated "CSV" and it'll be much better, avoid the quoting and delimiter issues that somehow trip up something half the time, and pretty much all these tools have always supported that format as well.
The reason CSV is popular is because it is (1) super simple, and (2) the simplicity leads to ubiquity. It is extremely easy to add CSV export and import capability to a data tool, and that has come to mean that there are no data tools that don't support CSV format.
Parquet is the opposite of simple. Even when good libraries are available (which they usually aren't), it is painful to read a Parquet file. Try reading a Parquet file using Java and Apache Parquet lib, for example.
Avro is similar. Last I checked there are two Avro libs for C# and each has its own issues.
Until there is a simple format that has ubiquitous libs in every language, CSV will continue to be the best format despite the issues caused by under-specification. Google Protobuf is a lot closer than Parquet or Avro. But Protobuf is not a splittable format, which means it is not Big Data friendly, unlike Parquet, Avro and CSV.
> Parquet is the opposite of simple. Even when good libraries are available (which they usually aren't), it is painful to read a Parquet file. Try reading a Parquet file using Java and Apache Parquet lib, for example.
I skimmed their docs a bit: https://parquet.apache.org/docs/
I would not look forward to implementing that.
It all seems rather complex, and even worse: not actually all that well described. I suppose all the information is technically there, but it's really not a well-designed, well-written specification that's easy to implement. The documentation seems like an afterthought.
This is probably why good libraries are rare.
> Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem
Nope and nope.
> But Protobuf is not a splittable format, which means it is not Big Data friendly, unlike Parquet, Avro and CSV.
What? When I worked at Google, concatenating protobuf strings was a common way to combine protobufs; they are absolutely splittable. People might not know it, but there is a reason they are designed like they are: it is to handle big data, as you say.
If you mean you can't split a single protobuf, sure, but you don't need to do that; it is like trying to split a single csv line, which doesn't mean csv isn't splittable.
Edit: Maybe some of that tooling to work with protobufs is internal-only or not part of the external protobuf packages, though.
I laughed. We don't call somebody writing a poor varint serializer for in-house use, then discovering that it doesn't handle negative numbers and floats that well so slapping a couple of hotfixes on top of it that make it necessary to have a protocol specification file, "design".
Protobuf does not have built-in delimiters or sync markers between records, which makes it impossible to start reading from an arbitrary point in the middle of a Protobuf-encoded file and correctly interpret the data. That makes Protobuf not a splittable format.
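In other words, the usual workaround is to bolt length-prefix framing on top, and record boundaries can then only be recovered by reading from the start. A minimal sketch (the payloads stand in for serialized protobuf messages):

```python
import struct

def write_framed(payloads, fh):
    # Prefix each record with its length, since protobuf has no delimiters.
    for payload in payloads:
        fh.write(struct.pack("<I", len(payload)))
        fh.write(payload)

def read_framed(fh):
    # Boundaries are only discoverable sequentially; if you seek to an
    # arbitrary byte offset there is no sync marker to resynchronize on.
    while True:
        header = fh.read(4)
        if len(header) < 4:
            return
        (size,) = struct.unpack("<I", header)
        yield fh.read(size)
```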
Avro is a terrible serialization format. Well, not necessarily the spec but all tooling around it is. It should never be picked over multitude of other, better options unless the company is, maybe, a Java shop.
> Google Protobuf is a lot closer than Parquet or Avro. But Protobuf is not a splittable format, which means it is not Big Data friendly, unlike Parquet, Avro and CSV.
Eh, I don't think that's the problem. If that was the problem, there are a zillion ways to chunk files; .tar is probably the most ubiquitous but there are others.
The bigger problem is that Protobuf is way harder to use. Part of the reason CSV is underspecified is it's simple enough it feels, at first glance, like it doesn't need specification. Protobuf has enough dark corners that I definitely don't know all of it, despite having used it pretty extensively.
I think Unicode Separated Values (USV) is a much better alternative and as another poster mentioned, I hope it takes off.
Author here. I see now that the title is too controversial, I should have toned that down. As I mention in the conclusion, if you're giving parquet files to your user and all they want to know is how to turn it into Excel/CSV, you should just give them Excel/CSV. It is, after all, what end users often want. I'm going to edit the intro to make the same point there.
If you're exporting files for machine consumption, please consider using something more robust than CSV.
Well what would be a more accurate title? "CSV format should only be for external interchange or archival; columnar formats like Parquet or Arrow better for performance"?
People are busy; instead of hinting "something more robust than CSV", mention the alternatives and show a comparison (load time/search time/compression ratio) summary graph. (Where is the knee of the curve?)
There's also an implicit assumption to each use-case about whether the data can/should fit in memory or not, and how much RAM a typical machine would have.
As you mention, it's pretty standard to store and access compressed CSV files as .csv.zip or .csv.gz, which at least mitigates the space issue, trading it for a performance overhead when extracting or searching.
The historical reason a standard like CSV became so entrenched with business, financial and legal sectors is the same as other enterprise computing; it's not that users are ignorant; it's vendor and OS lock-in. Is there any tool/package that dynamically switches between formats internally? estimates comparative file sizes before writing? ("I see you're trying to write a 50Gb XLSX file...")
estimates read time when opening a file? etc. Those sort of things seem worth mentioning.
> Well what would be a more accurate title? "CSV format should only be for external interchange or archival; columnar formats like Parquet or Arrow better for performance"?
Something more boring, like "Consider whether other options make more sense for your data exports than CSV". Plenty of people have suggested other good options in comments on this submission, such as for example sqlite. I think the post comes off as if I'm trying to sell a particular file format for all use cases, when what I had in mind when writing it was to discourage using CSV as a default. CSV has a place, certainly, but it offloads a lot of complexity onto the people who are going to consume the data; in particular, they need to figure out how to interpret it. This can't necessarily be done by opening the file in an editor and looking at it; beyond a certain size you're going to need programming or great tools to inspect it anyway.
I was given an initial export of ~100 poor-quality CSV files totaling around 3TB in size (~5-6 different tables, ~50 columns in each) a few years back, and had to automate ingestion of those and future exports. We could've saved a lot of work if the source was able to export data in a friendlier format. It happened more than once during that project that we were sent CSVs or Excel sheets that had mangled data, such as zip codes or phone numbers with leading 0s removed. I think it is a good thing to inform people of these problems and encourage the use of formats that don't necessitate guessing data types. :shrug:
> People are busy; instead of hinting "something more robust than CSV", mention the alternatives and show a comparison (load time/search time/compression ratio) summary graph. (Where is the knee of the curve?)
This might be an interesting thing to follow up later, but would require a lot more work.
> I see now that the title is too controversial, I should have toned that down.
Sometimes a click-baity title is what you need to get a decent conversation/debate going. Considering how many comments this thread got, I'd say you achieved that even if sparking a lengthy HN thread had never been your intent.
I got a parquet file once and I was like WTF is this format?
The problem with parquet is it's complicated and you basically have to remap from parquet to whatever you're importing into because the people on the other side have remapped from whatever to parquet.
There are likely relationships and constraints there that you'll have to hack around - which is harder to do because the parquet tools sort of suck/aren't as flexible.
With CSV you can hack around any problem in the ETL process.
Or export to CSV correctly and test with Excel and/or LibreOffice. Honestly CSV is a very simple, well defined format, that is decades old and is “obvious”. I’ve had far more trouble with various export to excel functions over the years, that have much more complex third-party dependencies to function. Parsing CSV correctly is not hard, you just can’t use split and be done with it. This has been my coding kata in every programming language I’ve touched since I was a teenager learning to code.
String encoding? Dates? Formatted numbers? Booleans (T/F/Y/N/etc)? Nested quotes? Nested CSV!?
How about intermediate systems that muck things up? String encoding going through a pipeline with a misconfiguration in the middle. Data with US dates pasted into UK Excel and converted back into CSV, so that the data is a mix of m/d/yy and d/m/yy depending on the magnitude of the numbers. Hand-munging of data in Excel generally, so that sometimes the data is misaligned WRT rows and columns.
I've seen things in CSV. I once wrote an expression language to help configure custom CSV import pipelines, because you'd need to iterate a predicate over the data to figure out which columns are which (the misalignment problem above).
If the use-case has these complexities, then other formats may be better. I'll go out on a limb and say that MOST data exports to csv are simple column data where it works just fine - at least that's been my experience.
Intermediate systems like Excel will break anything, they aren’t constrained to CSV. Excel screws up at the level of a cell value, not at the file format.
- treat it as a join, and unroll by duplicating non-nested CSV data in separate rows for every element in the nested CSV
- treat it as a projection, have an extraction operator to project the cells you want
- treat it as a text substitution problem; I've seen CSV files where every line of CSV was quoted like it was a single cell in a larger CSV row
You get nested CSV because upstream systems are often master/detail or XML but need to use CSV because everybody understands CSV because it's such a simple file format. Good stuff.
Indeed. It's like 'text file' - there are many ways to encode and decode these.
Add another munging to the list: ids that 'look like numbers', e.g. `0002345`, will get converted to `2,345`. Better be sure to pre-pend ', i.e. `'0002345`
Most of the problems come from treating CSV fields as something other than US-ASCII text strings. They're not dates, they're not numbers, they're just freeform text. If you want fancy-pants features like "numbers" or the ability to represent all common words in use in the US (e.g. the California town Rancho Peñasquitos) and aren't a masochist, you don't want CSV.
> Parsing CSV correctly is not hard, you just can’t use split and be done with it.
Parsing RFC-compliant CSVs and telling clients to go away with non-compliant CSVs is not hard.
Parsing real world CSVs reliably is simply impossible. The best you can do is heuristics.
How do you interpret this row of CSV data?
1,5,The quotation mark "" is used...,2021-1-1
What is the third column? The RFC says that it should just be literally
> The quotation mark "" is used...
But the reality is that some producers of CSVs, which you will be expected to support, will just blindly apply double quote escaping, and expect you to collapse the "" back to a single " when reading.
> 5. Each field may or may not be enclosed in double quotes (however some programs, such as Microsoft Excel, do not use double quotes at all). If fields are not enclosed with double quotes, then double quotes may not appear inside the fields.
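To make the ambiguity concrete, here's how a lenient reader (Python's csv module, which keeps stray quotes in unquoted fields) sees the row above, versus what a blindly-escaping producer meant:

```python
import csv
import io

line = '1,5,The quotation mark "" is used...,2021-1-1'

# The RFC-literal reading: the field was never quoted, so "" stays as-is.
print(next(csv.reader(io.StringIO(line))))
# ['1', '5', 'The quotation mark "" is used...', '2021-1-1']

# A producer who blindly doubled every quote meant:
#   'The quotation mark " is used...'
# and nothing in the file tells you which reading is correct.
```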
The delimiter is a setting that can be changed. I never said hard code. This is an interesting exercise to give students, but I am struggling to look back in my mind through the last 25 years of line of business application development where any of this was intractable. My approach has been to leverage objects. I have a base class for a CsvWriter and CsvReader that does RFC-compliant work. I get the business stakeholders to provide samples of files they need to import or export. I look at those with my eyes and make sub-classes as needed.
And data type inference is a fun side project. I worked for a while on a fully generic CSV to SQL converter. Basically you end up with regex matches for different formats and you keep a running tally of errors encountered and then do a best fit for the column. Usually a single CSV file is consistent with itself for whatever weird formatting was induced by the process the other side used. It actually worked really well, even on multi-gigabyte CSV files from medical companies that one of my clients had to analyze with a standard set of SQL reports.
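A bare-bones sketch of that kind of column type inference, keeping a tally of which pattern each column fails (the patterns and the error threshold are invented):

```python
import csv
import re

PATTERNS = {
    "integer": re.compile(r"^-?\d+$"),
    "decimal": re.compile(r"^-?\d+[.,]\d+$"),
    "date":    re.compile(r"^\d{4}-\d{2}-\d{2}$"),
}

def infer_column_types(path):
    with open(path, newline="") as fh:
        reader = csv.reader(fh)
        header = next(reader)
        # misses[column][type] counts values that don't match that pattern.
        misses = [{name: 0 for name in PATTERNS} for _ in header]
        total = 0
        for row in reader:
            total += 1
            for i, value in enumerate(row[: len(header)]):
                for name, pattern in PATTERNS.items():
                    if value and not pattern.match(value):
                        misses[i][name] += 1
    # Pick the best-fitting type per column; fall back to text if nothing fits well.
    types = []
    for col in misses:
        name, errors = min(col.items(), key=lambda kv: kv[1])
        types.append(name if total and errors / total < 0.05 else "text")
    return dict(zip(header, types))
```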
Y'all caught me being less than rigorous in my language while posting from my phone while in the middle of making breakfast for the kids and the wife to get out the door. To the "not well defined" aspect, I disagree in part. There is an RFC for CSV and that is defined. Now the conflated part is the use of CSV as an interchange format between various systems and locales. The complexity of data exchange, in my mind, is not the fault of the simple CSV format. Nor is it the fault of the format that third party systems have decided to not implement it as it is defined. That is a different challenge.
Data exchange via CSV becomes a multiple party negotiation between software (sometimes the other side is "set in stone" such as a third party commercial system or Excel that is entrenched), users, and users' local configuration. However, while not well defined none of these are intractable from a software engineering point of view. And if your system is valuable enough and needs to ingest data from enough places it is NOT intractable to build auto detection capabilities for all the edge cases. I have done it. Is it sometimes a challenge, yes. Do project managers love it when you give 8 point estimates for what seems like it should be a simple thing, no. That is why the coding kata RFC compliance solution as a base first makes the most sense and then you troubleshoot as bug reports come in about data problems with various files.
If you are writing a commercial, enterprise solution that needs to get it right on its own every time, then that becomes a much bigger project.
But do you know what is impossible? Getting the entire universe of other systems that your customers are using to support some new interchange format. Sorry, that system is no longer under active development. No one is around to make code changes. That is not compatible with the internal tools. For better or worse, CSV is the lingua franca of columns of data. As developers, we deal with it.
And yes, do support Excel XLSX format if you can. There are multiple third party libraries to do this in various languages of various quality.
As a developer, I have made a lot of my living dealing with this stuff. It can be a fun challenge, frustrating at time, but in the end as professionals we solve what we need to to get the job done.
Unfortunately that is only the case as long as you stay within the US. For non-US users Excel has pretty annoying defaults, such as defaulting to ; instead of , as a separator for "CSV", or trouble because other languages and Excel instances use , instead of . for decimal separators.
A nice alternative I've used often is to construct an Excel table and then give it an .xls extension, which Excel happily accepts and requires much less user explanation than telling individual users how to get Excel to correctly parse a CSV.
> Unfortunately, that is only the case as long as you stay outside the US. For US users Excel has pretty annoying defaults, such as defaulting to , instead of ; as a separator for "CSV", or trouble because US instances of Excel use . instead of , for decimal separators.
* What does missing data look like? The empty string, NaN, 0, 1/1-1970, null, nil, NULL, \0?
* What date format will you need to parse? What does 5/5/12 mean?
* How has multiline data been written? Does it use quotation marks, properly escape those inside multiline strings, or maybe it just expects you to count the delimiters? And by the way, can delimiters occur inside bare strings?
And let me add my own question here:
what is the actual delimiter? Do you support `,`, `;` and `\t`?
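For the delimiter question specifically, Python's csv.Sniffer can take a guess, though it is only a heuristic (the sample text is made up):

```python
import csv

sample = "id;name;amount\n1;Jane;121,5\n2;Joe;99,0\n"

# Sniff the dialect from a sample, restricted to plausible delimiters.
dialect = csv.Sniffer().sniff(sample, delimiters=",;\t")
print(dialect.delimiter)  # ';' (a guess based on the sample)
```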
One example that will kill loading a csv in Excel, beyond the usual dates problem: if you open a csv file that has large ids stored as int64, they will be converted to an Excel number (I suspect a double) and rounded. Also, if you have a text column where some of the codes are numeric with leading zeros, the leading zeros will be lost. And NULL is treated as the string "NULL".
I am aware you can import a csv file in excel by manually defining the column types but few people use that.
I'd be fine with an extension of the csv format with one extra top row to define the type of each column.
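Excel aside, when the same file lands in a script the mangling is avoidable by refusing to guess types up front. A sketch with pandas (file and column names are hypothetical):

```python
import pandas as pd

# Read every column as text so zip codes keep their leading zeros and
# 64-bit ids are not silently rounded through a float.
df = pd.read_csv("export.csv", dtype=str, keep_default_na=False)

# Then convert only the columns that really are numeric, explicitly.
df["amount"] = pd.to_numeric(df["amount"])
```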
Excel will literally use a different value separator depending on the locale of the machine (if the decimal separator for numbers is a comma and not a dot, it’ll use a semicolon as a value separator instead of a comma).
I will hard disagree here. I've always had a CRLF issue or some other weirdness come up.
Especially if you work with teams from different countries, csv is hell. I always generate RFC-compliant csv; not once was it accepted from day one. Once, it took us two weeks to make it pass the ingestion process (we didn't have access to the ingest logs and had to request them each day, after the midnight processing), so in the end it was only 10 different tries, but still.
I once had an issue with json (well, not one created by me), and it was clearly a documentation mistake. And I hate json (I'm an XML proponent usually). Csv is terrible.
Parsing the CSV you have in front of you is not hard (usually). Writing a parser that will work for all forms of CSV you might encounter is significantly harder.
Excel and data precision are really at odds. But it might be interesting to see what would happen if Excel shipped with parquet import and export capabilities.
> Honestly CSV is a very simple, well defined format, that is decades old and is “obvious”
This is the problem though. Everyone thinks it is "obvious" and does their own broken implementation where they just concatenate values together with commas, and then outsource dealing with the garbage to whoever ends up with the file on their plate.
If a more complex, non-obvious format was required, instead of "easy I'll just concatenate values" they might actually decide to put engineering into it (or use a library)
Eh. I much prefer to produce and consume line-delimited JSON (or just raw JSON). It's easy to parse, self-descriptive, and doesn't have any of CSV's ambiguity around delimiters and escape characters.
It's a little harder to load into a spreadsheet, but in my experience, way easier to reliably parse in any programming language.
JSON (newline delimited or full file) is significantly larger than csv. With csv the field name is mentioned once in the header row. In JSON every single line repeats the field names. It adds up fast, and is more of a difference than between csv and parquet.
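A toy comparison makes the overhead easy to see (the records are made up; real ratios depend on field-name length vs. value length, and compression narrows the gap):

```python
import csv
import io
import json

rows = [{"id": i, "name": f"user{i}", "active": True} for i in range(1000)]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name", "active"])
writer.writeheader()
writer.writerows(rows)

ndjson = "\n".join(json.dumps(r) for r in rows)

# NDJSON repeats every key on every line; CSV names them once in the header.
print(len(buf.getvalue()), len(ndjson))
```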
If Excel had a standardised "Save as USV" option it would solve so many issues for me.
I get so many broken CSVs from third-parties
Commas can be typed by anyone on any keyboard and read by anyone.
Special Unicode Characters(tm) can be typed by no one on any keyboard and read by no one.
Convenience is a virtue.
https://www.iana.org/assignments/media-types/text/tab-separa...
And it really is in search of a problem to solve.
If you have important data being shuffled around systems, pick something with a specification instead.
"You give up human readable files,..."
I was genuinely interested in some alternative suggestions - but the human readability of csv is what makes it so sticky imo.
CSV created tons of issues regarding encoding, value separation, etc.
I started talking to our customers and we were able to define interfaces with a better, agreed-upon format. json made my life easier.
In some senses, I think internet culture (maybe modern intellectual culture generally) gets stuck in these repetitive conversations.
Re-prosecuting them, seemingly without knowing about all the previous times the conversation has been had.
But I’d say the error rate is actually very low, maybe .1-1% and nowhere near 10-20% of data being messed up.
"Of course, we can't conclude that you should never export to CSV. If your users are just going to try to find the quickest way to turn your data into CSV anyway, there's no reason why you shouldn't deliver that. But it's a super fragile file format to use for anything serious like data integration between systems, so stick with something that at the very least has a schema and is more efficient to work with."
It's the same with csv. They come in all kinds of formats because nobody agreed on the standard. Comma separated, semicolon separated, pipe separated, escaped, not escaped.
Everytime I have to deal with csv I first have to figure out how to parse it in code.
So I think the author is right, we must agree on a better format because that is what friends do.
You are also right because it's an illusion to think that this is going to change anythime soon. But who knows..
Plus, taking the data out of [proprietary software app my client's data is in] in csv is usually easy. Taking the data out in Apache Parquet is...usually impossible, but if it is possible at all you'll need to write the code for it.
Loading the data into [proprietary software app my client wants data put into] using a csv is usually already a feature it has. If it doesn't, I can manipulate csv to put it into their import format with any language's basic tools.
And if it doesn't work, I can look at the csv myself, because it's human readable, to see what the problem is.
90% of real world coding is taking data from a source you don't control, and somehow getting it to a destination you don't control, possibly doing things with it along the way. Your choices are usually csv, xlsx, json, or [shudder] xml. Looking at the pros and cons of those is a reasonable discussion to have.
It's also worth noting that SQLite can ingest CSV files into memory and perform queries on them directly - if the files are not too large, it's possible to bypass the sqlite format entirely.
I work with a provider who offers CSV exports as the only way to access data. Recently, we found they were including unsanitized user input directly in fields. They weren't even quoting these fields.
The platform "notified their quality assurance team ASAP" (like every other issue, we never heard back), but we had a deadline. This, of course, was a mess, but being able to quickly open the file and fix quotes was all it took. I shudder at the thought of trying to salvage a corrupted binary with zero help from the holder of the data.
The issue comes from CSV files looking easy to generate by hand, when it in fact is not.
In the French locale, the decimal point is the comma, so "121.5" is written "121,5". It means, of course, that the comma can't be used as a separator, so the semicolon is used instead.
It means that depending whether or not the tool that exports the CSV is localized or not, you get commas or you get semicolons. If you are lucky, the tool that imports it speaks the same language. If you are unlucky, it doesn't, but you can still convert it. If you are really unlucky, then you get commas for both decimal numbers and separators, making the file completely unusable.
There is a CSV standard, RFC 4180, but no one seems to care.
It’s unfortunate that there isn’t a single CSV format, but for historical reasons it is what it is. It’s effectively more like a family of formats that share the same file extension.
Excel actually has a convention where it understands when there is a line
at the start of the file.By the way, in addition to differing separators, you can also get different character encodings.
Excel does understand a BOM to indicate UTF-8, but some versions of Excel unfortunately ignore it when the “sep=“ line is present…
Thank you! This is a game changer. As I was reading through these comments I was thinking how much better it would be if the separator could be specified in the file, but it would only be useful for my own stuff, if I was to do that on my own.
I’ll be trying this first thing tomorrow at work. I don’t do as much with CSVs as I used to, but am currently working with them a lot, and have a single entry that always throws the resulting Excel file off, and I can’t be bothered to figure out how to escape it (I spent too much time in the past messing with that and got nowhere). This sep= lines will hopefully solve my issues.
https://en.wikipedia.org/wiki/Decimal_separator
https://commons.wikimedia.org/wiki/File:DecimalSeparator.svg
Heh. Ah, HN. Always good for a laugh line.
Getting records that contain newlines would be a bit trickier.
They support CSV but not your CSV.
For example, how does quoting work? Does quoting work?
They had never had a customer do X on Y field, so they never quoted it nor added code to quote it if needed.
Of course, we did X in one entry. Took me too long to find that, which obviously messed up everything after it.
Eh, I don't think that's the problem. If that was the problem, there are a zillion ways to chunk files; .tar is probably the most ubiquitous but there are others.
The bigger problem is that Protobuf is way harder to use. Part of the reason CSV is underspecified is it's simple enough it feels, at first glance, like it doesn't need specification. Protobuf has enough dark corners that I definitely don't know all of it, despite having used it pretty extensively.
I think Unicode Separated Values (USV) is a much better alternative and as another poster mentioned, I hope it takes off.
Dead Comment
If you're exporting files for machine consumption, please consider using something more robust than CSV.
People are busy; instead of hinting "something more robust than CSV", mention the alternatives and show a comparison (load time/search time/compression ratio) summary graph. (Where is the knee of the curve?)
There's also an implicit assumption to each use-case about whether the data can/should fit in memory or not, and how much RAM a typical machine would have.
As you mention, it's pretty standard to store and access compressed CSV files as .csv.zip or .csv.gz, which mitigates at least trading off the space issue for a performance overhead when extracting or searching.
The historical reason a standard like CSV became so entrenched with business, financial and legal sectors is the same as other enterprise computing; it's not that users are ignorant; it's vendor and OS lock-in. Is there any tool/package that dynamically switches between formats internally? estimates comparative file sizes before writing? ("I see you're trying to write a 50Gb XLSX file...") estimates read time when opening a file? etc. Those sort of things seem worth mentioning.
Something more boring, like "Consider whether other options make more sense for your data exports than CSV". Plenty of people have suggested other good options in comments on this submission, for example SQLite. I think the post comes off as if I'm trying to sell a particular file format for all use cases, when what I had in mind when writing it was to discourage using CSV as a default. CSV has a place, certainly, but it offloads a lot of complexity onto the people who are going to consume the data; in particular, they need to figure out how to interpret it. That can't necessarily be done by opening the file in an editor and looking at it; beyond a certain size you're going to need programming or great tools to inspect it anyway.
I was given an initial export of ~100 poor-quality CSV files totaling around 3 TB (~5-6 different tables, ~50 columns in each) a few years back, and had to automate ingestion of those and future exports. We could've saved a lot of work if the source had been able to export data in a friendlier format. It happened more than once during that project that we were sent CSVs or Excel sheets with mangled data, such as zip codes or phone numbers with leading 0s removed. I think it is a good thing to inform people of these problems and encourage the use of formats that don't necessitate guessing data types. :shrug:
> People are busy; instead of hinting "something more robust than CSV", mention the alternatives and show a comparison (load time/search time/compression ratio) summary graph. (Where is the knee of the curve?)
This might be an interesting thing to follow up later, but would require a lot more work.
Sometimes a click-baity title is what you need to get a decent conversation/debate going. Considering how many comments this thread got, I'd say you achieved that even if sparking a lengthy HN thread had never been your intent.
The problem with parquet is it's complicated and you basically have to remap from parquet to whatever you're importing into because the people on the other side have remapped from whatever to parquet.
There are likely relationships and constraints there that you'll have to hack around - which is harder to do because the parquet tools sort of suck/aren't as flexible.
With CSV you can hack around any problem in the ETL process.
String encoding? Dates? Formatted numbers? Booleans (T/F/Y/N/etc)? Nested quotes? Nested CSV!?
How about intermediate systems that muck things up? String encoding going through a pipeline with a misconfiguration in the middle. Data with US dates pasted into UK Excel and converted back into CSV, so that the data is a mix of m/d/yy and d/m/yy depending on the magnitude of the numbers. Hand-munging of data in Excel generally, so that sometimes the data is misaligned WRT rows and columns.
I've seen things in CSV. I once wrote an expression language to help configure custom CSV import pipelines, because you'd need to iterate a predicate over the data to figure out which columns are which (the misalignment problem above).
- treat it as a join, and unroll by duplicating non-nested CSV data in separate rows for every element in the nested CSV
- treat it as a projection, have an extraction operator to project the cells you want
- treat it as text substitution problem; I've seen CSV files where every line of CSV was quoted like it was a single cell in a larger CSV row
You get nested CSV because upstream systems are often master/detail or XML but need to use CSV because everybody understands CSV because it's such a simple file format. Good stuff.
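For readers who have never hit this, a tiny illustration of the "quoted cell that is itself CSV" case (the data is invented), handled with a second pass of Python's csv module:

```python
import csv
import io

# Outer row: id, order, and a quoted field that is itself a CSV record.
outer = next(csv.reader(io.StringIO('1,order-42,"SKU-1,2,9.99"\n')))
inner = next(csv.reader(io.StringIO(outer[2])))
print(outer)  # ['1', 'order-42', 'SKU-1,2,9.99']
print(inner)  # ['SKU-1', '2', '9.99']
```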
CSV gives you the freedom to choose how to write the data.
You need the French date format? You can write the French date format. You need # as separator? You can use # as separator.
Sure, you can't read an arbitrary file without knowing its specification, but that's not the use case of CSV.
CSV is for data transfer between systems which know each other.
Add another munging to the list: IDs that 'look like numbers', e.g. `0002345`, will get converted to `2,345`. Better be sure to prepend ', i.e. `'0002345`.
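If you control the reading side, forcing the column to text avoids the mangling without the leading-apostrophe hack; a hedged sketch with pandas (the column names are invented):

```python
import io

import pandas as pd

raw = "zip,city\n0002345,Springfield\n"
bad = pd.read_csv(io.StringIO(raw))                     # zip is inferred as the int 2345
ok = pd.read_csv(io.StringIO(raw), dtype={"zip": str})  # zip stays "0002345"
print(bad["zip"].iloc[0], ok["zip"].iloc[0])
```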
Parsing RFC-compliant CSVs and telling clients to go away with non-compliant CSVs is not hard.
Parsing real world CSVs reliably is simply impossible. The best you can do is heuristics.
How do you interpret this row of CSV data?
1,5,The quotation mark "" is used...,2021-1-1
What is the third column? The RFC says that it should just be literally
> The quotation mark "" is used...
But the reality is that some producers of CSVs, which you will be expected to support, will just blindly apply double quote escaping, and expect you to read:
The quotation mark " is used...
Or maybe you find a CSV producer in the wild (let's say... Spark: https://spark.apache.org/docs/latest/sql-data-sources-csv.ht...) that uses backslash escaping instead.
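A small sketch of why the producer's dialect matters: the same value serializes differently under RFC-style doubled quotes and under backslash escaping, so the consumer has to know which convention was used. This uses Python's csv module and its standard dialect options:

```python
import csv
import io

value = 'The quotation mark " is used...'

rfc = io.StringIO()
csv.writer(rfc).writerow([1, 5, value, "2021-1-1"])  # default: quote field, double the quote
print(rfc.getvalue().strip())
# 1,5,"The quotation mark "" is used...",2021-1-1

slashed = io.StringIO()
csv.writer(slashed, quoting=csv.QUOTE_NONE,
           escapechar="\\").writerow([1, 5, value, "2021-1-1"])
print(slashed.getvalue().strip())
# 1,5,The quotation mark \" is used...,2021-1-1
```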
> 5. Each field may or may not be enclosed in double quotes (however some programs, such as Microsoft Excel, do not use double quotes at all). If fields are not enclosed with double quotes, then double quotes may not appear inside the fields.
And data type inference is a fun side project. I worked for a while on a fully generic CSV-to-SQL converter. Basically you end up with regex matches for different formats, keep a running tally of errors encountered, and then do a best fit for the column. Usually a single CSV file is consistent with itself in whatever weird formatting was induced by the process the other side used. It actually worked really well, even on multi-gigabyte CSV files from medical companies that one of my clients had to analyze with a standard set of SQL reports.
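A toy version of that approach, with the regex patterns, type names, and thresholds invented for illustration:

```python
import re
from collections import Counter

# Candidate column types and the (deliberately simple) regexes that accept them.
PATTERNS = {
    "INTEGER": re.compile(r"^-?\d+$"),
    "REAL":    re.compile(r"^-?\d+[.,]\d+$"),
    "DATE":    re.compile(r"^\d{4}-\d{1,2}-\d{1,2}$"),
}

def infer_column_type(values):
    """Return the type with the fewest mismatches, falling back to TEXT."""
    errors = Counter()
    for v in values:
        if v == "":
            continue  # treat empties as missing, not as evidence
        for name, pattern in PATTERNS.items():
            if not pattern.match(v):
                errors[name] += 1
    best = min(PATTERNS, key=lambda name: errors[name])
    return best if errors[best] == 0 else "TEXT"

print(infer_column_type(["0002345", "17", ""]))      # INTEGER
print(infer_column_type(["1.5", "2,5"]))             # REAL
print(infer_column_type(["2021-1-1", "yesterday"]))  # TEXT
```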
Data exchange via CSV becomes a multi-party negotiation between software (sometimes the other side is "set in stone", such as a third-party commercial system or Excel that is entrenched), users, and users' local configuration. However, while not well defined, none of these are intractable from a software engineering point of view. And if your system is valuable enough and needs to ingest data from enough places, it is NOT intractable to build auto-detection capabilities for all the edge cases. I have done it. Is it sometimes a challenge? Yes. Do project managers love it when you give 8-point estimates for what seems like it should be a simple thing? No. That is why the coding-kata RFC-compliant solution makes the most sense as a base, and then you troubleshoot as bug reports come in about data problems with various files.
If you are writing a commercial, enterprise solution that needs to get it right on its own every time, then that becomes a much bigger project.
But do you know what is impossible? Getting the entire universe of other systems that your customers are using to support some new interchange format. Sorry, that system is no longer under active development. No one is around to make code changes. That is not compatible with the internal tools. For better or worse, CSV is the lingua franca of columns of data. As developers, we deal with it.
And yes, do support the Excel XLSX format if you can. There are multiple third-party libraries of varying quality to do this in various languages.
As a developer, I have made a lot of my living dealing with this stuff. It can be a fun challenge, frustrating at times, but in the end, as professionals, we solve what we need to in order to get the job done.
A nice alternative I've used often is to construct an Excel table and then give it an .xls extension, which Excel happily accepts and which requires much less user explanation than telling individual users how to get Excel to correctly parse a CSV.
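And if a third-party library is an option (as suggested a couple of comments up), writing a real workbook skips the extension trick entirely; a minimal sketch with openpyxl, with the file and column names made up:

```python
from openpyxl import Workbook

# Build a real .xlsx workbook: text columns and leading zeros survive untouched.
wb = Workbook()
ws = wb.active
ws.append(["zip", "city"])
ws.append(["0002345", "Springfield"])  # stored as a string, no leading-zero loss
wb.save("export.xlsx")
```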
* What does missing data look like? The empty string, NaN, 0, 1/1-1970, null, nil, NULL, \0?
* What date format will you need to parse? What does 5/5/12 mean?
* How has multiline data been written? Does it use quotation marks and properly escape those inside multiline strings, or maybe it just expects you to count the delimiters? And by the way, can delimiters occur inside bare strings?
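On the multiline question, a quick illustration of the RFC-style answer, assuming the producer actually quoted the field (the sample data is invented):

```python
import csv
import io

# The embedded newline stays inside one logical record only because the field
# is quoted; an unquoted newline would silently split this into two rows.
data = '1,"line one\nline two",x\n'
print(next(csv.reader(io.StringIO(data))))  # ['1', 'line one\nline two', 'x']
```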
And let me add my own question here:
What is the actual delimiter? Do you support `,`, `;` and `\t`? (A sniffing sketch follows below.)
What is the decimal separator: “.” or “,”?
Most csv users don’t even know they have to be aware of all of these differences.
I am aware you can import a CSV file into Excel by manually defining the column types, but few people use that.
I'd be fine with an extension of the csv format with one extra top row to define the type of each column.
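For the delimiter question above, about the best you can do without out-of-band information is to guess from a sample; csv.Sniffer in the standard library does exactly that. The sample data is invented and the guess is only a heuristic:

```python
import csv

sample = "name;amount;date\nwidget;2,5;2021-1-1\ngadget;10,0;2021-2-1\n"
dialect = csv.Sniffer().sniff(sample, delimiters=";,\t")
# Expected to print ';' -- the decimal commas appear inconsistently per row,
# so the sniffer should not mistake them for the delimiter here.
print(dialect.delimiter)
```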
And what fraction of those users would be able to do anything with another format?
> an extension of the csv format with one extra top row to define the type of each column
If the goal is foolproof export to Excel, use XLSX.
Especially if you work with teams from different countries, CSV is hell. I always generate RFC-compliant CSV; not once was it accepted from day one. Once, it took us two weeks to make it pass the ingestion process (we didn't have access to the ingest logs and had to request them each day, after the midnight processing), so in the end it was only 10 different tries, but still.
I once had an issue with JSON (well, not one created by me), and it was clearly a documentation mistake. And I hate JSON (I'm usually an XML proponent). CSV is terrible.
Parsing the CSV you have in front of you is not hard (usually). Writing a parser that will work for all forms of CSV you might encounter is significantly harder.
This is the problem though. Everyone thinks it is "obvious" and does their own broken implementation where they just concatenate values together with commas, and then outsources dealing with the garbage to whoever ends up with the file on their plate.
If a more complex, non-obvious format were required, then instead of "easy, I'll just concatenate values" they might actually decide to put engineering into it (or use a library).
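The failure mode being described, in miniature: naive string joining versus letting a library quote for you (the values are invented):

```python
import csv
import io

row = ["Acme, Inc.", 'He said "hi"', "ok"]

print(",".join(row))           # Acme, Inc.,He said "hi",ok   <- now four "columns"

buf = io.StringIO()
csv.writer(buf).writerow(row)  # quotes and escapes as needed
print(buf.getvalue().strip())  # "Acme, Inc.","He said ""hi""",ok
```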
And yet you can anyway if you are confident that your CSV won't contain anything that would mess it up.
No.
Of course there are better formats. But for many use cases friends encourage friends to export to CSV.
It's a little harder to load into a spreadsheet, but in my experience, way easier to reliably parse in any programming language.