This week, a client I'm working for received a large volume of data, and as usual the data was sent as "flat" files—or CSV (comma-separated values1) files, as they are more often called. Everyone hates CSV files, because they are badly specified, contain little metadata and are generally an unreliable way to transfer information accurately. They continue to be used, of course, because they are the lowest-common-denominator format and just about everything can read and write them in some fashion.
Some of the problems with CSV files are well captured in a pithy blog post by Jesse Donat entitled Falsehoods Programmers Believe about CSVs.
Among other things, the data we received this week featured:
- unescaped commas in unquoted (comma-separated) values;
- an unspecified non-UTF-8 encoding that also did not appear to be iso-8859-1 ("latin-1" to its friends), nor indeed iso-8859-15 ("latin-9");
- different null markers in different fields, and in some cases, different null markers within a single field;2
- field names (column headers) that included spaces, apostrophes, dashes and (in at least one case) a non-ASCII non-alphanumeric character;
- multiple date formats, even within a single field, including some dates with three-digit years.
All of this is a bit frustrating, but far from unusual, and only one of these problems was actually fatal—the use of unquoted, unescaped separators in values, which makes the file inherently ambiguous. I'm almost sure this data was written but not read or validated, because I don't believe the supplier would have been able to read it reliably either.
In an ideal world, we'd move away from CSV files, but we also need to recognise not only that this probably won't happen, but that the universality, plain-text nature, grokkability and simplicity of CSV files are all strengths; for all that we might gain using fancier, better-specified formats, we would lose quite a lot too, not least the utility of awk, split, grep and friends in many cases.
So if we can't get away from CSV files, how can we increase reliability when using them? Standardizing might be good, but again, this is going to be hard to achieve. What we might be able to do, however, is to work towards a way of specifying flat files that at least allows a receiver of them to know what to expect, or a generator to know what to write. I've been involved with a few such ideas over the years, and the software my company produces (Miró) used its own non-standard, XML-based way of describing flat files.
What I'm thinking about is trying to produce something more general, less opinionated, and more modern (think JSON, rather than XML, for starters) that addresses more issues. The initial goal would be simply descriptive—to allow a metadata file to be created that accurately describes the specific features of a given flat file so that a reader (human or machine) knows how to interpret it. Over time, this might grow into something bigger. I think obvious things to do after the format is created include:
- In my case, getting Miró to accept these in place of3 its current XML-based files when reading (or writing) flat files. (Initially, at least, Miró would not be able to read or write all files that could be specified in this way, but could at least warn the user when it couldn't.)
- Also getting the Python `tdda` library to be able to use this when using CSV files for input (and perhaps also for output).
- Writing an "argument generator" for some of the standard (Python) CSV readers and writers to set the read/write options to be consistent with a given metadata description, and then probably to provide wrapped versions of those readers/writers that can accept a path for a CSV file and a path to a metadata file and use the underlying CSV reader or writer to read or write the file using that specification.
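As a sketch of what such an argument generator might look like in Python, using the standard csv module (the metadata keys here—`separator`, `quote-char`, `escaping`—are hypothetical placeholders, not part of any existing format):

```python
import csv
import io

def csv_reader_args(metadata):
    """Map a (hypothetical) flat-file metadata dict onto the options
    accepted by Python's csv.reader. Only a few illustrative keys are
    handled; a real generator would also cover quoting policy, line
    endings, encodings and so on."""
    args = {}
    if 'separator' in metadata:
        args['delimiter'] = metadata['separator']
    if 'quote-char' in metadata:
        args['quotechar'] = metadata['quote-char']
    if metadata.get('escaping') == 'backslash':
        args['escapechar'] = '\\'
        args['doublequote'] = False
    elif metadata.get('escaping') == 'stutter':
        args['doublequote'] = True
    return args

metadata = {'separator': '|', 'quote-char': '"', 'escaping': 'stutter'}
f = io.StringIO('a|"say ""hi"""\n')
rows = list(csv.reader(f, **csv_reader_args(metadata)))
print(rows)   # [['a', 'say "hi"']]
```

A wrapped reader would then just be a thin function taking a data path and a metadata path and calling `csv.reader` with the generated arguments.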
- Writing (yet another) "smart" reader to try to read any old CSV files (using heuristics) and write out a metadata file that appears to match the data provided. This could not possibly work completely reliably because of all the inherent ambiguity in flat files already alluded to, but an "80%" solution for real-world files should certainly be achievable, as many programs already make a reasonable job of handling arbitrary CSV files.
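For some of the overall properties, Python's standard library already offers a starting point: `csv.Sniffer` can guess the separator, quote character and presence of a header from a sample, which a smart reader could emit as (hypothetical) metadata keys:

```python
import csv
import json

def guess_metadata(sample_text):
    """Heuristically guess a partial metadata description for a CSV
    sample. csv.Sniffer only detects a few properties; encoding, null
    markers and date formats would need further heuristics."""
    sniffer = csv.Sniffer()
    dialect = sniffer.sniff(sample_text)
    return {
        'separator': dialect.delimiter,
        'quote-char': dialect.quotechar,
        'has-header': sniffer.has_header(sample_text),
    }

sample = 'Name;Age\nAlice;34\nBob;27\n'
print(json.dumps(guess_metadata(sample), indent=2))
```

This is very much an "80%" tool in the sense above: `Sniffer` works from a finite list of candidate delimiters and simple type heuristics, and fails on genuinely ambiguous files.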
- Writing a validator to confirm whether a given CSV file is consistent with the specification in the metadata file.
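A minimal sketch of such a validator, checking just field names and per-field types against a hypothetical metadata dictionary (a real one would also check encodings, null markers, date formats and quoting):

```python
import csv
import io

def validate_csv(text, metadata):
    """Check CSV text against a hypothetical metadata description,
    returning a list of problems found (empty means consistent)."""
    checkers = {'int': int, 'real': float, 'string': str}
    problems = []
    reader = csv.reader(io.StringIO(text),
                        delimiter=metadata.get('separator', ','))
    header = next(reader)
    expected = [f['name'] for f in metadata['fields']]
    if header != expected:
        problems.append('header %r != expected %r' % (header, expected))
    for lineno, row in enumerate(reader, start=2):
        for value, field in zip(row, metadata['fields']):
            try:
                checkers[field['type']](value)
            except ValueError:
                problems.append('line %d: %r is not a valid %s for field %s'
                                % (lineno, value, field['type'], field['name']))
    return problems

metadata = {
    'separator': ',',
    'fields': [{'name': 'id', 'type': 'int'},
               {'name': 'price', 'type': 'real'}],
}
print(validate_csv('id,price\n1,2.5\ntwo,3.0\n', metadata))
```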
- Incorporating such a flat-file validator into TDDA so that it can check not only the (semantic) content of a dataset, but also the syntactic/formatting validity of data, confirming that it has been or can be read correctly.4
Together, a smart reader that generates a metadata file for a CSV file (item 4 above) and a validator that validates a CSV file against such a metadata specification (item 5) are very analogous to the current constraint discovery and data verification, respectively, but in the space of CSV files—roughly, "syntactic" conformance—rather than data (or "semantic") correctness.
Miró's Flat File Description format (XMD Files)
Here is an example, from its documentation, of the XMD data files that Miró uses.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<dataformat>
    <sep>,</sep>                       <!-- field separator -->
    <null></null>                      <!-- NULL marker -->
    <quoteChar>"</quoteChar>           <!-- Quotation mark -->
    <encoding>UTF-8</encoding>         <!-- any python coding name -->
    <allowApos>True</allowApos>        <!-- allow apostrophes in strings -->
    <skipHeader>False</skipHeader>     <!-- ignore the first line of file -->
    <pc>False</pc>                     <!-- Convert 1.2% to 0.012 etc. -->
    <excel>False</excel>               <!-- pad short lines with NULLs -->
    <dateFormat>eurodt</dateFormat>    <!-- Miró date format name -->
    <fields>
        <field extname="ext1" name="int1" type="string"/>
        <field extname="ext2" name="int2" type="int"/>
        <field name="int3" type="real"/>
        <field extname="ext4" type="date"/>
        <field extname="ext5" type="date" format="rdt"/>
        <field extname="ext6" type="date" format="rd"/>
        <field extname="ext7" type="bool"/>
    </fields>
    <requireAllFields>False</requireAllFields>
    <banExtraFields>False</banExtraFields>
</dataformat>
```
Three things to note immediately about this:
- I'm not presenting this as the solution: the XMD format is now rather out of vogue and there are a number of things I would definitely do differently fifteen years on (such as using more standard names for types and more standard date format specifiers).
- The XMD format is slightly more than just a flat file description, in that it contains a couple of things that are more about how to interpret and handle the data after reading, rather than simply describing the data.
- The XMD file supports the notion of two different names for a field: `extname` is the name in the CSV file (the external name), while `name` is the name for Miró to use for the field. The semantics of this are slightly complicated, but allow for renaming of fields on import, and for naming of fields where there is no external name, or external names are repeated, or the external name is otherwise unusable by Miró. If the CSV file has a header and each field has a different name in the header, the order of the fields in the XMD file does not matter, but if there are missing or repeated field names, Miró will use the field order in the XMD file.
Notwithstanding the amazing variety seen in CSV files, as illuminated by Jesse Donat's aforementioned blog post, most CSV files from mature systems vary only in the ways covered by a few of the items described in the XMD file. The most important things to know about a flat file overall are normally:
- Encoding. The file encoding—these days, most commonly UTF-8.
- Separator. The separator character—most commonly a comma (`,`), but pipe (`|`), tab and semicolon (`;`) are also frequently used.
- Quoting. What character is used to quote strings (if any). There are quite a number of subtleties here (not all of which can be expressed in the XMD file).
- Missing Values. How are missing values (NULLs) denoted in the file, should there be any?
- Escaping. How are characters "escaped"? This really covers a set of different issues, and the XMD file is not rich enough to cover all possibilities. One aspect is: when strings are quoted, how are quotes in the string handled? The most common answers are either by preceding them with an escape character, usually backslash (`\`):

```
"This is an escaped \" character in a string"
```

or by stuttering:

```
"This is a stuttered "" character in a string"
```

Escaping is also a way of including the separator in non-quoted values, like these display prices:

```
Price,DisplayPrice
100.0,£100.00
1000.0,£1\,000.00
1000000,£1\,000\,000.00
```

Escaping is also a way of specifying some special characters, e.g. `\n` for a newline, `\t` for a tab etc., and as a result, when an actual backslash is required it is self-escaped (as `\\`).
- Row Truncation after the last non-null value. Are rows in which the last value is missing truncated? Like many CSV writers, Excel writes missing values as blanks, so that `1,,3` is read as `1` for the first field, a missing value for the second field and `3` for the third field. More quirkily, when Excel writes out CSV files, if there are n columns and the last m of them on a row are missing, Excel will write out only the non-missing values, and no further separators, so that there will be only n – m values on that line and only n – m – 1 separators. This behaviour is hard to describe and (as far as I know) unique to Excel, so in the XMD file this is simply marked as `excel`.
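Reading such Excel-style files is straightforward once the behaviour is known; here is a minimal sketch (the function name and the treatment of blanks as missing are my own choices, not from any existing library):

```python
import csv
import io

def read_padded(text, nfields, delimiter=','):
    """Read CSV text, padding Excel-style truncated rows with None so
    that every row has nfields values."""
    rows = []
    for row in csv.reader(io.StringIO(text), delimiter=delimiter):
        row = [v if v != '' else None for v in row]       # blanks as missing
        rows.append(row + [None] * (nfields - len(row)))  # pad truncated rows
    return rows

# Excel might write a 4-column sheet whose last two values are missing
# on the second row like this:
text = 'a,b,c,d\n1,2\n'
print(read_padded(text, 4))  # [['a', 'b', 'c', 'd'], ['1', '2', None, None]]
```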
- Header handling. Although the common case is for CSV files to have a single line at the start with the field names, sometimes there is no such line, and sometimes there are multiple lines before the data (one or more of which may specify the field names). As a minimum, a metadata description needs to be able to specify whether there is a header line, and ideally how many such lines there are and how headers should be extracted from them.
If there are no headers, the specification should probably specify the field names. (Miró imaginatively calls the fields `FieldN` if no field names are available in the flat file or any XMD file.)
It's always useful and sometimes necessary to specify field types, and as discussed above, sometimes field names. Typing is almost always ambiguous, and such ambiguity is increased if there are any bad values in the data. Moreover, in some cases (especially dates and timestamps), it is useful to specify the date format. Although good flat-file readers generally make a reasonable job of inferring types, and often date formats too, it is clearly helpful for a metadata specification to include these.
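One way to make per-field date formats concrete is a table of named formats mapped to strptime patterns; the names below echo Miró's (`eurodt`, `rdt`, `rd`) but the patterns are illustrative guesses, not Miró's actual definitions:

```python
from datetime import datetime

# Hypothetical mapping from short metadata format names to strptime
# patterns (the names echo Miró's, but the patterns are guesses).
DATE_FORMATS = {
    'eurodt': '%d/%m/%Y %H:%M:%S',
    'rdt': '%Y-%m-%d %H:%M:%S',
    'rd': '%Y-%m-%d',
}

def parse_date(value, format_name):
    """Parse a date string using a named format from the metadata."""
    return datetime.strptime(value, DATE_FORMATS[format_name])

print(parse_date('2025-03-14', 'rd'))  # 2025-03-14 00:00:00
```

A per-field `format` entry in the metadata would then select the pattern, falling back to a file-level default when absent.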
Just as date formats can vary between fields, other things can vary too, most obviously null indicators (missing value information), quoting and escaping. Moreover, if numeric data is formatted (e.g. including currency indicators, thousand separators etc.) these can all usefully be specified.
The final pair of settings in the XMD file look slightly different from the others, partly because they are phrased as directives rather than descriptions. `requireAllFields`, when set, is a directive to Miró to raise a warning or an error if any of the fields in the XMD file are not present in the CSV file. Similarly, `banExtraFields` is a directive to raise such a warning or error if any fields are found in the CSV file that are not listed in the XMD file. Miró has several ways to specify whether infringements result in warnings or errors.
These directives can, however, be recast as declarations. The `banExtraFields` directive, when true, can equally be thought of as a declaration that the field list is complete. Similarly, the `requireAllFields` directive, when true, can be thought of as a declaration that the field list is not just describing types and formats for fields that might be in the CSV file, but rather that all fields listed are actually in the file.8
In principle, I think it would probably be better if these descriptions were more obviously descriptive or declarative, but I am struggling to find a pair of words/phrases that would capture that elegantly. At this point I am tempted to retain their imperative nature but make them slightly more symmetrical, perhaps with:
"require-all-fields": true, "allow-extra-fields": false
Alternatively a more declarative syntax might be something like:
"csv-file-might-omit-fields": false, "csv-file-might-include-extra-fields": false
The reader might wonder why the fields in the metadata file would ever not correspond exactly to those in the file. In practice, it is not uncommon when dealing with relatively "good" CSV files to write an XMD file that specifies types and formats only for fields that trip up the flat-file reader. Conversely, it can be useful to have XMD files that describe a variety of possible files that share field names and types; in those cases, the extra ones do no harm.
What Might a Metadata File Look Like?
The XMD file gets quite a lot of things right:
- As XML, it's a standard format that's easy to read, though today JSON is clearly more popular for this sort of use. (It would be fairly easy to allow a common format to be expressed in JSON, XML or YAML, but there's something to be said for a single format, probably JSON.)
- All of the most fundamental overall properties are represented—encoding, separator, null marker, escape characters, and date format.
- There's a separation between the overall file properties and the per-field properties, with the ability to specify the actual fieldname in the file, the field type and, in the case of date fields, custom formats on a per-field basis, if necessary.
- It gives enough information to allow Excel-style truncated lines to be read successfully.
There are also a few major shortcomings:
- The single escape-character specification covers multiple things.
- There is no explicit support for quote stuttering (which is fairly common).
- The format does not recognise multiple headers.
- The format does not provide any way to specify non-date field formats such as boolean specifiers, possible thousand separators and decimal point markers.
- The format assumes a single NULL indicator for all fields and assumes that there is only one kind of missing value.
- The date formats supported are not comprehensive and are not expressed in a standard way.
- Type specifiers are also somewhat non-standard.
- XMD files fail to recognize the possibility that null markers are quoted, and implicitly assume that any empty string is distinct from a missing string value. This is probably too opinionated.
Some of these shortcomings reflect the fact that the XMD format was conceived less as a general-purpose flat-file descriptor than as a specification of how Miró should read or write a given flat file, and also a way for Miró to specify how it has written a flat file.
Essentially, I think a good flat-file description format would preserve the good aspects and remedy the faults identified, as well as providing a mechanism for specifying some more esoteric possibilities not mentioned so far.
I'll propose something concrete in subsequent posts.
Sometimes the separator in a flat file is a character other than a comma, and you occasionally see `.tsv` used as an extension when the separator is a tab character, or `.psv` when the separator is a pipe character (`|`). Often, however, a `.csv` extension is still used, and as a result the acronym CSV is sometimes restyled as character-separated values. I had always heard this extension attributed to Microsoft, but have been unable to verify this. ↩
To be fair, the notion of different kinds of missing values is reasonable—missing because it wasn't recorded, missing because it was unreadable, missing because it's an undefined result (e.g. mean of no values) etc. But this wasn't that: it was just multiple ways of denoting generic missing values. ↩
by which, of course, I mean as well as ... ↩
There's an interesting question as to whether the CSV format specification should be incorporated as an optional part of a TDDA file, and if so, whether it should simply be a nested section or whether the field-specific components should be merged with TDDA's field sections. There are pros and cons. ↩
Yes, some systems do this. ↩
I know, madness! But such practices occur! ↩
Miró's slightly extended version of TDDA files includes lists of required and allowed fields, which serve a similar purpose to these settings. ↩