Maybe use a real database for that? I’m a fan of simple tools (e.g. plaintext) for simple use cases, but please use appropriate tools.
What is wrong with a file for this? It sounds more like local log or debug output that a single thread in a single process would be creating. A file is fine for high-volume, append-only data like this. The only big issue is the format of that data.
What benefit would a database bring here?
I think SQLite is a great middle ground. It keeps the whole database in a single .db file and can do everything you would expect from an SQL database. Querying the data is a lot more flexible and a lot faster, and the tools for manipulating it in any way you want are very good and very robust.
However, I’m not sure how it would affect file size. It might be smaller, because JSON/YAML wastes a lot of characters on redundant information (repeated field names) and on storing numbers as text, which the database would store as binary data in a defined structure. On the other hand, extra space goes into indexes and other structures that make common SQL operations much faster. I don’t know which effect is greater, so the file size could end up bigger or smaller.
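For a sense of what that looks like, here is a minimal sketch using Python’s built-in sqlite3 module - the column names are just my guess at what an eye-tracking sample might contain:

```python
import sqlite3

# Minimal sketch - column names are made up, adjust to the real sample fields.
conn = sqlite3.connect("gaze.db")   # the whole database is this one file
conn.execute("""
    CREATE TABLE IF NOT EXISTS samples (
        ts     REAL,   -- timestamp in seconds
        gaze_x REAL,
        gaze_y REAL,
        pupil  REAL
    )
""")

# REAL columns are stored as 8-byte floats, not as text
conn.execute("INSERT INTO samples VALUES (?, ?, ?, ?)", (0.016, 512.3, 384.9, 3.1))
conn.commit()
conn.close()
```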
SQLite would definitely be smaller, faster, and require less memory.
Thing is, it’s 2025, roughly 20 years since anybody’s given half a shit about storage efficiency, memory efficiency, or even CPU efficiency for anything so small. Presumably this is not something they need to query dynamically.
True (in most contexts, probably including this one), but I think that only makes the case for SQLite stronger. What people do still care about is a flexible, usable and reliable interface, and I’m not sure how to get that with YAML.
YAML is not a good format for this. But any line-based or streamable format would be good enough for log data like this. It is really easy to parse with any language or even directly with shell scripts. No need to even know SQL; any text processing would work fine.
I didn’t look too much at the data, but I think CSV might actually be an appropriate format for this?
Nice simple plaintext, and very easy to parse into a data structure for analysing/using it in Python or similar.
CSV would be fine. The big problem with the data as presented is that it is a single YAML list, so the whole file needs to be read into memory and decoded before you get any values out of it. Any line-based encoding would be vastly better and allow line-based processing: CSV, JSON objects encoded one per line, or some other streamable binary format. It does not make much difference overall, as long as it is line-based or at least streamable.
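Something like this, for example, processes a CSV export record by record without ever holding the whole file in memory (the column name is an assumption):

```python
import csv

# Sketch: stream a CSV export line by line, no full load into memory.
# "gaze_x" is an assumed column name.
def mean_gaze_x(path):
    total = count = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):   # one record per line, decoded as it is read
            total += float(row["gaze_x"])
            count += 1
    return total / count if count else 0.0

print(mean_gaze_x("session.csv"))
```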
Smaller file size, lower data rate, less computational overhead, no conversion loss.
A 64-bit float requires 64 bits to store.
The ASCII representation of a 64-bit float (in the example above) is 21 characters, or 168 bits.
Also, if every record has the same fields, then there is a huge overhead in storing the name of each value in every record, plus the extra spaces, commas and braces.
So you are at least doubling the file size and data throughput. And there can be precision loss when converting float → string → float if not enough digits are printed, plus the computational overhead of doing those conversions.
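To put rough numbers on that, a quick Python check (the value is arbitrary):

```python
import struct

x = 0.123456789012345678          # an arbitrary 64-bit float

packed = struct.pack("d", x)      # native binary: always 8 bytes
as_text = repr(x)                 # text: 17+ characters, before any field names or punctuation

print(len(packed), len(as_text))  # e.g. 8 vs 19
```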
Something like sqlite is lightweight, fast and will store the native data types.
It is widely supported, and allows for easy querying of the data.
Also makes it easy for 3rd party programs to interact with the data.
If you are ever thinking of implementing some sort of data storage in files, consider sqlite first.
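As a rough sketch of the querying part (assuming a samples table with REAL columns like ts, gaze_x and gaze_y):

```python
import sqlite3

conn = sqlite3.connect("gaze.db")

# e.g. average gaze position over a time window, computed inside the database
row = conn.execute("""
    SELECT AVG(gaze_x), AVG(gaze_y)
    FROM samples
    WHERE ts BETWEEN ? AND ?
""", (10.0, 20.0)).fetchone()

print(row)
conn.close()
```

The same .db file also opens in the sqlite3 command-line shell or any DB browser, which covers the 3rd-party tooling point.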
Never said it had to be a text file. There are many binary serialization formats that could be used. But in a lot of situations, the overhead you save is not worth the debugging effort of working with binary data. For something like this, which is likely not going to be more than a GB or so (probably much less), it really does not matter that much whether you use a binary or a text format. This is an export format that will likely just have one batch-processing layer on top of it. This type of thing is generally easiest for more people to work with in a plain text format. If you really need efficient querying of the data, then it is trivial and quick to load it into a DB of your choice rather than being stuck with sqlite.
It’s used to export tracking data to analyze later on. Something like SQLite seems like a much better choice to me.
Because this is not log or debug data, as OP said. In any case, what do you think would happen with this data? It will be analyzed by some sort of tool, because no one could manually look at this much text data. In text, this can be something like 1 MB of data per second, so a normal eye-tracking session is probably hundreds of MB. The problem isn’t the storage space, but the time it will take to read that in and analyze it each time, forcing you to wait for processing or use lots of memory while reading it. And anyway, in most languages it’s actually much easier to store the number values directly (in 8 bytes, not the 30-something this text representation uses) than to convert them to JSON; all languages have some built-in way to do that. And even if not, sqlite is piss-easy and does everything for you, being as simple as JSON.
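To show what I mean by a built-in way (Python’s array module here, the values are made up):

```python
from array import array

# Dump doubles straight to disk, 8 bytes each - no text conversion at all.
samples = array("d", [0.016, 512.3, 384.9, 3.1,    # made-up ts, x, y, pupil
                      0.033, 513.1, 385.2, 3.2])

with open("session.bin", "wb") as f:
    samples.tofile(f)                 # writes len(samples) * 8 bytes

back = array("d")
with open("session.bin", "rb") as f:
    back.fromfile(f, len(samples))    # reads the exact same floats back, bit for bit
```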
There is just no reason to do it like that unless you just don’t think about what you’re doing or have no clue.
That is essentially log data, or at least equivalent to it. Log data does not have to be human readable; it is just a series of events that happen over time. Most log data, even what you would think of as traditional messages from a program, is not parsed by humans manually but analyzed by code later on. It is really not that hard or that slow to process log data line by line. I have done this with TBs of data before, which does require a lot more effort. A simple file like this would take seconds to process at most, even if you were not very efficient about it. I also never said it needed to be stored as text, just that a simple file is enough - no need for a full database. That file could be binary if you really need it to be, but text serialization would also be good enough. Most of the web world is processed via text serialization.
The biggest problem with YAML as in OP is the need to decode the whole file at once, since it is a single list. Line-by-line processing would be a lot easier to work with. But even then, if it is only a few hundred MBs, loading it all into memory once and analyzing it there would not take long at all - it just does not scale very well.
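As an example of the line-by-line version (one JSON object per line; the field name is made up):

```python
import json

# Sketch: one JSON object per line means memory use stays flat
# no matter how long the session was.
def max_pupil(path):
    best = float("-inf")
    with open(path) as f:            # the file is read lazily, line by line
        for line in f:
            sample = json.loads(line)
            best = max(best, sample["pupil"])
    return best

print(max_pupil("session.jsonl"))
```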
Some ordering of the CSV data, if it weren’t a log file, which I didn’t know.