No Budibase Support For CSV Data Relationships
I want to migrate my personal finance tracking from a spreadsheet to a database. I knew my spreadsheet had accumulated errors over the years, but I didn't realize how big a mess it was until I tried to perform database normalization. I brought up a Jupyter notebook to iterate quickly through this side quest: it handled all the repetitive fixes, applied some search/replace rules to resolve inconsistencies, and split the data into multiple tables per normalization rules. While slicing the data up into multiple tables, my code was also responsible for generating unique identification numbers in each table (primary keys) to be used in references from other tables (foreign keys).
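To illustrate that key-generation step, here is a minimal sketch of the pattern, not my actual notebook code. The column names ("date", "category", "amount") are hypothetical placeholders:

```python
# Hypothetical flat rows as exported from the spreadsheet.
transactions = [
    {"date": "2023-01-05", "category": "Groceries", "amount": "42.10"},
    {"date": "2023-01-06", "category": "Utilities", "amount": "80.00"},
    {"date": "2023-01-09", "category": "Groceries", "amount": "17.35"},
]

# Build a lookup table of unique categories, assigning each one a
# sequential integer to serve as its primary key.
category_ids = {}
for row in transactions:
    name = row["category"]
    if name not in category_ids:
        category_ids[name] = len(category_ids) + 1
categories_table = [
    {"id": cid, "name": name} for name, cid in category_ids.items()
]

# Rewrite transactions to reference categories by foreign key
# instead of repeating the category text in every row.
transactions_table = [
    {
        "date": row["date"],
        "category_id": category_ids[row["category"]],
        "amount": row["amount"],
    }
    for row in transactions
]
```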
I had used Python's standard csv (comma-separated value) library to read data exported from Excel into dictionary objects. I performed my error correction and data normalization on those Python dictionaries, and at the end I wrote the processed dictionaries back out as CSV files, one file per dictionary, each representing one database table. I thought I was in good shape as I went into Budibase's data import menu to upload these CSV files into the Budibase built-in database. The file uploads were uneventful, but then I tried to define relationships between those tables and got stuck. It seems Budibase does not allow relationships to be defined for tables of data uploaded from CSV files, which feels like an odd oversight.
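The CSV round trip itself is the easy part. A minimal sketch of that read/write pattern, with hypothetical file names standing in for mine; the actual cleanup logic lives in the middle:

```python
import csv

# Read the Excel-exported CSV into a list of dictionaries, one per row.
with open("export_from_excel.csv", newline="") as infile:
    rows = list(csv.DictReader(infile))

# ... error correction and normalization happen here ...

# Write one processed table back out, one CSV file per database table.
with open("transactions.csv", "w", newline="") as outfile:
    writer = csv.DictWriter(outfile, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```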
Defining relationships between tables is definitely supported in Budibase. Almost every supported data source type has a "Define existing relationships" section in its documentation: one for MySQL/MariaDB, one for Oracle, another for Microsoft SQL Server, and one for PostgreSQL. There is no such section for CSV import, but I had thought that was merely a documentation omission. Surely there is a way to define existing relationships between tables! But the option is absent from the Budibase user interface, so it wasn't a documentation omission after all. I probably should have tested this assumption earlier, before I put in all the work for data cleanup and normalization. Now I have to figure out some other way forward. Next candidate: JSON data import.
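If JSON import pans out, the export side should be trivial, since the data already lives in Python dictionaries. A hedged sketch under that assumption, reusing the hypothetical categories table from earlier (I have not yet confirmed what JSON structure Budibase expects, or whether relationships fare any better this way):

```python
import json

# Hypothetical normalized table from the earlier sketch.
categories_table = [
    {"id": 1, "name": "Groceries"},
    {"id": 2, "name": "Utilities"},
]

# Write the table out as JSON instead of CSV. Whether Budibase's
# JSON import allows relationships to be defined afterwards is the
# open question this export alone does not answer.
with open("categories.json", "w") as outfile:
    json.dump(categories_table, outfile, indent=2)
```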