There is a foreseeable problem with our little text database: it is too sluggish to compare tens of thousands of records at substring level. This calls for true binary database tables.
I added binary tables alongside the text tables. The database now consists of the tables "export", "export_CSV", "SlutBunnies" and "SlutBunnies_CSV". The _CSV versions are linked to the text files.
Then I ran into a stability problem which is typical for an embedded HSQLDB doing mass updates. So I converted the embedded HSQLDB into a stand-alone database, which is what everybody should do with a production database that may become very large. No big deal.
---------------------------
I added some macro code and two forms with two push buttons each.
The first form makes it easy to replace the csv files. Push one of the buttons, pick whichever csv file happens to be the new "export" or "SlutBunnies" file, and it will be copied into the database directory, replacing any existing csv file. This file replacement macro closes the database connection.
This macro does the same thing you used to do by hand: close the office suite and copy the right file to the right place.
When you open the second form, your database connection is re-opened with the new text tables in place, and you see another pair of push buttons. Each of them executes a sequence of two SQL statements. The first statement deletes all data from the respective SlutBunnies or export table; the second statement inserts all records from the respective _CSV table. In other words, the old data are wiped out and replaced with the content of the current csv. This is fast and easy. (It would also be possible to keep old records that are not in the csv.)
Without macro code, you can achieve the same manually:
Open the database with the right csv files in place.
menu:Tools>SQL...
DELETE FROM "SlutBunnies";
DELETE FROM "export";
[Execute...]
menu:View>Refresh Tables
[or delete all rows in the open grid]
Copy "export_CSV" to "export"
Copy the view "vSlutBunny_Import" to "SlutBunnies".
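The copy steps above can also be done entirely in SQL under menu:Tools>SQL instead of via the clipboard. A sketch, assuming the column lists of the source and target tables line up (adjust the column lists to the actual table definitions if they do not):

```sql
-- Wipe the binary tables, then reload them from the current csv text tables.
DELETE FROM "SlutBunnies";
DELETE FROM "export";
-- SELECT * only works when column order and types match the target table;
-- otherwise spell out explicit column lists on both sides.
INSERT INTO "export" SELECT * FROM "export_CSV";
INSERT INTO "SlutBunnies" SELECT * FROM "vSlutBunny_Import";
```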
The binary table SlutBunnies differs a little from SlutBunnies_CSV: I appended a "quantity" column which is filled automatically when the csv records are inserted. The view includes the quantity column. Since Base cannot edit existing database views, I keep a query "qSlutBunny_Import" which can be edited easily. You can then delete the view and rebuild it from the modified query.
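To illustrate how such a view could derive the quantity, here is a hypothetical sketch that counts duplicate csv rows per item. The column names and the grouping are assumptions; the real "qSlutBunny_Import" in the package may compute the quantity differently.

```sql
-- Hypothetical sketch only: derive "quantity" by counting duplicate rows;
-- column names are assumed, the shipped "qSlutBunny_Import" may differ.
CREATE VIEW "vSlutBunny_Import" AS
SELECT "listing_id", "Artist", "Title",
       COUNT(*) AS "quantity"
FROM "SlutBunnies_CSV"
GROUP BY "listing_id", "Artist", "Title";

-- After editing the underlying query, drop and recreate the view:
-- DROP VIEW "vSlutBunny_Import";
```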
----------------------
cul wrote:3. I still need to compare the qQuanity "table" to the export table to find the mismatches and print only the mismatches using the export.ID as the primary key
See query qQuantDiff. I even added a report based on that query.
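For readers without the package at hand, a quantity-mismatch query of this kind boils down to a join on the ID plus an inequality filter. A sketch, with the join column and quantity column names assumed; the actual "qQuantDiff" may differ in its details:

```sql
-- Hypothetical sketch of a quantity-mismatch query; names are assumptions.
SELECT e."ID", e."Quantity" AS "export_qty", s."quantity" AS "csv_qty"
FROM "export" e
JOIN "SlutBunnies" s ON s."listing_id" = e."ID"
WHERE e."Quantity" <> s."quantity";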
cul wrote:4. I need to do the same thing you've done with quantity for the name field - pull in the data to a query and then CAST? it to concatenate "Artist - Title" with all instances of an integer within brackets in the Artist field removed.
I would remove that during the csv import, analogous to the added quantity column. There is a query "qNumBrackets" which finds 2 such records, with listing_id 435996244 and 519905533, both having "Remixed [1]". Do you mean these records?
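This kind of cleanup is exactly why I bundled the hsqldb 2.5.0 driver: it supports regular expression functions. A sketch using REGEXP_REPLACE to strip bracketed integers, with the column name assumed:

```sql
-- Strip bracketed integers such as " [1]" from the artist name.
-- REGEXP_REPLACE takes a Java regular expression; "Artist" is an assumed name.
SELECT REGEXP_REPLACE("Artist", ' *\[\d+\]', '') AS "CleanArtist"
FROM "SlutBunnies_CSV";
```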
cul wrote:5. Compare this to the export table and print only the mismatches using the export.ID as the primary key
qMismatchNames returns 70 records where the concatenation "artist - title" differs from the corresponding export.Name with the matching ID.
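The shape of such a comparison is again a join plus an inequality on the concatenated string. A sketch with assumed column names; the actual "qMismatchNames" may differ:

```sql
-- Hypothetical sketch of the name-mismatch comparison; names are assumptions.
SELECT e."ID", e."Name",
       s."Artist" || ' - ' || s."Title" AS "Concat"
FROM "export" e
JOIN "SlutBunnies" s ON s."listing_id" = e."ID"
WHERE e."Name" <> s."Artist" || ' - ' || s."Title";
```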
The match/mismatch queries are trivial, far simpler than anything you would be forced into when using a spreadsheet.
Installation Instructions
The whole package in a zip:
https://www.mediafire.com/file/aiwrst4b ... s.zip/file
The contained database document includes a macro which connects the document with the external database when you open the document. All you need to do is extract everything into a "trusted directory" according to Tools>Options>Security>[Macro Security...]>tab "Trusted Sources". Do NOT turn off macro security; just set up some trusted directories for macro-contaminated stuff in your documents folder (no, "Downloads" is a bad candidate because that folder gets anything from anywhere). Call the file picker form and import your csv files. Then call the sync form and sync the binary tables with the csv tables.
The database package is small because I removed all binary data and did not include the csv files. However, there is one large file which makes the whole thing too big to upload here: the database driver hsqldb.jar, version 2.5.0. I added this driver because it supports regular expressions for non-trivial matching. The driver shipped with your office suite is 13 years old.
Side note: I find it a little confusing to deal with "export" and "export.csv", since somebody's export is our import. We do import these records into our database, don't we?