I am working on an app that requires several very large MyISAM tables to be rebuilt daily (mainly search tables, which are rebuilt from the latest industry and site data). The queries used to rebuild these tables are extremely heavy and unsuitable for running against the main production database.
We have a separate server specifically for generating these tables, and we use master-slave replication to keep an up-to-date copy of the production database on that processing server, which the rebuild queries draw on. My question is: once these tables are built, what is the best way of transferring them into the production database (effectively dropping each old table and replacing it with the new one)?
We have previously just copied the binary data and index files (.MYD/.MYI) between servers and moved them into place. However, this doesn't play nicely with replication.
What other methods are there for this task? Would master-master replication between the production and processing database be any use?
How about building the new table under a separate name, e.g. "newfoo"? Let it take as long as it needs to generate, and when you are ready to swap it in, simply rename it:
drop table if exists oldfoo; rename table foo to oldfoo, newfoo to foo;
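Spelled out, the full cycle might look like the following sketch — `foo` stands for your real table, and `rebuild_source` is a placeholder for whatever the heavy rebuild query actually reads from:

```sql
-- Build the replacement off to the side; the live table foo stays available.
CREATE TABLE newfoo LIKE foo;

-- Populate newfoo with the heavy rebuild query
-- (rebuild_source is an illustrative placeholder).
INSERT INTO newfoo
SELECT * FROM rebuild_source;

-- Swap: RENAME TABLE performs both renames in one atomic operation,
-- so readers never observe a moment where foo is missing.
DROP TABLE IF EXISTS oldfoo;
RENAME TABLE foo TO oldfoo, newfoo TO foo;

-- Optionally reclaim space once the swap is confirmed good.
DROP TABLE oldfoo;
```

Because the multi-table RENAME is atomic, this is safe to run while the production application is serving queries against `foo`.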
I had the same issue a long time ago, and this is what I did: use two tables, one for even days and the other for odd days.
In application code or a stored procedure you can then choose which table to insert into that day by taking the day of the year modulo two and checking the result.
So each day you insert into one table and process (rebuild) the other, and the next day they swap roles.
This is the code we use to select the insert table:
<?php
// Choose today's insert table. date('z') is the zero-based day of the year,
// so odd days select "table_even" and even days select "table_odd".
$day_of_year  = (int) date('z');
$table_suffix = $day_of_year % 2 ? "even" : "odd";
$insert_on    = "table_" . $table_suffix;
?>
And this selects the table to process:
<?php
// Choose the table to process. Note only this little change: the ternary
// branches are swapped, so this always picks the *other* table.
$day_of_year   = (int) date('z');
$table_suffix  = $day_of_year % 2 ? "odd" : "even";
$process_table = "table_" . $table_suffix;
?>
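The daily rebuild then targets whichever table $process_table resolves to. A minimal sketch of that step, assuming the table names above and with rebuild_source as a placeholder for the real (heavy) rebuild query:

```sql
-- On an odd day the app is inserting into table_even, so rebuild table_odd:
-- empty it, then repopulate it from scratch (rebuild_source is illustrative).
TRUNCATE TABLE table_odd;
INSERT INTO table_odd
SELECT * FROM rebuild_source;
```

Since the application only reads and writes the other table that day, the rebuild can take as long as it needs without disturbing production traffic.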