Lately we used LHM for some bigger migrations on a large (22 million rows) database table under high load. During that migration (it took 20 minutes with the default stride size and throttle time), we hit several deadlocks because the app tried to update table rows that were currently being copied in a chunk.
Previously we used the Percona Toolkit, where we didn't have this problem while migrating. After a bit of investigating, we figured out that it does something really smart: it dynamically calculates the size of each chunk being copied so that every chunk stays within a given time (`--chunk-time`). I found a blog article about that here: http://www.xaprb.com/blog/2012/05/06/how-percona-toolkit-divides-tables-into-chunks/.
Of course we can estimate (or try out different values for) an appropriate `stride` value for the throttler, depending on the write load on that table. But this is a bit cumbersome and has to be done for every table we want to migrate.
So I would suggest updating the throttler (or creating a dynamic throttler) so that it calculates the size of the next chunk to copy based on the runtime of the last one; see the sketch below.
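To make the idea concrete, here's a minimal sketch of what such a dynamic throttler could look like, roughly along the lines described in the linked article: target a fixed time per chunk and derive the stride from a weighted average of the observed copy rate. This is not LHM's actual throttler API; the class name, the `after_chunk(duration)` hook, and all parameter names are hypothetical, and it assumes the chunker reads `stride` before each copy and reports back how long the copy took.

```ruby
# Hypothetical sketch, not LHM's real throttler interface.
# Assumption: the chunker reads `stride` before each chunk and calls
# `after_chunk(duration)` with the wall-clock seconds the copy took.
class DynamicThrottler
  attr_reader :stride

  def initialize(target_chunk_time: 0.5, initial_stride: 2_000,
                 weight: 0.75, min_stride: 1, max_stride: 100_000)
    @target     = target_chunk_time # seconds each chunk should take
    @stride     = initial_stride    # rows copied per chunk
    @weight     = weight            # how strongly the newest sample counts
    @rate       = nil               # smoothed copy rate in rows/second
    @min_stride = min_stride
    @max_stride = max_stride
  end

  # Recompute the stride from the duration of the chunk just copied.
  def after_chunk(duration)
    return if duration <= 0

    sample = @stride / duration.to_f
    # Exponentially weighted moving average, so a single slow chunk
    # (e.g. during a load spike) doesn't collapse the stride to nothing.
    @rate = @rate ? (@weight * sample + (1 - @weight) * @rate) : sample
    @stride = (@rate * @target).round.clamp(@min_stride, @max_stride)
  end
end
```

With a target of, say, 0.5 seconds per chunk, the stride would shrink automatically while the table is under heavy write load (copies slow down, so fewer rows are taken per chunk and locks are held more briefly) and grow again when the load drops, which is exactly the per-table tuning we currently have to do by hand.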
What do you think?