Importing database very slow












I am setting up a new environment and want to migrate my database. To do that, I take a dump of the current DB and try to import it into the appcloud MariaDB server. This is the dump-creation command:



mysqldump --extended-insert=FALSE --skip-add-locks --skip-lock-tables --no-autocommit --protocol=TCP -P 13000 --user=XXX --password=XXX xxx > "dump.sql"


I need single-row INSERT statements because some of the extended-insert lines are too long and cause errors. To import the dump, I use the following command:



mysql --init-command="SET AUTOCOMMIT = 0;" --protocol TCP --port 13000  --host=127.0.0.1 --user=XXX --password=XXX --show-warnings xxx < dump.sql


Pretty soon the import fails with: ERROR 2006 (HY000) at line 3805: MySQL server has gone away



The dump is 1.2 GB, so I have tried splitting it by table into smaller files. That takes a really long time, and for some of the files I still get the error above.
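The per-table split can at least be scripted rather than done by hand. A minimal sketch, assuming the dump contains mysqldump's standard ``-- Table structure for table `name` `` marker comments (the tiny dump created below is just a stand-in for the real 1.2 GB file):

```shell
# stand-in for the real dump; only the marker lines matter for the split
printf '%s\n' \
  '-- Table structure for table `users`' \
  'CREATE TABLE `users` (id INT);' \
  '-- Table structure for table `orders`' \
  'CREATE TABLE `orders` (id INT);' > dump.sql

# start a new output file at every table-structure marker,
# then send each subsequent line to the current file
awk '/^-- Table structure for table/ {
       gsub(/`/, "", $NF); out = "table_" $NF ".sql"
     }
     out { print > out }' dump.sql

ls table_*.sql   # one file per table
```

Note this sketch drops the dump header (SET NAMES, etc.) before the first marker; for a real import that header would need to be prepended to each piece.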



This process is really long and tedious. Is there any way to speed up the import, or a process better suited to large dump files? At the moment, even with a not fully successful migration, it takes two days to push all the data.
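One common way to speed up the load itself (a sketch, not specific to appcloud: it assumes an InnoDB target and a self-consistent dump, since integrity checks are deferred until the end) is to wrap the dump so uniqueness and foreign-key checks are switched off for the duration of the import:

```shell
# stand-in for the real 1.2 GB dump
echo 'INSERT INTO `t` VALUES (1);' > dump.sql

# wrap the dump: disable checks up front, re-enable and commit at the end
{ echo 'SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;'
  cat dump.sql
  echo 'SET foreign_key_checks=1; SET unique_checks=1; COMMIT;'
} > wrapped.sql

# then import as before:
#   mysql --protocol TCP --port 13000 --host=127.0.0.1 \
#         --user=XXX --password=XXX xxx < wrapped.sql
```

With single-row INSERTs, skipping the per-row unique/FK checks and the per-statement commit is usually where most of the time goes.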










  • stackoverflow.com/q/10474922/1135424 check the answers, probably could give you an idea for examples using the max_allowed_packet when importing, like: mysql -h <hostname> -u username -p --max_allowed_packet=1073741824 <databasename> < db.sql

    – nbari
    Jan 18 at 13:12
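Note that the `--max_allowed_packet` flag in the comment only raises the client-side limit; the server's own `max_allowed_packet` must be at least as large, or ERROR 2006 will persist. A sketch of checking and raising it on the running server (assumes a user with the SUPER privilege; the setting reverts on restart unless it is also put in the config file):

```sql
-- check the current server-side limit
SHOW VARIABLES LIKE 'max_allowed_packet';

-- raise it to 1 GiB; new connections pick this up
SET GLOBAL max_allowed_packet = 1073741824;
```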













  • @mikel In your my.cnf or my.ini, thread_cache_size=24 to avoid thread starvation, stop / start services. And here are some useful tips at this url support.tigertech.net/mysql-large-inserts read it more than once, please.

    – Wilson Hauck
    2 days ago
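Putting the comment's suggestions together: the server-side settings live in the MySQL/MariaDB config file. A hedged sketch of a `[mysqld]` section for a bulk load (the values are illustrative, not tuned for any particular machine):

```ini
[mysqld]
max_allowed_packet             = 1G
thread_cache_size              = 24
innodb_buffer_pool_size        = 2G   # size to available RAM
innodb_flush_log_at_trx_commit = 2    # fewer fsyncs; revert to 1 after the import
```

`innodb_flush_log_at_trx_commit = 2` trades a little crash-safety for far fewer disk flushes, so it is worth reverting once the migration is done.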


















Tags: mysql, restore






edited Jan 18 at 13:11 by a_horse_with_no_name















asked Jan 18 at 12:42 by mikel



