
    Step Two: Export Data from MySQL to .csv (Comma-separated values) files

    Info
    Estimated time for data export

    The time the export takes depends on many parameters, such as your supervisor system's hardware (especially the speed of the database and export destination disks), the number of existing jobs, and the number of frames (aka agenda items) per job. For reference, on a Mac system with an SSD, every 256 jobs, each with 100 frames, took about 6 seconds to process; that adds up to about 4 minutes for 10,000 jobs.


    In this step, you'll be using a script that we provide to dump MySQL data to .csv files into a folder on disk.

    1. Make sure that the MySQL server is running, that you can connect to it using the mysql client, and that your qube table version is 37.

      1. If you haven't changed the database administrator user and password, you should be able to run the following on a command prompt to confirm that the MySQL server is running:

        Code Block
        Linux: /usr/bin/mysql -u root -e 'SELECT * FROM qube.tableversion' 
        
        Mac: /usr/local/mysql/bin/mysql -u root -e 'SELECT * FROM qube.tableversion' 
        
        Windows: "C:\Program Files\pfx\qube\mysql\bin\mysql" -u root -e "SELECT * FROM qube.tableversion" 
      2. Make sure that the above command works and returns:

        Code Block
        +---------+
        | version |
        +---------+
        | 37      |
        +---------+
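
        If you have set a password for the MySQL root user, add the standard -p option so that mysql prompts for it (Linux path shown; substitute the Mac or Windows path from above):

        Code Block
        /usr/bin/mysql -u root -p -e 'SELECT * FROM qube.tableversion'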
    2. If the above command returns a value less than 37, your current Qube supervisor version is older than 6.9-2, and you need to update your MySQL database tables before you can upgrade the supervisor. To do so:

      1. Download the "upgrade_supervisor" program suitable for your supervisor platform from http://repo.pipelinefx.com/downloads/pub/db_migration_tools/

      2. On a command prompt, run the upgrade_supervisor program that you just downloaded (a minimal sketch of this sub-procedure follows the list).

      3. Check that there weren't any critical errors reported by upgrade_supervisor.
      4. Check that the version is now indeed updated to 37 by running the mysql -u root -e 'SELECT * FROM qube.tableversion' command again.
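
      A minimal sketch of that sub-procedure on Linux, assuming the downloaded file is named upgrade_supervisor and still needs to be made executable (adjust the mysql path for Mac or Windows as in step 1):

      Code Block
      # make the downloaded migration tool executable, then run it
      chmod +x ./upgrade_supervisor
      sudo ./upgrade_supervisor
      # confirm the table version is now 37
      /usr/bin/mysql -u root -e 'SELECT * FROM qube.tableversion'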
    3. Choose a destination folder on your supervisor for the MySQL csv files. Make sure that your user and the mysql server process both have write permission to this folder and can traverse (have execute permission on) all its parent folders, and that the volume is sufficiently large. Also note that a faster disk, such as an SSD, will help speed up the export/import process.

      Tip
      On CentOS 7.x (and possibly other Linux distros), create a working directory under /opt and do the export while running as the root user


      Do NOT use /tmp, /var/tmp (/usr/tmp), or any subdirectories under them. These OSs give the MySQL service its own private /tmp and /var/tmp folders, which prevents the mysqldump command from running correctly. Creating a subdirectory under /root does not work either, nor will a subdirectory in any user's home directory, since non-root users' home directories are usually mode 700, so the MariaDB server can't access them.
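
      On systemd-based distros, this is most likely due to the PrivateTmp setting in the database service unit; assuming the service is named mariadb, you can confirm with:

      No Format
      systemctl cat mariadb.service | grep PrivateTmp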

      One approach that does work is creating a directory under /opt and opening up the permissions:

      No Format
      sudo mkdir -p /opt/mysql_dump 
      sudo chmod 755 /opt/mysql_dump

      Then, install the export_data_from_mysql.py script from the next step into this directory as the root user, and run the export script as root.
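
      For example, one way to install and run the script as root, assuming curl is available (you can equally download it with a browser and copy it into place):

      No Format
      cd /opt/mysql_dump
      sudo curl -O http://repo.pipelinefx.com/downloads/pub/db_migration_tools/export_data_from_mysql.py
      sudo python export_data_from_mysql.py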

    4. Download the export_data_from_mysql.py script from http://repo.pipelinefx.com/downloads/pub/db_migration_tools/ and copy it into the destination folder.
    5. On a command prompt, go to the destination folder, and run export_data_from_mysql.py. Running it without any argument will create a subfolder in the current directory named "qube_mysqldump" and dump all files into it.

      Code Block
      python export_data_from_mysql.py
      1. You may override the dump subfolder, the DB username and password, and the MySQL install location. Run "export_data_from_mysql.py -h" to see the list of options.

    6. Sit back. This process can take a long time to complete, depending on how many jobs you have on the system. 

    7. Once the process completes, make sure there were no errors reported on the terminal. Also have a look at the dump directory to confirm that there is a subfolder "qube" and a bunch of subfolders like "<number>qube" (see the example below).

    8. Take note of the dump directory location, and proceed to the next step, "Upgrade the Supervisor".
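
      For example, a quick way to eyeball the dump directory from the parent folder (assuming the default subfolder name):

      Code Block
      # list the dump folder; expect a "qube" subfolder plus the "<number>qube" subfolders
      ls qube_mysqldump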

    Step Three: Upgrade the Supervisor

    Proceed with the upgrade of the supervisor software. Using the QubeInstaller is recommended, but you can also run the individual installer packages (RPMs, DEBs, MSIs, or PKGs), should you choose. See Upgrading Qube! v7.0-0 for details, and come back here after upgrading the supervisor software.


    Step Four: Import Data into PostgreSQL from the .csv files

    Note

    Do this before making any change to your farm, or submitting new jobs.

    Info
    Estimated time for data import

    We have found that the import takes roughly 1/4 of the time for the export. Importing 10,000 jobs with 100 frames on average on a Mac system with an SSD took about 33 seconds.

    Importing the previously exported data

    Once you have upgraded the supervisor, you are ready to import data into the new PostgreSQL server. Do this before making any changes to your farm, or submitting new jobs.

    1. Make sure that the PostgreSQL server is running and accepting connections:

      Code Block
      Linux: /usr/local/pfx/pgsql/bin/psql -p 50055 -d pfx -U qube -c "SELECT * FROM qube.tableversion"
      
      Mac: /Applications/pfx/pgsql/bin/psql -p 50055 -d pfx -U qube -c "SELECT * FROM qube.tableversion"
      
      Windows: "C:\Program Files\pfx\pgsql\bin\psql" -p 50055 -d pfx -U qube -c "SELECT * FROM qube.tableversion"

      Note that this should return:

      Code Block
      version 
      ---------
            51
      (1 row)
    2. On a command prompt, go to the folder where you ran the export script earlier. This should be the parent folder of the "qube_mysqldump" folder, by default.
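      For example, if you used the /opt/mysql_dump directory suggested in the tip above:

      Code Block
      cd /opt/mysql_dump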
    3. Run the import_data_into_pgsql.py script to import data from the csv files that were generated earlier.

      Code Block
      Linux: python /usr/local/pfx/qube/utils/pgsql/import_data_into_pgsql.py
      
      Mac: python /Applications/pfx/qube/utils/pgsql/import_data_into_pgsql.py
      
      Windows: python "C:\Program Files\pfx\qube\utils\pgsql\import_data_into_pgsql.py"
    4. Sit back. This process will also take some time to complete, although it should be significantly faster than the export.
    5. Make sure there weren't any errors reported on the terminal.
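
      As an additional sanity check, you can list the jobs known to the upgraded supervisor and confirm that your previously submitted jobs survived the migration (this assumes the Qube client tools, including the qbjobs job-listing command, are on your PATH):

      Code Block
      qbjobs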


    Step Five: Enable Activities on the New Supervisor

    Run the following commands to enable the new supervisor to accept new jobs and start dispatching jobs to workers:

    Code Block
    qbadmin supervisor --unset stop_activity
    qbadmin supervisor --unset reject_submit

    You'll also need to unlock the workers you want to start using again. If you'd like to unlock all workers, then do:

    Code Block
    qbunlock --all
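
    If you'd rather bring workers back online selectively, you can pass qbunlock specific worker hostnames instead of --all (the hostnames below are placeholders):

    Code Block
    qbunlock worker01 worker02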

     

    Congratulations, you are done. Enjoy the new ride!