Migrate from MySQL to SQLite
#1
I successfully dumped my MySQL database and converted it with this great web page: https://www.rebasedata.com/convert-mysql...ite-online

Then I just recreated all the views in the .db file with SQLiteStudio

The only problem I have is that all my thumbnails and art are gone... is there a way to easily bring them back?

Now... back to my question:

Did I do this correctly, or did I do something wrong that caused all my thumbnails to be gone?
Reply
#2
If the whole conversion is a one-to-one mirror copy in terms of field IDs and all, then it *should* work. But if new IDs were created somewhere, then your database probably got corrupted.

A safer method is to do a full Video Library Export (w/ separate files), and do an import into the local Kodi databases.
Reply
#3
(2018-06-08, 08:06)Klojum Wrote: A safer method is to do a full Video Library Export (w/ separate files), and do an import into the local Kodi databases.
While that may be true for the video library, it is not true for music. Import only adds additional artist and album info to existing db entries; to populate an empty music database you have to scan the music files. Doing that is probably simpler and safer than converting MySQL to SQLite the way the OP has done, but it could be slower, I guess, depending on where the music files are held. Chances are the question was not about the music library, but I just wanted to set the facts out there for others.
Reply
#4
The OP has indeed not made clear whether his database involves videos, music, or both. :)
Reply
#5
It involves movies; however, the db itself is huge... 125-130 MB approx.

To me, doing the full video library export is a no-go, because it might take weeks or even months to get the library back to where it is today...

I opened the db and looked at the art table. It appears some (but not all) of the artwork was saved on a local hard drive; however, most of the artwork comes from internet sources (Amazon, et al.), but those don't seem to be loading either.
Reply
#6
(2018-06-08, 16:25)vtwin0001 Wrote: It involves movies; however, the db itself is huge... 125-130 MB approx.
That size for a database is not huge, it's peanuts. Huge databases are more like tens of gigabytes and beyond. And the larger a database is, the more I would prefer a MySQL database server over a SQLite setup. Large databases are not the problem; the problem is that Kodi does its own sorting routines internally on top of the data retrieval. Low-powered machines especially will suffer the consequences.

BTW, the links to all fanart/thumbs should still be in the local Textures13.db file. Or did you delete/overwrite that file already?

(2018-06-08, 16:25)vtwin0001 Wrote: because it might take weeks or even months to get it back to where it is today...
Months..? Are you still using 5.25" floppy disks? :lol:
How many movies/tv episodes are we talking here?

What is your main reason anyway for going to SQLite for the video metadata? The local thumb caching is already done by fixed local SQLite files.
Reply
#7
(2018-06-08, 18:00)Klojum Wrote:
(2018-06-08, 16:25)vtwin0001 Wrote: It involves movies; however, the db itself is huge... 125-130 MB approx.
That size for a database is not huge, it's peanuts. Huge databases are more like tens of gigabytes and beyond. And the larger a database is, the more I would prefer a MySQL database server over a SQLite setup. Large databases are not the problem; the problem is that Kodi does its own sorting routines internally on top of the data retrieval. Low-powered machines especially will suffer the consequences.

I know, but since we're talking about 1 Windows machine and 2 Mi Boxes, believe me, it's huge for these devices (not the Windows one, though)
(2018-06-08, 18:00)Klojum Wrote: BTW, the links to all fanart/thumbs should still be in the local Textures13.db file. Or did you delete/overwrite that file already?

I haven't done anything to TexturesXX.db; however, it doesn't seem to be doing its job.
(2018-06-08, 18:00)Klojum Wrote:
(2018-06-08, 16:25)vtwin0001 Wrote: because it might take weeks or even months to get it back to where it is today...
Months..? Are you still using 5.25" floppy disks? :lol:
How many movies/tv episodes are we talking here?  

:lol: No, it's just a time issue (my job won't allow me to do this all the time... I have been hoarding data for quite a few years and the database has been building up over the years; hence it's folder 1, then folder 2, etc.)
(2018-06-08, 18:00)Klojum Wrote: What is your main reason anyway for going to SQLite for the video metadata? The local thumb caching is already done by fixed local SQLite files. 
  
I want to turn off that [noisy] Windows machine :D
Reply
#8
I think you misunderstand the export function. It is just a simple button press. My library of 600 movies and 4000 episodes is exported in less than 60 seconds.

It is not the big effort that you seem to think it is. Click import-export link in my signature below.

The TexturesXX.db is linked to the video database. If the links are broken, then you need to rescrape the artwork.
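The linkage can also be inspected directly. Below is a minimal sketch, assuming a `texture` table with `url` and `cachedurl` columns (the actual schema varies by Kodi version, so verify against your own Textures13.db before relying on it):

```python
import sqlite3

def list_cached_art(textures_db, limit=10):
    """Return (url, cachedurl) pairs from a Kodi texture cache database.

    Assumes a `texture` table with `url` and `cachedurl` columns;
    this is an assumption based on recent Textures13.db files, so
    check your own schema first.
    """
    con = sqlite3.connect(textures_db)
    try:
        return con.execute(
            "SELECT url, cachedurl FROM texture LIMIT ?", (limit,)
        ).fetchall()
    finally:
        con.close()
```

If the `url` values point at artwork that no longer resolves (moved files, dead web links), that is exactly the "broken links" case where a rescrape is needed.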
My Signature
Links to : Official:Forum rules (wiki) | Official:Forum rules/Banned add-ons (wiki) | Debug Log (wiki)
Links to : HOW-TO:Create Music Library (wiki) | HOW-TO:Create_Video_Library (wiki)  ||  Artwork (wiki) | Basic controls (wiki) | Import-export library (wiki) | Movie sets (wiki) | Movie universe (wiki) | NFO files (wiki) | Quick start guide (wiki)
Reply
#9
Thanks, Karellen

1- How do I rescrape artwork?

2- I still have the old MySQL database, which works perfectly in Kodi. If I export to a single file, will it work if I import this single file back into Kodi and let it create its own SQLite database? --I'm not changing paths or anything...
Reply
#10
(2018-06-09, 06:56)vtwin0001 Wrote: 1- How do I rescrape artwork?
Depends... but see next answer first

(2018-06-09, 06:56)vtwin0001 Wrote: 2- I still have the old MySQL database, which works perfectly in Kodi. If I export to a single file, will it work if I import this single file back into Kodi and let it create its own SQLite database? --I'm not changing paths or anything...
OK, this is the saving grace. If you can reinstate the original database, then run the Export function with Separate Files. This will place an NFO file next to each movie. The benefit of this is that Kodi does not care where the movie is in the source.

If you use Single File then your paths and filenames must be exactly the same. If you move movies around, or the NAS changes path addresses, then you will end up with a dead library. All the metadata is imported, but movies and TV episodes won't play.

Do you have local artwork?
Reply
#11
(2018-06-09, 01:28)vtwin0001 Wrote: I want to turn off that [noisy] Windows machine :D
I'm not a fan of noise either. I use a fanless setup for that:
a simple Celeron motherboard with a fanless CPU, an SSD for storage, as well as a pico PSU for power. Total silence. :)
Reply
#12
(2018-06-09, 07:14)Karellen Wrote: If you use Single File then your paths and filenames must be exactly the same. If you move movies around, or the NAS changes path addresses, then you will end up with a dead library. All the metadata is imported, but movies and TV episodes won't play.

If I use Single File, I understand it will then add the art inside a new folder, am I right? Will this folder need to be in a specific place, or can I put it on the web?
(2018-06-09, 07:14)Karellen Wrote: Do you have local artwork? 
  
For some movies I do...
(2018-06-09, 10:33)Klojum Wrote:
(2018-06-09, 01:28)vtwin0001 Wrote: I want to turn off that [noisy] Windows machine :D
I'm not a fan of noise either. I use a fanless setup for that:
a simple Celeron motherboard with a fanless CPU, an SSD for storage, as well as a pico PSU for power. Total silence. :)

Thanks for the advice, my friend... as soon as I'm able to upgrade the Windows machine, I will... but anyhow, I don't like running servers... at least I learned that this setup makes everything very dependent on the server... I'm looking forward to using something like Resilio Sync to keep everything up to date, so in case I move the Mi Box from my house to a friend's, I can use it without a hitch :)
Reply
#13
Hey guys... coming back to you with excellent news!

I was able to successfully convert (very easily) the MySQL database to SQLite format

Using this script (on Ubuntu, with MySQL and SQLite installed):

awk:
#!/usr/bin/awk -f

# Authors: @esperlu, @artemyk, @gkuenning, @dumblob

# FIXME detect empty input file and issue a warning

function printerr( s ){ print s | "cat >&2" }

BEGIN {
  if( ARGC != 2 ){
    printerr( \
      "USAGE: mysql2sqlite dump_mysql.sql > dump_sqlite3.sql\n" \
      "       file name - (dash) is not supported, because - means stdin")
    no_END = 1
    exit 1
  }

  # Find INT_MAX supported by both this AWK (usually an ISO C signed int)
  #   and SQLite.
  # On non-8bit-based architectures, the additional bits are safely ignored.

  # 8bit (lower precision should not exist)
  s="127"
  # "63" + 0 avoids potential parser misbehavior
  if( (s + 0) "" == s ){ INT_MAX_HALF = "63" + 0 }
  # 16bit
  s="32767"
  if( (s + 0) "" == s ){ INT_MAX_HALF = "16383" + 0 }
  # 32bit
  s="2147483647"
  if( (s + 0) "" == s ){ INT_MAX_HALF = "1073741823" + 0 }
  # 64bit (as INTEGER in SQLite3)
  s="9223372036854775807"
  if( (s + 0) "" == s ){ INT_MAX_HALF = "4611686018427387904" + 0 }
#  # 128bit
#  s="170141183460469231731687303715884105728"
#  if( (s + 0) "" == s ){ INT_MAX_HALF = "85070591730234615865843651857942052864" + 0 }
#  # 256bit
#  s="57896044618658097711785492504343953926634992332820282019728792003956564819968"
#  if( (s + 0) "" == s ){ INT_MAX_HALF = "28948022309329048855892746252171976963317496166410141009864396001978282409984" + 0 }
#  # 512bit
#  s="6703903964971298549787012499102923063739682910296196688861780721860882015036773488400937149083451713845015929093243025426876941405973284973216824503042048"
#  if( (s + 0) "" == s ){ INT_MAX_HALF = "3351951982485649274893506249551461531869841455148098344430890360930441007518386744200468574541725856922507964546621512713438470702986642486608412251521024" + 0 }
#  # 1024bit
#  s="89884656743115795386465259539451236680898848947115328636715040578866337902750481566354238661203768010560056939935696678829394884407208311246423715319737062188883946712432742638151109800623047059726541476042502884419075341171231440736956555270413618581675255342293149119973622969239858152417678164812112068608"
#  if( (s + 0) "" == s ){ INT_MAX_HALF = "44942328371557897693232629769725618340449424473557664318357520289433168951375240783177119330601884005280028469967848339414697442203604155623211857659868531094441973356216371319075554900311523529863270738021251442209537670585615720368478277635206809290837627671146574559986811484619929076208839082406056034304" + 0 }
#  # higher precision probably not needed

  FS=",$"
  print "PRAGMA synchronous = OFF;"
  print "PRAGMA journal_mode = MEMORY;"
  print "BEGIN TRANSACTION;"
}

# historically 3 spaces separate non-argument local variables
function bit_to_int( str_bit,   powtwo, i, res, bit, overflow ){
  powtwo = 1
  overflow = 0
  # 011101 = 1*2^0 + 0*2^1 + 1*2^2 ...
  for( i = length( str_bit ); i > 0; --i ){
    bit = substr( str_bit, i, 1 )
    if( overflow || ( bit == 1 && res > INT_MAX_HALF ) ){
      printerr( \
        NR ": WARN Bit field overflow, number truncated (LSBs saved, MSBs ignored)." )
      break
    }
    res = res + bit * powtwo
    # no warning here as it might be the last iteration
    if( powtwo > INT_MAX_HALF ){ overflow = 1; continue }
    powtwo = powtwo * 2
  }
  return res
}

# CREATE TRIGGER statements have funny commenting. Remember we are in trigger.
/^\/\*.*(CREATE.*TRIGGER|create.*trigger)/ {
  gsub( /^.*(TRIGGER|trigger)/, "CREATE TRIGGER" )
  print
  inTrigger = 1
  next
}
# The end of CREATE TRIGGER has a stray comment terminator
/(END|end) \*\/;;/ { gsub( /\*\//, "" ); print; inTrigger = 0; next }
# The rest of triggers just get passed through
inTrigger != 0 { print; next }

# CREATE VIEW looks like a TABLE in comments
/^\/\*.*(CREATE.*TABLE|create.*table)/ {
  inView = 1
  next
}
# end of CREATE VIEW
/^(\).*(ENGINE|engine).*\*\/;)/ {
  inView = 0
  next
}
# content of CREATE VIEW
inView != 0 { next }

# skip comments
/^\/\*/ { next }

# skip PARTITION statements
/^ *[(]?(PARTITION|partition) +[^ ]+/ { next }

# print all INSERT lines
( /^ *\(/ && /\) *[,;] *$/ ) || /^(INSERT|insert|REPLACE|replace)/ {
  prev = ""

  # first replace \\ by \_ that mysqldump never generates to deal with
  # sequences like \\n that should be translated into \n, not \<LF>.
  # After we convert all escapes we replace \_ by backslashes.
  gsub( /\\\\/, "\\_" )

  # single quotes are escaped by another single quote
  gsub( /\\'/, "''" )
  gsub( /\\n/, "\n" )
  gsub( /\\r/, "\r" )
  gsub( /\\"/, "\"" )
  gsub( /\\\032/, "\032" )  # substitute char

  gsub( /\\_/, "\\" )

  # sqlite3 is limited to 16 significant digits of precision
  while( match( $0, /0x[0-9a-fA-F]{17}/ ) ){
    hexIssue = 1
    sub( /0x[0-9a-fA-F]+/, substr( $0, RSTART, RLENGTH-1 ), $0 )
  }
  if( hexIssue ){
    printerr( \
      NR ": WARN Hex number trimmed (length longer than 16 chars)." )
    hexIssue = 0
  }
  print
  next
}

# CREATE DATABASE is not supported
/^(CREATE.*DATABASE|create.*database)/ { next }

# print the CREATE line as is and capture the table name
/^(CREATE|create)/ {
  if( $0 ~ /IF NOT EXISTS|if not exists/ || $0 ~ /TEMPORARY|temporary/ ){
    caseIssue = 1
    printerr( \
      NR ": WARN Potential case sensitivity issues with table/column naming\n" \
      "          (see INFO at the end)." )
  }
  if( match( $0, /`[^`]+/ ) ){
    tableName = substr( $0, RSTART+1, RLENGTH-1 )
  }
  aInc = 0
  prev = ""
  firstInTable = 1
  print
  next
}

# Replace `FULLTEXT KEY` (probably other `XXXXX KEY`)
/^  (FULLTEXT KEY|fulltext key)/ { gsub( /.+(KEY|key)/, "  KEY" ) }

# Get rid of field lengths in KEY lines
/ (PRIMARY |primary )?(KEY|key)/ { gsub( /\([0-9]+\)/, "" ) }

aInc == 1 && /PRIMARY KEY|primary key/ { next }

# Replace COLLATE xxx_xxxx_xx statements with COLLATE BINARY
/ (COLLATE|collate) [a-z0-9_]*/ { gsub( /(COLLATE|collate) [a-z0-9_]*/, "COLLATE BINARY" ) }

# Print all fields definition lines except the `KEY` lines.
/^  / && !/^(  (KEY|key)|\);)/ {
  if( match( $0, /[^"`]AUTO_INCREMENT|auto_increment[^"`]/) ){
    aInc = 1
    gsub( /AUTO_INCREMENT|auto_increment/, "PRIMARY KEY AUTOINCREMENT" )
  }
  gsub( /(UNIQUE KEY|unique key) (`.*`|".*") /, "UNIQUE " )
  gsub( /(CHARACTER SET|character set) [^ ]+[ ,]/, "" )
  # FIXME
  #   CREATE TRIGGER [UpdateLastTime]
  #   AFTER UPDATE
  #   ON Package
  #   FOR EACH ROW
  #   BEGIN
  #   UPDATE Package SET LastUpdate = CURRENT_TIMESTAMP WHERE ActionId = old.ActionId;
  #   END
  gsub( /ON UPDATE CURRENT_TIMESTAMP|on update current_timestamp/, "" )
  gsub( /(COLLATE|collate) [^ ]+ /, "" )
  gsub( /(ENUM|enum)[^)]+\)/, "text " )
  gsub( /(SET|set)\([^)]+\)/, "text " )
  gsub( /UNSIGNED|unsigned/, "" )
  gsub( /` [^ ]*(INT|int|BIT|bit)[^ ]*/, "` integer" )
  gsub( /" [^ ]*(INT|int|BIT|bit)[^ ]*/, "\" integer" )
  ere_bit_field = "[bB]'[10]+'"
  if( match($0, ere_bit_field) ){
    sub( ere_bit_field, bit_to_int( substr( $0, RSTART +2, RLENGTH -2 -1 ) ) )
  }
  # field comments are not supported
  gsub( / (COMMENT|comment).+$/, "" )
  # Get commas off end of line
  gsub( /,.?$/, "" )
  if( prev ){
    if( firstInTable ){
      print prev
      firstInTable = 0
    }
    else {
      print "," prev
    }
  }
  else {
    # FIXME check if this is correct in all cases
    if( match( $1,
        /(CONSTRAINT|constraint) \".*\" (FOREIGN KEY|foreign key)/ ) ){
      print ","
    }
  }
  prev = $1
}

/ ENGINE| engine/ {
  if( prev ){
    if( firstInTable ){
      print prev
      firstInTable = 0
    }
    else {
      print "," prev
    }
  }
  prev=""
  print ");"
  next
}
# `KEY` lines are extracted from the `CREATE` block and stored in array for later print
# in a separate `CREATE KEY` command. The index name is prefixed by the table name to
# avoid a sqlite error for duplicate index name.
/^(  (KEY|key)|\);)/ {
  if( prev ){
    if( firstInTable ){
      print prev
      firstInTable = 0
    }
    else {
      print "," prev
    }
  }
  prev = ""
  if( $0 == ");" ){
    print
  }
  else {
    if( match( $0, /`[^`]+/ ) ){
      indexName = substr( $0, RSTART+1, RLENGTH-1 )
    }
    if( match( $0, /\([^()]+/ ) ){
      indexKey = substr( $0, RSTART+1, RLENGTH-1 )
    }
    # idx_ prefix to avoid name clashes (they really happen!)
    key[tableName] = key[tableName] "CREATE INDEX \"idx_" \
       tableName "_" indexName "\" ON \"" tableName "\" (" indexKey ");\n"
  }
}

END {
  if( no_END ){ exit 1}
  # print all KEY creation lines.
  for( table in key ){ printf key[table] }

  print "END TRANSACTION;"

  if( caseIssue ){
    printerr( \
      "INFO Pure sqlite identifiers are case insensitive (even if quoted\n" \
      "     or if ASCII) and doesn't cross-check TABLE and TEMPORARY TABLE\n" \
      "     identifiers. Thus expect errors like \"table T has no column named F\".")
  }
}

I'll tell you this: I downloaded it off the internet and I'm just republishing it; it worked very well for me (and very fast)
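One subtle step in the script above is its escape handling: it first rewrites `\\` to the placeholder `\_`, so that a dump sequence like `\\n` (a literal backslash followed by `n`) is not mistaken for the newline escape `\n`. A minimal Python sketch of the same idea, purely illustrative (the awk script is what actually does the work):

```python
def unescape_mysql(s):
    # Protect literal backslashes first, so that the dump sequence
    # \\n (backslash + n) is not mistaken for the newline escape \n.
    # Caveat (shared with the awk script): input containing a real
    # backslash-underscore pair would be mangled.
    s = s.replace("\\\\", "\\_")
    s = s.replace("\\'", "''")   # SQLite escapes quotes by doubling them
    s = s.replace("\\n", "\n")
    s = s.replace("\\r", "\r")
    s = s.replace('\\"', '"')
    s = s.replace("\\_", "\\")   # restore protected backslashes
    return s
```

Doing the replacements in the naive order would turn `\\n` into a backslash followed by a real line feed, which is exactly the corruption the placeholder trick avoids.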

After converting the database, I opened it in SQLiteStudio (on Windows, my main OS) and created the views:

episode_view
sql:
SELECT episode.*, files.strFileName AS strFileName, path.strPath AS strPath, files.playCount AS playCount, files.lastPlayed AS lastPlayed, files.dateAdded AS dateAdded, tvshow.c00 AS strTitle, tvshow.c08 AS genre, tvshow.c14 AS studio, tvshow.c05 AS premiered, tvshow.c13 AS mpaa, bookmark.timeInSeconds AS resumeTimeInSeconds, bookmark.totalTimeInSeconds AS totalTimeInSeconds, bookmark.playerState AS playerState, rating.rating AS rating, rating.votes AS votes, rating.rating_type AS rating_type, uniqueid.value AS uniqueid_value, uniqueid.type AS uniqueid_type FROM episode JOIN files ON files.idFile=episode.idFile JOIN tvshow ON tvshow.idShow=episode.idShow JOIN seasons ON seasons.idSeason=episode.idSeason JOIN path ON files.idPath=path.idPath LEFT JOIN bookmark ON bookmark.idFile=episode.idFile AND bookmark.type=1 LEFT JOIN rating ON rating.rating_id=episode.c03 LEFT JOIN uniqueid ON uniqueid.uniqueid_id=episode.c20

movie_view
sql:
SELECT movie.*, sets.strSet AS strSet, sets.strOverview AS strSetOverview, files.strFileName AS strFileName, path.strPath AS strPath, files.playCount AS playCount, files.lastPlayed AS lastPlayed, files.dateAdded AS dateAdded, bookmark.timeInSeconds AS resumeTimeInSeconds, bookmark.totalTimeInSeconds AS totalTimeInSeconds, bookmark.playerState AS playerState, rating.rating AS rating, rating.votes AS votes, rating.rating_type AS rating_type, uniqueid.value AS uniqueid_value, uniqueid.type AS uniqueid_type FROM movie LEFT JOIN sets ON sets.idSet = movie.idSet JOIN files ON files.idFile=movie.idFile JOIN path ON path.idPath=files.idPath LEFT JOIN bookmark ON bookmark.idFile=movie.idFile AND bookmark.type=1 LEFT JOIN rating ON rating.rating_id=movie.c05 LEFT JOIN uniqueid ON uniqueid.uniqueid_id=movie.c09

musicvideo_view
sql:
SELECT musicvideo.*, files.strFileName as strFileName, path.strPath as strPath, files.playCount as playCount, files.lastPlayed as lastPlayed, files.dateAdded as dateAdded, bookmark.timeInSeconds AS resumeTimeInSeconds, bookmark.totalTimeInSeconds AS totalTimeInSeconds, bookmark.playerState AS playerState FROM musicvideo JOIN files ON files.idFile=musicvideo.idFile JOIN path ON path.idPath=files.idPath LEFT JOIN bookmark ON bookmark.idFile=musicvideo.idFile AND bookmark.type=1

season_view
sql:
SELECT seasons.*, tvshow_view.strPath AS strPath, tvshow_view.c00 AS showTitle, tvshow_view.c01 AS plot, tvshow_view.c05 AS premiered, tvshow_view.c08 AS genre, tvshow_view.c14 AS studio, tvshow_view.c13 AS mpaa, count(DISTINCT episode.idEpisode) AS episodes, count(files.playCount) AS playCount, min(episode.c05) AS aired FROM seasons JOIN tvshow_view ON tvshow_view.idShow = seasons.idShow JOIN episode ON episode.idShow = seasons.idShow AND episode.c12 = seasons.season JOIN files ON files.idFile = episode.idFile GROUP BY seasons.idSeason

tvshow_view
sql:
SELECT tvshow.*, path.idParentPath AS idParentPath, path.strPath AS strPath, tvshowcounts.dateAdded AS dateAdded, lastPlayed, totalCount, watchedcount, totalSeasons, rating.rating AS rating, rating.votes AS votes, rating.rating_type AS rating_type, uniqueid.value AS uniqueid_value, uniqueid.type AS uniqueid_type FROM tvshow LEFT JOIN tvshowlinkpath ON tvshowlinkpath.idShow=tvshow.idShow LEFT JOIN path ON path.idPath=tvshowlinkpath.idPath INNER JOIN tvshowcounts ON tvshow.idShow = tvshowcounts.idShow LEFT JOIN rating ON rating.rating_id=tvshow.c04 LEFT JOIN uniqueid ON uniqueid.uniqueid_id=tvshow.c12 GROUP BY tvshow.idShow

tvshowcounts
sql:
SELECT tvshow.idShow AS idShow, MAX(files.lastPlayed) AS lastPlayed, NULLIF(COUNT(episode.c12), 0) AS totalCount, COUNT(files.playCount) AS watchedcount, NULLIF(COUNT(DISTINCT(episode.c12)), 0) AS totalSeasons, MAX(files.dateAdded) as dateAdded FROM tvshow LEFT JOIN episode ON episode.idShow=tvshow.idShow LEFT JOIN files ON files.idFile=episode.idFile GROUP BY tvshow.idShow

After doing all this (which is fairly fast), just delete the Textures13.db file and the Thumbnails folder

Modify the advancedsettings.xml file accordingly
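For reference, switching back to local SQLite usually just means removing the `<videodatabase>` block that pointed Kodi at the MySQL server. A hedged example of the kind of block to delete from advancedsettings.xml (the host/user/pass values here are placeholders, not the OP's actual settings):

```xml
<advancedsettings>
  <!-- Delete this block so Kodi falls back to its local SQLite files -->
  <videodatabase>
    <type>mysql</type>
    <host>192.168.1.100</host>
    <port>3306</port>
    <user>kodi</user>
    <pass>kodi</pass>
  </videodatabase>
</advancedsettings>
```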

...and DONE 
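Before pointing Kodi at the converted file, a quick sanity check is cheap. A minimal sketch, assuming the converted database was saved under the filename Kodi expects (e.g. MyVideos107.db for Kodi 17; the exact name depends on your Kodi version):

```python
import sqlite3

def sanity_check(db_path):
    """Run SQLite's integrity check and count rows in the movie table."""
    con = sqlite3.connect(db_path)
    try:
        (status,) = con.execute("PRAGMA integrity_check").fetchone()
        tables = {r[0] for r in con.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")}
        movies = 0
        if "movie" in tables:
            (movies,) = con.execute("SELECT COUNT(*) FROM movie").fetchone()
        return status, movies
    finally:
        con.close()
```

A result of `("ok", <your movie count>)` suggests the conversion produced a structurally sound database; anything else means it is worth re-running the conversion before deleting the old MySQL data.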

NOTE TO MOD: Please excuse me if this post is too long. It was a great solution for me, and it's a lot faster (and less cumbersome) than using NFO exporting. If you feel you need to modify my post in any way, you're free to do so :)
Reply
