Movieplayer.it (Italian scraper) - Printable Version
Kodi Community Forum (https://forum.kodi.tv)
Thread: Movieplayer.it (Italian scraper) (/showthread.php?tid=58536)
Movieplayer.it (Italian scraper) - sipontino - 2009-09-25

Hi all, I'm writing this Italian scraper; it's 70% finished, but now I'm having some problems with custom functions. Can anyone help? We have only 2 simple custom functions, for cast and writer. Is there any way to test custom functions in Nicezia's XML scraper editor? Please explain clearly what I did wrong in the custom functions so I can finish the scraper as soon as possible and create a ticket to get it into SVN!

PS: The regular expressions match the text perfectly (tested with Nicezia's scraper editor).

CODE on pastebin

Bye all, and thanks in advance


- sipontino - 2009-09-25

Please help!


- mkortstiege - 2009-09-25

If you really need help, pastebin or trac what you've got so far. Do NOT use rapidshare or any other online hoster for plain text files ..


- sipontino - 2009-09-26

vdrfan Wrote: If you really need help, pastebin or trac what you've got so far. Do NOT use rapidshare or any other online hoster for plain text files ..

Hey, sorry, my bad! pastebin

Thanks


- mkortstiege - 2009-09-26

Only did a quick test, but those two functions are working for me. On the other hand, I noticed some nasty HTML output in the debug log. What exactly fails for you when using the scraper in XBMC?


- sipontino - 2009-09-29

OK, now it's working; is there any way to cache an external page without calling a custom function? And for fanart, is the syntax <fanart>url.jpg</fanart> OK? Thanks in advance


- spiff - 2009-09-29

Wherever you return a URL you can cache it. Not sure I understand your question, though.

No, that is not the syntax;

Code:
<fanart>


- sipontino - 2009-09-29

spiff Wrote: Wherever you return a URL you can cache it. Not sure I understand your question, though.

OK for the fanart syntax. By caching I mean: how do I cache the source code of a page in the GetDetails section without calling a custom function? Like putting the source code of an external page (not one returned by GetSearchResults) into the input of a regex in the GetDetails section.


- spiff - 2009-09-29

Aha. You can nest several URLs in GetSearchResults: <url>url1</url><url>url2</url> will fetch the two URLs and stick them in $$1 and $$2 on the call to GetDetails.
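
To make that last tip concrete, here is a minimal sketch of a GetSearchResults entity with two nested URLs; the Movieplayer.it paths are placeholders, not the scraper's real ones. XBMC fetches both pages, and in GetDetails the first arrives in $$1 and the second in $$2:

Code:
<entity>
    <title>Example Movie</title>
    <!-- main movie page, becomes $$1 in GetDetails -->
    <url>http://www.movieplayer.it/film/example/</url>
    <!-- extra page (e.g. the cast listing), becomes $$2 in GetDetails -->
    <url>http://www.movieplayer.it/film/example/cast/</url>
</entity>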
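
Spiff's code block above is cut off after <fanart>. For reference, the fanart block used by other XBMC scrapers of that era looks roughly like the sketch below: the url attribute is a base URL prepended to relative thumb paths, and preview is an optional smaller image. The paths here are invented for illustration:

Code:
<fanart url="http://www.movieplayer.it">
    <thumb preview="/images/fanart1_thumb.jpg">/images/fanart1.jpg</thumb>
    <thumb>/images/fanart2.jpg</thumb>
</fanart>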
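
On the custom-function question from the first post: in the XBMC scraper format, GetDetails hands a page to a custom function through a <url function="..."> element in its output; the fetched page then arrives in the function's $$1 buffer, and the function returns a <details> block. A rough sketch, with a made-up function name and regex (the real Movieplayer.it expressions are on the pastebin):

Code:
<!-- emitted from the GetDetails output: fetch the cast page and run GetMPCast on it -->
<url function="GetMPCast">http://www.movieplayer.it/film/example/cast/</url>

<!-- the custom function itself; $$1 holds the fetched cast page -->
<GetMPCast dest="5">
    <RegExp input="$$2" output="&lt;details&gt;\1&lt;/details&gt;" dest="5">
        <!-- inner RegExp runs first, collecting every actor match into buffer 2 -->
        <RegExp input="$$1" output="&lt;actor&gt;&lt;name&gt;\1&lt;/name&gt;&lt;/actor&gt;" dest="2">
            <expression repeat="yes">&lt;a class="actor"[^&gt;]*&gt;([^&lt;]*)&lt;/a&gt;</expression>
        </RegExp>
        <!-- outer RegExp wraps the accumulated buffer in <details> -->
        <expression noclean="1">(.*)</expression>
    </RegExp>
</GetMPCast>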