Scrapy Spider .xml.gz Sitemaps - PullRequest


0 votes
/ 20 April 2020

I'm trying to open and crawl all of the sitemap .xml files in this sitemap index, which are nested inside .xml.gz files. I get an error when trying to run the code below... Does anyone know whether Scrapy can open .xml.gz files?

from scrapy.spiders import SitemapSpider

class SiteSpider(SitemapSpider):

    name = 'SiteSpider'

    sitemap_urls = ['https://cdn.shutterstock.com/sitemaps/image/sitemap/stock-image-sitemap-index.xml.gz']
    sitemap_rules = [('/image-photo/', 'parse_article')]

    def parse_article(self, response):
        print('parse_article url:', response.url)

        yield {'url': response.url}

# --- it runs without project and saves in `output.csv` ---

from scrapy.crawler import CrawlerProcess

c = CrawlerProcess({
        'USER_AGENT': 'Mozilla/5.0',

        # save in file as CSV, JSON or XML
        'FEED_FORMAT': 'csv',     # csv, json, xml
        'FEED_URI': 'output.csv',
})
c.crawl(SiteSpider)
c.start()
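For context: SitemapSpider can handle `.xml.gz` sitemaps out of the box (it gunzips the body before parsing), so the error below usually means the bytes that reached the XML parser were not valid XML — e.g. a mislabelled `Content-Type`, double compression, or an HTML error page. For debugging, you can decompress and parse a sitemap yourself; the helper below is a sketch of that step (the function name is mine, not part of Scrapy):

```python
import gzip
from xml.etree import ElementTree as ET

def iter_sitemap_locs(data: bytes):
    """Yield <loc> URLs from raw sitemap bytes, gunzipping first if needed."""
    if data[:2] == b"\x1f\x8b":  # gzip magic number
        data = gzip.decompress(data)
    ns = "{http://www.sitemaps.org/schemas/sitemap/0.9}"
    for loc in ET.fromstring(data).iter(ns + "loc"):
        yield loc.text

# demo with an in-memory gzipped sitemap
xml = (b'<?xml version="1.0" encoding="UTF-8"?>'
       b'<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
       b'<url><loc>https://example.com/image-photo/a</loc></url>'
       b'</urlset>')
print(list(iter_sitemap_locs(gzip.compress(xml))))
```

Fetching the real index URL and feeding the response body to a helper like this (e.g. from a plain `scrapy.Spider` callback) shows quickly whether the payload is actually gzipped XML or something else.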

Error

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)   
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 84, in evaluate_iterable
    for r in iterable:   
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output
    for x in result:   
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 84, in evaluate_iterable
    for r in iterable:   
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/scrapy/spidermiddlewares/referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())   
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 84, in evaluate_iterable
    for r in iterable:   
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))   
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 84, in evaluate_iterable
    for r in iterable:   
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))   
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/scrapy/spiders/sitemap.py", line 53, in _parse_sitemap
    s = Sitemap(body)   
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/scrapy/utils/sitemap.py", line 19, in __init__
    rt = self._root.tag
AttributeError: 'NoneType' object has no attribute 'tag'
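This `AttributeError` comes from Scrapy's `Sitemap` helper: it parses the body with an lxml parser in recovery mode, and when the bytes are not XML at all (for example, still-compressed gzip data), recovery produces no root element, so `self._root` is `None`. A minimal reproduction of that failure mode (assuming the parser options mirror Scrapy's, which uses `recover=True`):

```python
import lxml.etree

# Parse clearly-non-XML bytes (a fake gzip header) the way Scrapy's
# Sitemap class does: with a recovering parser instead of a strict one.
parser = lxml.etree.XMLParser(recover=True)
root = lxml.etree.fromstring(b"\x1f\x8b\x08 definitely not xml", parser=parser)

# With recover=True there is no exception; the parser simply yields no
# root element, and any attribute access on it then fails.
print(root)
```

So the question to answer is not "can Scrapy open .xml.gz?" but "why did non-XML bytes reach the parser?" — inspecting the response headers of the sitemap request is the next step.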

2020-04-21 07:18:04 [scrapy.utils.log] INFO: Scrapy 1.8.0 started (bot: sstksitemap) 
2020-04-21 07:18:04 [scrapy.utils.log] INFO: Versions: lxml 4.4.2.0, libxml2 2.9.4, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 3.8.0 (v3.8.0:fa919fdf25, Oct 14 2019, 10:23:27) - [Clang 6.0 (clang-600.0.57)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1d  10 Sep 2019), cryptography 2.8, Platform macOS-10.15.3-x86_64-i386-64bit 
2020-04-21 07:18:04 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'sstksitemap', 'FEED_FORMAT': 'csv', 'FEED_URI': 'output.csv', 'NEWSPIDER_MODULE': 'sstksitemap.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['sstksitemap.spiders']} 
2020-04-21 07:18:04 [scrapy.extensions.telnet] INFO: Telnet Password: a2be3aaa1b68b3f2 
2020-04-21 07:18:04 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats',  'scrapy.extensions.telnet.TelnetConsole',  'scrapy.extensions.memusage.MemoryUsage',  'scrapy.extensions.feedexport.FeedExporter',  'scrapy.extensions.logstats.LogStats'] 
2020-04-21 07:18:04 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',  'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',  'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',  'scrapy.downloadermiddlewares.retry.RetryMiddleware',  'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',  'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',  'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',  'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',  'scrapy.downloadermiddlewares.stats.DownloaderStats'] 
2020-04-21 07:18:04 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',  'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',  'scrapy.spidermiddlewares.referer.RefererMiddleware',  'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',  'scrapy.spidermiddlewares.depth.DepthMiddleware'] 
2020-04-21 07:18:04 [scrapy.middleware] INFO: Enabled item pipelines: [] 
2020-04-21 07:18:04 [scrapy.core.engine] INFO: Spider opened 
2020-04-21 07:18:04 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2020-04-21 07:18:04 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023 
2020-04-21 07:18:04 [scrapy.core.engine] INFO: Closing spider (finished)    
2020-04-21 07:18:04 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 251,  'downloader/request_count': 1,  'downloader/request_method_count/GET': 1,  'downloader/response_bytes': 263062,  'downloader/response_count': 1,  'downloader/response_status_count/200': 1,  'elapsed_time_seconds': 0.595945,  'finish_reason': 'finished',  'finish_time': datetime.datetime(2020, 4, 21, 12, 18, 4, 284475),  'log_count/DEBUG': 1,  'log_count/ERROR': 1,  'log_count/INFO': 10,  'memusage/max': 49127424,  'memusage/startup': 49123328,  'response_received_count': 1,  'scheduler/dequeued': 1,  'scheduler/dequeued/memory': 1,  'scheduler/enqueued': 1,  'scheduler/enqueued/memory': 1,  'spider_exceptions/AttributeError': 1,  'start_time': datetime.datetime(2020, 4, 21, 12, 18, 3, 688530)} 
2020-04-21 07:18:04 [scrapy.core.engine] INFO: Spider closed (finished) 

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.8/bin/scrapy", line 8, in <module>
    sys.exit(execute())   
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/scrapy/cmdline.py", line 146, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)   
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/scrapy/cmdline.py", line 100, in _run_print_help
    func(*a, **kw)   
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/scrapy/cmdline.py", line 154, in _run_command
    cmd.run(args, opts)   
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/scrapy/commands/crawl.py", line 58, in run
    self.crawler_process.start()   
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/scrapy/crawler.py", line 309, in start
    reactor.run(installSignalHandlers=False)  # blocking call   
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/twisted/internet/base.py", line 1282, in run
    self.startRunning(installSignalHandlers=installSignalHandlers)      
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/twisted/internet/base.py", line 1262, in startRunning
    ReactorBase.startRunning(self)   
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/twisted/internet/base.py", line 765, in startRunning
    raise error.ReactorNotRestartable()
twisted.internet.error.ReactorNotRestartable
...