Unable to scrape with Scrapy
0 votes / April 18, 2020

I am trying to scrape us-proxy.org. My environment is Python 3.6.10 with Scrapy 2.0.1, and I am running Splash in a Docker container.
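Splash runs via the official Docker image on its default port, e.g.:

docker run -it -p 8050:8050 scrapinghub/splash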

The expected output is a JSON file containing all the IP addresses and ports from the linked page.

The program generates an empty file and throws the following errors:

2020-04-19 01:35:52 [scrapy.core.engine] INFO: Spider opened
2020-04-19 01:35:52 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-04-19 01:35:52 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6024
2020-04-19 01:35:53 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://us-proxy.org/robots.txt> (referer: None)
2020-04-19 01:35:53 [scrapy.core.scraper] ERROR: Error downloading <GET https://us-proxy.org via https://us-proxy.org>
Traceback (most recent call last):
  File "c:\users\fc\anaconda3\envs\scrapy\lib\site-packages\twisted\internet\defer.py", line 1418, in _inlineCallbacks
    result = g.send(result)
  File "c:\users\fc\anaconda3\envs\scrapy\lib\site-packages\scrapy\core\downloader\middleware.py", line 42, in process_request
    defer.returnValue((yield download_func(request=request, spider=spider)))
  File "c:\users\fc\anaconda3\envs\scrapy\lib\site-packages\twisted\internet\defer.py", line 1362, in returnValue
    raise _DefGen_Return(val)
twisted.internet.defer._DefGen_Return: <200 https://us-proxy.org/robots.txt>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\users\fc\anaconda3\envs\scrapy\lib\site-packages\twisted\internet\defer.py", line 1418, in _inlineCallbacks
    result = g.send(result)
  File "c:\users\fc\anaconda3\envs\scrapy\lib\site-packages\scrapy\core\downloader\middleware.py", line 36, in process_request
    response = yield deferred_from_coro(method(request=request, spider=spider))
  File "c:\users\fc\anaconda3\envs\scrapy\lib\site-packages\scrapy_splash\middleware.py", line 358, in process_request
    priority=request.priority + self.rescheduling_priority_adjust
  File "c:\users\fc\anaconda3\envs\scrapy\lib\site-packages\scrapy\http\request\__init__.py", line 104, in replace
    return cls(*args, **kwargs)
  File "c:\users\fc\anaconda3\envs\scrapy\lib\site-packages\scrapy_splash\request.py", line 76, in __init__
    **kwargs)
  File "c:\users\fc\anaconda3\envs\scrapy\lib\site-packages\scrapy\http\request\__init__.py", line 25, in __init__
    self._set_url(url)
  File "c:\users\fc\anaconda3\envs\scrapy\lib\site-packages\scrapy\http\request\__init__.py", line 68, in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: render.html
2020-04-19 01:35:53 [scrapy.core.engine] INFO: Closing spider (finished)
2020-04-19 01:35:53 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 1,
 'downloader/exception_type_count/builtins.ValueError': 1, 
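From the traceback, the endpoint name render.html appears to be used as the request URL itself, which (as far as I understand) points at the scrapy-splash configuration. For reference, the scrapy-splash README suggests settings along these lines (a sketch, assuming Splash listens on http://localhost:8050):

# settings.py (sketch following the scrapy-splash README)
SPLASH_URL = 'http://localhost:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'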

This is what I have tried:

# -*- coding: utf-8 -*-
import scrapy
from scrapy_splash import SplashRequest

class UsproxySpider(scrapy.Spider):
    name = 'usproxy'

    def start_requests(self):
        url = 'https://us-proxy.org'
        yield SplashRequest(url=url, callback=self.parse, endpoint='render.html', args={'wait': 0.5})

    def parse(self, response):
        # Iterate over the rows of the proxy table; the IP is the first
        # cell of each row and the port is the second.
        for tr in response.xpath("//table[@id='proxylisttable']/tbody/tr"):
            yield {
                'ip': tr.xpath(".//td[1]/text()").extract_first(),
                'port': tr.xpath(".//td[2]/text()").extract_first()
            }
...
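The empty output file mentioned above comes from running the spider with the JSON feed export, e.g.:

scrapy crawl usproxy -o output.json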