Spider scraped items from a web page, but there is nothing in the output
1 vote
/ 02 April 2020

I am trying to scrape the product title and its price from this web page. I wrote the following spider:

import scrapy
from ..items import MenDataItem
class MenCollectionSpider(scrapy.Spider):
    name = 'men_collection'
    allowed_domains = ['www.exportleftovers.com']
    start_urls = ['https://www.exportleftovers.com/collections/men']

    def parse(self, response):

        items = MenDataItem()

        for product in response.xpath("//div[@class = 'product-list collection-matrix clearfix equal-columns--clear equal-columns--outside-trim']/div/div/a/div"):

            title =  product.xpath(".//a[@class='product-info__caption ']/div[@class='product-details']/span[@class = 'title']/text()").get()
            price = product.xpath(".//a[@class='product-info__caption ']/div[@class='product-details']/span[@class = 'price ']/span[@class='current_price']/span[@class='money']/text()").get()

            items['title'] = title
            items['price'] = price

            yield items

The following is items.py:

import scrapy


class MenDataItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    price = scrapy.Field()

and it produces the following output:

2020-04-01 19:25:07 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-04-01 19:25:08 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.exportleftovers.com/robots.txt> (referer: None)
2020-04-01 19:25:09 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.exportleftovers.com/collections/men> (referer: None)
2020-04-01 19:25:09 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.exportleftovers.com/collections/men>
{'title': None, 'price': None}
... (the same "Scraped from" DEBUG line followed by {'title': None, 'price': None} repeats for all 24 items) ...
2020-04-01 19:25:09 [scrapy.core.engine] INFO: Closing spider (finished)
2020-04-01 19:25:09 [scrapy.extensions.feedexport] INFO: Stored csv feed (24 items) in: data.csv
2020-04-01 19:25:09 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 531,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 184287,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 4, 2, 2, 25, 9, 457430),
 'item_scraped_count': 24,
 'log_count/DEBUG': 26,
 'log_count/INFO': 10,
 'response_received_count': 2,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/200': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2020, 4, 2, 2, 25, 7, 590785)}
2020-04-01 19:25:09 [scrapy.core.engine] INFO: Spider closed (finished)

As you can see, it yields None for both fields every time. The XPath expressions do work for locating the 24 items on this page, and since I stored the output in data.csv, the log shows 24 items saved to the CSV file, yet the output file contains nothing except the header names. Can anyone help?
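Edit: a quick way to see where the relative XPath stops matching is scrapy shell (a debugging sketch; the selector strings below are copied from the spider above):

scrapy shell "https://www.exportleftovers.com/collections/men"
>>> # the outer loop selector from the spider
>>> rows = response.xpath("//div[@class = 'product-list collection-matrix clearfix equal-columns--clear equal-columns--outside-trim']/div/div/a/div")
>>> len(rows)  # how many loop nodes actually match
>>> # does the inner relative selector match anything inside a loop node?
>>> rows[0].xpath(".//a[@class='product-info__caption ']").get()
>>> # or is the <a> actually an ancestor of the loop node rather than a descendant?
>>> rows[0].xpath("ancestor::a/@class").get()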

1 Answer

0 votes
/ 02 April 2020

I believe a different approach, along the lines of what you already have, would do it:

import scrapy
from ..items import MenDataItem

class MenCollectionSpider(scrapy.Spider):
    name = 'men_collection'
    allowed_domains = ['www.exportleftovers.com']
    start_urls = ['https://www.exportleftovers.com/collections/men']

    def parse(self, response):
        # Select every title and price on the page in one pass, then pair
        # them up by position.
        titles = response.xpath("//a[@class='product-info__caption ']/div[@class='product-details']/span[@class = 'title']/text()").getall()
        prices = response.xpath("//a[@class='product-info__caption ']/div[@class='product-details']/span[@class = 'price ']/span[@class='current_price']/span[@class='money']/text()").getall()

        for title, price in zip(titles, prices):
            # Create a fresh item on every iteration; reusing a single
            # instance hands the same mutated object to every pipeline.
            items = MenDataItem()
            items['title'] = title
            items['price'] = price
            yield items

I wonder whether this solves the problem. Let me know if it does! :D
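For completeness, here is a per-product variant of the same idea: it iterates over the product anchors directly, so the relative paths resolve from inside each anchor, and it builds a fresh item per iteration. Only parse changes; the rest of the spider stays as above. The anchor class and child paths are copied from the question's selectors and are assumptions about the live markup, not verified against the page:

    # inside MenCollectionSpider; a sketch under the markup assumptions above
    def parse(self, response):
        # Loop over each product anchor directly; the relative paths below
        # then start from inside that anchor.
        for product in response.xpath("//a[@class='product-info__caption ']"):
            item = MenDataItem()  # fresh item per product
            item['title'] = product.xpath("./div[@class='product-details']/span[@class = 'title']/text()").get()
            item['price'] = product.xpath("./div[@class='product-details']/span[@class = 'price ']/span[@class='current_price']/span[@class='money']/text()").get()
            yield item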

...