Scrapy not pulling closing prices from Yahoo! Finance - PullRequest
0 votes
/ March 23, 2020

I am trying to get the closing prices and percent changes for three tickers from Yahoo! Finance using Scrapy. However, I am getting no data back, even though I have confirmed in the Chrome console that my XPath works and takes me to the right place on the live page. Can anyone tell me what might be going on here?

items.py:

from scrapy.item import Item, Field

class InvestmentItem(Item):
    ticker = Field()
    closing_px = Field()
    closing_pct = Field()

investment_spider.py:

from scrapy import Spider
from scrapy.selector import Selector
from investment.items import InvestmentItem

class InvestmentSpider(Spider):
    name = "investment"
    allowed_domains = ["finance.yahoo.com"]
    start_urls = ["https://finance.yahoo.com/quote/SPY?p=SPY", "https://finance.yahoo.com/quote/DIA?p=DIA", "https://finance.yahoo.com/quote/QQQ?p=QQQ"]

    def parse(self, response):
        results = Selector(response).xpath('//div[@class="D(ib) Mend(20px)"]')

        for result in results:
            item = InvestmentItem()
            item['closing_px'] = result.xpath('//span[@class="Trsdu(0.3s) Fw(b) Fz(36px) Mb(-4px) D(ib)"]/text()').extract()[0]
            item['closing_pct'] = result.xpath('//span[@class="Trsdu(0.3s) Fw(500) Pstart(10px) Fz(24px) C($dataRed)"]/text()').extract()[0]
            yield item

console output:

2020-03-22 23:42:26 [scrapy.utils.log] INFO: Scrapy 2.0.0 started (bot: investment)
2020-03-22 23:42:26 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 3.8.2 (v3.8.2:7b3ab5921f, Feb 24 2020, 17:52:18) - [Clang 6.0 (clang-600.0.57)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1d  10 Sep 2019), cryptography 2.8, Platform macOS-10.15.3-x86_64-i386-64bit
2020-03-22 23:42:26 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2020-03-22 23:42:26 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'investment',
 'NEWSPIDER_MODULE': 'investment.spiders',
 'ROBOTSTXT_OBEY': True,
 'SPIDER_MODULES': ['investment.spiders']}
2020-03-22 23:42:26 [scrapy.extensions.telnet] INFO: Telnet Password: 4d82e058cd5967c1
2020-03-22 23:42:26 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2020-03-22 23:42:26 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-03-22 23:42:26 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-03-22 23:42:26 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-03-22 23:42:26 [scrapy.core.engine] INFO: Spider opened
2020-03-22 23:42:26 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-03-22 23:42:26 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-03-22 23:42:26 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://finance.yahoo.com/robots.txt> (referer: None)
2020-03-22 23:42:27 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://finance.yahoo.com/quote/SPY?p=SPY> (referer: None)
2020-03-22 23:42:27 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://finance.yahoo.com/quote/QQQ?p=QQQ> (referer: None)
2020-03-22 23:42:28 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://finance.yahoo.com/quote/DIA?p=DIA> (referer: None)
2020-03-22 23:42:29 [scrapy.core.engine] INFO: Closing spider (finished)
2020-03-22 23:42:29 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 923,
 'downloader/request_count': 4,
 'downloader/request_method_count/GET': 4,
 'downloader/response_bytes': 495443,
 'downloader/response_count': 4,
 'downloader/response_status_count/200': 4,
 'elapsed_time_seconds': 2.296482,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 3, 23, 3, 42, 29, 66553),
 'log_count/DEBUG': 4,
 'log_count/INFO': 10,
 'memusage/max': 48963584,
 'memusage/startup': 48963584,
 'response_received_count': 4,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/200': 1,
 'scheduler/dequeued': 3,
 'scheduler/dequeued/memory': 3,
 'scheduler/enqueued': 3,
 'scheduler/enqueued/memory': 3,
 'start_time': datetime.datetime(2020, 3, 23, 3, 42, 26, 770071)}
2020-03-22 23:42:29 [scrapy.core.engine] INFO: Spider closed (finished)

Thanks in advance!

Answers [ 2 ]

0 votes
/ March 23, 2020

The pages you need are dynamically rendered with React. The information you want is inside a script tag, in the `root.App.main` variable.

To extract it I used these docs. Another option is to render the page with Splash or Selenium.

Working example:

from scrapy import Spider
from investment.items import InvestmentItem

import json

class InvestmentSpider(Spider):
    name = "investment"
    allowed_domains = ["finance.yahoo.com"]
    start_urls = ["https://finance.yahoo.com/quote/SPY?p=SPY", "https://finance.yahoo.com/quote/DIA?p=DIA", "https://finance.yahoo.com/quote/QQQ?p=QQQ"]

    def parse(self, response):
        # The quote data is embedded in a <script> tag as: root.App.main = {...};
        # Capture the JSON object and parse it instead of scraping the DOM.
        pattern = r'\broot\.App\.main\s*=\s*(\{.*?\})\s*;\s*\n'
        json_data = response.css('script::text').re_first(pattern)
        price = json.loads(json_data)['context']['dispatcher']['stores']['QuoteSummaryStore']['price']

        item = InvestmentItem()
        item['closing_px'] = price['regularMarketPrice']['fmt']
        item['closing_pct'] = price['regularMarketChange']['fmt']

        yield item
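The regex-and-JSON step above can be exercised offline, without hitting Yahoo. The sketch below runs the same pattern against a toy script fragment; the key path (`context → dispatcher → stores → QuoteSummaryStore → price`) matches the spider above, but the numbers are invented:

```python
import json
import re

# Toy fragment mimicking the structure of Yahoo's embedded page state
# (the values are made up; only the shape matters here):
script_text = ('root.App.main = {"context": {"dispatcher": {"stores": '
               '{"QuoteSummaryStore": {"price": '
               '{"regularMarketPrice": {"fmt": "228.80"}, '
               '"regularMarketChange": {"fmt": "-7.45"}}}}}}};\n')

# Same pattern as in the spider: capture the balanced JSON object
# between "root.App.main =" and the trailing semicolon.
pattern = r'\broot\.App\.main\s*=\s*(\{.*?\})\s*;\s*\n'
json_data = re.search(pattern, script_text).group(1)

price = json.loads(json_data)['context']['dispatcher']['stores']['QuoteSummaryStore']['price']
print(price['regularMarketPrice']['fmt'])   # 228.80
print(price['regularMarketChange']['fmt'])  # -7.45
```

The lazy `\{.*?\}` works because the whole assignment sits on a single line in the page source; the match is only accepted once the closing brace is followed by `;`.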
0 votes
/ March 23, 2020

Hope this helps you:

import scrapy
from scrapy import Spider
from investment.items import InvestmentItem

class InvestmentSpider(Spider):
    name = "investment"

    def start_requests(self):
        urls = ["https://finance.yahoo.com/quote/SPY?p=SPY", "https://finance.yahoo.com/quote/DIA?p=DIA", "https://finance.yahoo.com/quote/QQQ?p=QQQ"]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        # Both values sit in sibling <span> tags under the quote header <div>
        data = response.xpath('//div[@class="D(ib) Mend(20px)"]/span/text()').extract()

        item = InvestmentItem()
        item['closing_px'] = data[0]   # 1st span: last price
        item['closing_pct'] = data[1]  # 2nd span: change / percent change

        yield item

