Here is my attempt to scrape the list of URLs on the first page of the AWS blogs site, but it returns nothing. I think something is wrong with my XPath, but I don't know how to fix it.
import scrapy

class AwsblogSpider(scrapy.Spider):
    name = 'awsblog'
    allowed_domains = ['aws.amazon.com/blogs']
    start_urls = ['http://aws.amazon.com/blogs/']

    def parse(self, response):
        blogs = response.xpath('//li[@class="m-card"]')
        for blog in blogs:
            url = blog.xpath('.//div[@class="m-card-title"]/a/@href').extract()
            print(url)
Attempt 2
import scrapy

class AwsblogSpider(scrapy.Spider):
    name = 'awsblog'
    allowed_domains = ['aws.amazon.com/blogs']
    start_urls = ['http://aws.amazon.com/blogs/']

    def parse(self, response):
        blogs = response.xpath('//div[@class="aws-directories-container"]')
        for blog in blogs:
            url = blog.xpath('//li[@class="m-card"]/div[@class="m-card-title"]/a/@href').extract_first()
            print(url)
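One pitfall worth isolating: in attempt 2, the inner expression `'//li[@class="m-card"]/...'` starts with `//`, which searches the entire document rather than the `div` matched in the loop; prefixing it with `.` makes it relative to that node. The sketch below demonstrates the difference on a hypothetical, reduced version of the page's markup (the class names are taken from the spider above, but the real AWS blogs landing page may render its cards with JavaScript, in which case the HTML Scrapy receives would not contain them at all). It uses lxml directly so it runs without a crawl:

```python
from lxml import html

# Hypothetical markup mirroring the structure the spider expects.
PAGE = """
<div class="aws-directories-container"><ul>
  <li class="m-card"><div class="m-card-title"><a href="/blogs/aws/">AWS News</a></div></li>
  <li class="m-card"><div class="m-card-title"><a href="/blogs/security/">Security</a></div></li>
</ul></div>
"""

doc = html.fromstring(PAGE)
cards = doc.xpath('//li[@class="m-card"]')

# A leading // always searches the WHOLE document, even when called on a
# sub-node; prefixing the expression with . makes it relative to that node.
absolute = [c.xpath('//div[@class="m-card-title"]/a/@href') for c in cards]
relative = [c.xpath('.//div[@class="m-card-title"]/a/@href') for c in cards]

print(absolute)  # every card yields ALL hrefs in the document
print(relative)  # each card yields only its own href
```

The same distinction applies inside a Scrapy `parse` method: `blog.xpath('.//...')` stays scoped to `blog`, while `blog.xpath('//...')` does not. Separately, `allowed_domains` is meant to hold bare domains (`aws.amazon.com`), not paths.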
Log output:
2019-11-06 10:38:30 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-11-06 10:38:30 [scrapy.core.engine] INFO: Spider opened
2019-11-06 10:38:30 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-11-06 10:38:30 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-11-06 10:38:31 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://aws.amazon.com/robots.txt> from <GET http://aws.amazon.com/robots.txt>
2019-11-06 10:38:31 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://aws.amazon.com/robots.txt> (referer: None)
2019-11-06 10:38:31 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://aws.amazon.com/blogs/> from <GET http://aws.amazon.com/blogs/>
2019-11-06 10:38:32 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://aws.amazon.com/blogs/> (referer: None)
2019-11-06 10:38:32 [scrapy.core.engine] INFO: Closing spider (finished)
Any help would be greatly appreciated!