I was working with Python and Scrapy last week, following this tutorial: https://realpython.com/web-scraping-with-scrapy-and-mongodb/
The tutorial walks through scraping the top question titles and their URLs from Stack Overflow with Scrapy; the crawler then stores them in a MongoDB database and collection.
I am trying to adapt what the tutorial does so that I can scrape and store multiple items in multiple collections of a single MongoDB database, and then export them to CSV format. I have figured out most of it: I got the MongoDB pipeline working, I can store multiple collections, and each collection is named after the item being scraped. What I cannot get to work are the spiders, or more precisely the XPaths that Scrapy uses to locate specific elements on the page; as far as I can tell, the problem is that my XPaths are wrong.
I have no prior experience with Scrapy, and I have spent several days trying to work out the XPaths, but I cannot get them to work.
The page I am trying to scrape: https://stackoverflow.com/
The spider for question titles and URLs, which works as intended:
from scrapy import Spider
from scrapy.selector import Selector

from stack.items import QuestionItem


class QuestionSpider(Spider):
    name = "questions"
    allowed_domains = ["stackoverflow.com"]
    start_urls = [
        "http://stackoverflow.com/questions?pagesize=50&sort=newest",
    ]

    def parse(self, response):
        questions = Selector(response).xpath('//div[@class="summary"]/h3')
        for question in questions:
            item = QuestionItem()
            item['title'] = question.xpath(
                'a[@class="question-hyperlink"]/text()').extract()[0]
            item['url'] = question.xpath(
                'a[@class="question-hyperlink"]/@href').extract()[0]
            yield item
The spider for vote, answer, and view counts, which does NOT work as intended:
from scrapy import Spider
from scrapy.selector import Selector

from stack.items import PopularityItem


class PopularitySpider(Spider):
    name = "popularity"
    allowed_domains = ["stackoverflow.com"]
    start_urls = [
        "http://stackoverflow.com/questions?pagesize=50&sort=newest",
    ]

    def parse(self, response):
        popularity = Selector(response).xpath('//div[@class="summary"]/h3')
        for poppart in popularity:
            item = PopularityItem()
            item['votes'] = poppart.xpath(
                'div[contains(@class, "votes")]/text()').extract()
            item['answers'] = poppart.xpath(
                'div[contains(@class, "answers")]/text()').extract()
            item['views'] = poppart.xpath(
                'div[contains(@class, "views")]/text()').extract()
            yield item
And finally there is a third spider, which has problems similar to the second one.
With the second spider, I get the following output and data saved to my MongoDB database after starting the spider with:
scrapy crawl popularity
{ "_id" : ObjectId("5bbde11cb395bb1dc0d9410d"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d9410e"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d9410f"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94110"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94111"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94112"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94113"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94114"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94115"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94116"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94117"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94118"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94119"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d9411a"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d9411b"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d9411c"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d9411d"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d9411e"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d9411f"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94120"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
As you can see, all the items are empty. The only way I could get any output at all was with the xpath:
//div[contains(@class, "views")]/text()
As I understand it, "//" means all elements in the document where the div has class = "views".
Using this method only partially works: I only get output for the views item, all of that output is stored in a single item row, and then on the next iteration of the for loop the same output is stored again in the next item row. This makes sense, because I am using
//div instead of div
This happens (or so I think) because of the loop, which iterates over the number of "summary" classes on the page as a way of telling the scraper how many rows to scrape and store. This is done with the following XPath and code segment (shown above, but repeated here for clarity):
def parse(self, response):
    popularity = Selector(response).xpath('//div[@class="summary"]/h3')
    for poppart in popularity:
The output I get when using
//div
looks like this:
{ "_id" : ObjectId("5bbdf34ab395bb249c3c71c2"), "votes" : [ "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n " ], "answers" : [ ], "views" : [ "\r\n 3 views\r\n", "\r\n 2 views\r\n", "\r\n 4 views\r\n", "\r\n 2 views\r\n", "\r\n 2 views\r\n", "\r\n 2 views\r\n", "\r\n 3 views\r\n", "\r\n 8 views\r\n", "\r\n 3 views\r\n", "\r\n 2 views\r\n", "\r\n 2 views\r\n", "\r\n 4 views\r\n", "\r\n 5 views\r\n", "\r\n 10 views\r\n", "\r\n 5 views\r\n", "\r\n 2 views\r\n", "\r\n 2 views\r\n", "\r\n 3 views\r\n", "\r\n 2 views\r\n", "\r\n 4 views\r\n", "\r\n 14 views\r\n", "\r\n 2 views\r\n", "\r\n 5 views\r\n", "\r\n 3 views\r\n", "\r\n 5 views\r\n", "\r\n 3 views\r\n", "\r\n 6 views\r\n", "\r\n 7 
views\r\n", "\r\n 3 views\r\n", "\r\n 7 views\r\n", "\r\n 5 views\r\n", "\r\n 14 views\r\n", "\r\n 4 views\r\n", "\r\n 12 views\r\n", "\r\n 16 views\r\n", "\r\n 7 views\r\n", "\r\n 7 views\r\n", "\r\n 7 views\r\n", "\r\n 4 views\r\n", "\r\n 4 views\r\n", "\r\n 3 views\r\n", "\r\n 2 views\r\n", "\r\n 4 views\r\n", "\r\n 3 views\r\n", "\r\n 3 views\r\n", "\r\n 8 views\r\n", "\r\n 2 views\r\n", "\r\n 10 views\r\n", "\r\n 6 views\r\n", "\r\n 3 views\r\n" ] }
{ "_id" : ObjectId("5bbdf34ab395bb249c3c71c3"), "votes" : [ "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n ", "\r\n " ], "answers" : [ ], "views" : [ "\r\n 3 views\r\n", "\r\n 2 views\r\n", "\r\n 4 views\r\n", "\r\n 2 views\r\n", "\r\n 2 views\r\n", "\r\n 2 views\r\n", "\r\n 3 views\r\n", "\r\n 8 views\r\n", "\r\n 3 views\r\n", "\r\n 2 views\r\n", "\r\n 2 views\r\n", "\r\n 4 views\r\n", "\r\n 5 views\r\n", "\r\n 10 views\r\n", "\r\n 5 views\r\n", "\r\n 2 views\r\n", "\r\n 2 views\r\n", "\r\n 3 views\r\n", "\r\n 2 views\r\n", "\r\n 4 views\r\n", "\r\n 14 views\r\n", "\r\n 2 views\r\n", "\r\n 5 views\r\n", "\r\n 3 views\r\n", "\r\n 5 views\r\n", "\r\n 3 views\r\n", "\r\n 6 views\r\n", "\r\n 7 
views\r\n", "\r\n 3 views\r\n", "\r\n 7 views\r\n", "\r\n 5 views\r\n", "\r\n 14 views\r\n", "\r\n 4 views\r\n", "\r\n 12 views\r\n", "\r\n 16 views\r\n", "\r\n 7 views\r\n", "\r\n 7 views\r\n", "\r\n 7 views\r\n", "\r\n 4 views\r\n", "\r\n 4 views\r\n", "\r\n 3 views\r\n", "\r\n 2 views\r\n", "\r\n 4 views\r\n", "\r\n 3 views\r\n", "\r\n 3 views\r\n", "\r\n 8 views\r\n", "\r\n 2 views\r\n", "\r\n 10 views\r\n", "\r\n 6 views\r\n", "\r\n 3 views\r\n" ] }
Введите "it", чтобы узнать больше
I 'm показывает только две строки, но делает это для количества строк, указанного в forloop.
Подводя итог, я считаю, что я делаю что-то не так с моими xpaths здесь.любая помощь была бы признательна, так как я потратил много дней, пытаясь исправить это безуспешно.
For completeness, I am including my pipeline, settings, and items.
Settings:
BOT_NAME = 'stack'
SPIDER_MODULES = ['stack.spiders']
NEWSPIDER_MODULE = 'stack.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'stack (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
ITEM_PIPELINES = {'stack.pipelines.MongoDBPipeline': 300}
MONGODB_SERVER = "localhost"
MONGODB_PORT = 27017
MONGODB_DB = "testpop13"
Items:
import scrapy
from scrapy.item import Item, Field


class QuestionItem(Item):
    title = Field()
    url = Field()


class PopularityItem(Item):
    votes = Field()
    answers = Field()
    views = Field()


class ModifiedItem(Item):
    lastModified = Field()
    modName = Field()
Pipeline:
import pymongo
import logging

from scrapy.conf import settings
from scrapy.exceptions import DropItem
from scrapy import log


class StackPipeline(object):
    def process_item(self, item, spider):
        return item


class MongoDBPipeline(object):
    def __init__(self):
        connection = pymongo.MongoClient(settings['MONGODB_SERVER'], settings['MONGODB_PORT'])
        self.db = connection[settings['MONGODB_DB']]

    def process_item(self, item, spider):
        collection = self.db[type(item).__name__.lower()]
        logging.info(collection.insert(dict(item)))
        return item
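This is the part of the pipeline that I do have working: each item class gets its own collection, named from the class name. A tiny sanity-check snippet (plain classes stand in for the Scrapy items, since only the class name matters here):

```python
# Stand-ins for the Scrapy Item classes; only the class name is relevant.
class QuestionItem:
    pass

class PopularityItem:
    pass

for item in (QuestionItem(), PopularityItem()):
    # The same expression the pipeline uses to pick the MongoDB collection.
    print(type(item).__name__.lower())
```

So the questions spider writes to a `questionitem` collection and the popularity spider to a `popularityitem` collection, which is the behavior I see in the database.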
And finally, here is what the correct output from the questions spider looks like:
> db.questionitem.find()
{ "_id" : ObjectId("5bbdfa29b395bb1c74c9721c"), "title" : "Why I can't enforce EditTextPreference to take just numbers?", "url" : "/questions/52741046/why-i-cant-enforce-edittextpreference-to-take-just-numbers" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9721d"), "title" : "mysql curdate method query is not giving correct result", "url" : "/questions/52741045/mysql-curdate-method-query-is-not-giving-correct-result" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9721e"), "title" : "how to execute FME workbench with parameters in java", "url" : "/questions/52741044/how-to-execute-fme-workbench-with-parameters-in-java" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9721f"), "title" : "create a top 10 list for multiple groups with a ranking in python", "url" : "/questions/52741043/create-a-top-10-list-for-multiple-groups-with-a-ranking-in-python" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97220"), "title" : "Blob binding not working in VS2017 Azure function template", "url" : "/questions/52741041/blob-binding-not-working-in-vs2017-azure-function-template" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97221"), "title" : "How to convert float to vector<unsigned char> in C++?", "url" : "/questions/52741039/how-to-convert-float-to-vectorunsigned-char-in-c" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97222"), "title" : "Nginx serving server and static build", "url" : "/questions/52741038/nginx-serving-server-and-static-build" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97223"), "title" : "Excel Shortout key to format axis bound?", "url" : "/questions/52741031/excel-shortout-key-to-format-axis-bound" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97224"), "title" : "POST successful but the data doesn't appear in the controller", "url" : "/questions/52741029/post-successful-but-the-data-doesnt-appear-in-the-controller" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97225"), "title" : "Node - Nested For loop async behaviour", "url" : "/questions/52741028/node-nested-for-loop-async-behaviour" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97226"), "title" : "KSH Shell script not zipping up files", "url" : "/questions/52741027/ksh-shell-script-not-zipping-up-files" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97227"), "title" : "Property 'replaceReducer' does not exist on type 'Store<State>' After upgrading @ngrx/store", "url" : "/questions/52741023/property-replacereducer-does-not-exist-on-type-storestate-after-upgrading" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97228"), "title" : "passing more than 10 arguments to a shell script within gitlab yaml", "url" : "/questions/52741022/passing-more-than-10-arguments-to-a-shell-script-within-gitlab-yaml" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97229"), "title" : "Setting an environmental variable in a docker-compose.yml file is the same as setting that variable in a .env file?", "url" : "/questions/52741021/setting-an-environmental-variable-in-a-docker-compose-yml-file-is-the-same-as-se" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9722a"), "title" : "Pass list of topics from application yml to KafkaListener", "url" : "/questions/52741016/pass-list-of-topics-from-application-yml-to-kafkalistener" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9722b"), "title" : "Copy numbers at the beggining of each line to the end of line", "url" : "/questions/52741015/copy-numbers-at-the-beggining-of-each-line-to-the-end-of-line" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9722c"), "title" : "Pretty JSON retrieved from response in GoLang", "url" : "/questions/52741013/pretty-json-retrieved-from-response-in-golang" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9722d"), "title" : "Swift: Sorting Core Data child entities based on Date in each parent", "url" : "/questions/52741010/swift-sorting-core-data-child-entities-based-on-date-in-each-parent" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9722e"), "title" : "How to create Paypal developer account", "url" : "/questions/52741009/how-to-create-paypal-developer-account" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9722f"), "title" : "output of the program and explain why a and b showing different values", "url" : "/questions/52741008/output-of-the-program-and-explain-why-a-and-b-showing-different-values" }
Type "it" for more
From this output I can save everything to CSV, and it all works.
I apologize for the long post; I wanted to be as thorough as possible. If any other information is needed, please do not hesitate to ask. I will be following this question closely.
Thanks in advance for any help.