Repeating the same scraper code over different URLs - PullRequest
0 votes
/ 06 May 2019

Now I need to repeat the same code across several subdomains. This is my current code:


I have edited my code to better reflect my question:

for base in urls:
    urls = ["https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery","https://www.pedidosya.com.ar/restaurantes/buenos-aires/almagro/empanadas-delivery","https://www.pedidosya.com.ar/restaurantes/buenos-aires/palermo/empanadas-delivery","https://www.pedidosya.com.ar/restaurantes/buenos-aires/villa-crespo/empanadas-delivery","https://www.pedidosya.com.ar/restaurantes/buenos-aires/balvanera/empanadas-delivery",]
    page = 1
    restaurants = []

while True:
    soup = bs(requests.get(base + str(page)).text, "html.parser")
    page += 1
    sections = soup.find_all("section", attrs={"class": "restaurantData"})

    if not sections: break

    for section in sections:
        for elem in section.find_all("a", href=True, attrs={"class": "arrivalName"}):
            restaurants.append({"name": elem.text, "url": elem["href"],})

I need a .CSV with the following columns:

[(url, name of all restaurants in each url, url for each restaurant)]

1 Answer

0 votes
/ 06 May 2019

Sorry it took so long ...

I think this is what you're looking for:

from bs4 import BeautifulSoup as bs
import requests
import csv

urls = [
    "https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery",
    "https://www.pedidosya.com.ar/restaurantes/buenos-aires/almagro/empanadas-delivery",
    "https://www.pedidosya.com.ar/restaurantes/buenos-aires/palermo/empanadas-delivery",
    "https://www.pedidosya.com.ar/restaurantes/buenos-aires/villa-crespo/empanadas-delivery",
    "https://www.pedidosya.com.ar/restaurantes/buenos-aires/balvanera/empanadas-delivery",
]

# writing

# utf-8 so accented restaurant names ("Cümen-Cümen", "Doña Eulogia") survive the round trip
with open("output.csv", 'w', newline='', encoding='utf-8') as csvfile:
    writer = csv.writer(csvfile, delimiter=',')
    writer.writerow(['subdomain', 'name', 'url'])  # delete this line if you don't want the header

    for url in urls:
        base = url + "?bt=RESTAURANT&page="  # each listing paginates via this query string
        page = 1
        restaurants = []

        while True:
            soup = bs(requests.get(base + str(page)).text, "html.parser")
            sections = soup.find_all("section", attrs={"class": "restaurantData"})

            if not sections:  # an empty page means the listing has run out of results
                break

            for section in sections:
                for elem in section.find_all("a", href=True, attrs={"class": "arrivalName"}):
                    restaurants.append({"name": elem.text, "url": elem["href"]})
                    writer.writerow([base + str(page), elem.text, elem["href"]])
            page += 1

# reading

with open("output.csv", 'r', newline='', encoding='utf-8') as csvfile:
    reader = csv.reader(csvfile)

    for row in reader:
        # each row comes back as a list of strings, which you can do what you want with
        print(row)
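
As a side note, if you end up repeating this on yet more listing pages, you can factor the pagination loop into a generator and keep the per-URL code tiny. This is just a sketch, assuming every listing accepts the same ?bt=RESTAURANT&page= query string; iter_restaurants is only an illustrative name:

from bs4 import BeautifulSoup as bs
import requests

def iter_restaurants(listing_url):
    # yield (name, url) for every restaurant on every page of one listing
    page = 1
    while True:
        soup = bs(requests.get(f"{listing_url}?bt=RESTAURANT&page={page}").text,
                  "html.parser")
        sections = soup.find_all("section", attrs={"class": "restaurantData"})
        if not sections:  # an empty page means there are no more results
            break
        for section in sections:
            for elem in section.find_all("a", href=True, attrs={"class": "arrivalName"}):
                yield elem.text, elem["href"]
        page += 1

# usage:
# for name, link in iter_restaurants(urls[0]):
#     print(name, link)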

Either way, here is what output.csv looks like:

subdomain,name,url
https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1,Cümen-Cümen Empanadas Palermo,https://www.pedidosya.com.ar/restaurantes/buenos-aires/cumen-cumen-empanadas-palermo-menu
https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1,El Maitén Empanadas - Al horno o fritas,https://www.pedidosya.com.ar/restaurantes/buenos-aires/el-maiten-empanadas-al-horno-o-fritas-menu
https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1,Cümen-Cümen Empanadas - Barrio Norte,https://www.pedidosya.com.ar/restaurantes/buenos-aires/cumen-cumen-empanadas-barrio-norte-menu
https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1,La Carbonera,https://www.pedidosya.com.ar/restaurantes/buenos-aires/la-carbonera-menu
https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1,Tatú Empanadas Salteñas Palermo,https://www.pedidosya.com.ar/restaurantes/buenos-aires/tatu-empanadas-saltenas-palermo-menu
https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1,Morita Palermo,https://www.pedidosya.com.ar/restaurantes/buenos-aires/morita-palermo-menu
https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1,Doña Eulogia,https://www.pedidosya.com.ar/restaurantes/buenos-aires/dona-eulogia-menu
...

And here is the output when you read the CSV back with Python:

['subdomain', 'name', 'url']
['https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1', 'Cümen-Cümen Empanadas Palermo', 'https://www.pedidosya.com.ar/restaurantes/buenos-aires/cumen-cumen-empanadas-palermo-menu']
['https://www.pedidosya.com.ar/restaurantes/buenos-aires/recoleta/empanadas-delivery?bt=RESTAURANT&page=1', 'El Maitén Empanadas - Al horno o fritas', 'https://www.pedidosya.com.ar/restaurantes/buenos-aires/el-maiten-empanadas-al-horno-o-fritas-menu']
...

So when you read the CSV, you get the above: a list of rows, each itself a list of strings, which you can iterate over however you like.
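
And if you would rather pick columns by name instead of by index, csv.DictReader keys each row off the header row, so you get dicts instead of bare lists:

import csv

# DictReader uses the first row (subdomain,name,url) as the field names,
# so every following row comes back as a dict keyed by those names.
with open("output.csv", newline='', encoding='utf-8') as csvfile:
    for row in csv.DictReader(csvfile):
        print(row["subdomain"], "->", row["name"], row["url"])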

Good luck!
