My BeautifulSoup web-scraping code doesn't get past the first page - PullRequest
0 votes
/ 22 April 2019

This doesn't seem to get past the first page. What's wrong? Also, if the search word appears in a link, it doesn't give the correct occurrences; it will display 5 outputs with 5 as the occurrence count.

import requests
from bs4 import BeautifulSoup

for i in range (1,5):

    url = 'https://www.nairaland.com/search/ipob/0/0/0/{}'.format(i)
    the_word = 'is' 
    r = requests.get(url, allow_redirects=False)
    soup = BeautifulSoup(r.content, 'lxml')
    words = soup.find(text=lambda text: text and the_word in text) 
    print(words) 
    count =  len(words)
    print('\nUrl: {}\ncontains {} occurrences of word: {}'.format(url, count, the_word))

Answers [ 4 ]

1 vote
/ 22 April 2019

If you want to go through the first 6 pages, change the range in the loop:

for i in range (6):   # the first page is addressed at index `0`

or:

for i in range (0,6):

instead of:

for i in range (1,5):    # this will start from the second page, since the second page is indexed at `1`
0 votes
/ 22 April 2019

Also, the search word has its own class name, so you can simply count those elements. The code below correctly returns 0 when the word is not found on the page. You can use this approach in your loop.

import requests 
from bs4 import BeautifulSoup as bs

r = requests.get('https://www.nairaland.com/search?q=afonja&board=0&topicsonly=2')
soup = bs(r.content, 'lxml')
occurrences = len(soup.select('.highlight'))
print(occurrences)

import requests 
from bs4 import BeautifulSoup as bs

for i in range(9):
    r = requests.get('https://www.nairaland.com/search/afonja/0/0/0/{}'.format(i))
    soup = bs(r.content, 'lxml')
    occurrences = len(soup.select('.highlight'))
    print(occurrences)
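The counting logic above can be checked offline. The snippet below uses a made-up HTML fragment standing in for the real search-results markup, which (per this answer) wraps each hit in a `span` with class `highlight`:

```python
from bs4 import BeautifulSoup as bs

# Hypothetical fragment mimicking the search-result markup
html = '<p><span class="highlight">afonja</span> said <span class="highlight">afonja</span></p>'

# html.parser is the stdlib parser, so no lxml install is needed here
soup = bs(html, 'html.parser')

# Each highlighted hit is one element matching the .highlight CSS selector
occurrences = len(soup.select('.highlight'))
print(occurrences)  # 2
```

The same `len(soup.select('.highlight'))` expression then works unchanged on the live page content.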
0 votes
/ 22 April 2019

Try:

import requests
from bs4 import BeautifulSoup 

for i in range(6):
    url = 'https://www.nairaland.com/search/ipob/0/0/0/{}'.format(i)
    the_word = 'afonja' 
    r = requests.get(url, allow_redirects=False)
    soup = BeautifulSoup(r.content, 'lxml')
    words = soup.find(text=lambda text: text and the_word in text) 
    print(words)
    count = 0
    if words:
        count = len(words)
    print('\nUrl: {}\ncontains {} occurrences of word: {}'.format(url, count, the_word))
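Note that `soup.find(text=...)` returns only the first matching string, so `len(words)` is that string's character length, not the number of occurrences. If the goal is to count how often a word appears in the page text, a standard-library sketch with `re` (run here on made-up sample text rather than a live page) looks like:

```python
import re

# Made-up sample text standing in for soup.get_text()
text = "IPOB is a group; ipob members say IPOB often."
the_word = 'ipob'

# \b restricts matches to whole words; re.IGNORECASE matches any casing
pattern = r'\b{}\b'.format(re.escape(the_word))
count = len(re.findall(pattern, text, flags=re.IGNORECASE))
print(count)  # 3
```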

EDIT after the new specifications.

Assuming the word to count is the same one used in the URL, you may notice that the word is highlighted in the page, and is marked by span class=highlight in the HTML.

So you can use this code:

import requests
from bs4 import BeautifulSoup 

for i in range(6):
    url = 'https://www.nairaland.com/search/afonja/0/0/0/{}'.format(i)
    the_word = 'afonja' 
    r = requests.get(url, allow_redirects=False)
    soup = BeautifulSoup(r.content, 'lxml')
    count = len(soup.find_all('span', {'class':'highlight'})) 
    print('\nUrl: {}\ncontains {} occurrences of word: {}'.format(url, count, the_word))

and you will get:

Url: https://www.nairaland.com/search/afonja/0/0/0/0
contains 30 occurrences of word: afonja

Url: https://www.nairaland.com/search/afonja/0/0/0/1
contains 31 occurrences of word: afonja

Url: https://www.nairaland.com/search/afonja/0/0/0/2
contains 36 occurrences of word: afonja

Url: https://www.nairaland.com/search/afonja/0/0/0/3
contains 30 occurrences of word: afonja

Url: https://www.nairaland.com/search/afonja/0/0/0/4
contains 45 occurrences of word: afonja

Url: https://www.nairaland.com/search/afonja/0/0/0/5
contains 50 occurrences of word: afonja
0 votes
/ 22 April 2019

This works fine for me:

import requests
from bs4 import BeautifulSoup

if __name__ == "__main__":

    # use range(0, 6) to go from the first page (index 0) to the sixth (index 5),
    # or range(0, 5) to fetch five pages in total (indices 0 to 4)
    for i in range(0, 6):

        url = 'https://www.nairaland.com/search/ipob/0/0/0/{}'.format(i)
        print(url, "url")
        the_word = 'is'
        r = requests.get(url, allow_redirects=False)
        soup = BeautifulSoup(r.content, 'lxml')
        words = soup.find(text=lambda text: text and the_word in text)
        print(words)
        count =  len(words)
        print('\nUrl: {}\ncontains {} occurrences of word: {}'.format(url, count, the_word))

This is the output:

https://www.nairaland.com/search/ipob/0/0/0/0 url
 is somewhere in Europe sending semi nude video on the internet.Are you proud of such groups with such leader?

Url: https://www.nairaland.com/search/ipob/0/0/0/0
contains 110 occurrences of word: is
https://www.nairaland.com/search/ipob/0/0/0/1 url
Notre is a French word; means 'Our"...and Dame means "Lady" So Notre Dame means Our Lady.

Url: https://www.nairaland.com/search/ipob/0/0/0/1
contains 89 occurrences of word: is
https://www.nairaland.com/search/ipob/0/0/0/2 url
How does all this uselessness Help Foolish 

Url: https://www.nairaland.com/search/ipob/0/0/0/2
contains 43 occurrences of word: is
https://www.nairaland.com/search/ipob/0/0/0/3 url
Dumb fuckers everywhere. I thought I was finally going to meet someone that has juju and can show me. Instead I got a hopeless broke buffoon that loves boasting online. Nairaland I apologize on the behalf of this waste of space and time. He is not even worth half of the data I have spent writing this post. 

Url: https://www.nairaland.com/search/ipob/0/0/0/3
contains 308 occurrences of word: is
https://www.nairaland.com/search/ipob/0/0/0/4 url
People like FFK, Reno, Fayose etc have not been touched, it is an unknown prophet that hasn't said anything against the FG that you expect the FG to waste its time on. 

Url: https://www.nairaland.com/search/ipob/0/0/0/4
contains 168 occurrences of word: is
https://www.nairaland.com/search/ipob/0/0/0/5 url
 children send them to prison

Url: https://www.nairaland.com/search/ipob/0/0/0/5
contains 29 occurrences of word: is

Process finished with exit code 0