Scraping each row from an HTML table
0 votes
/ June 11, 2019

I'm scraping an HTML table from a web page, but it just pulls the contents of the first row over and over rather than the unique values in each row. The positional indices (tds[0]-tds[5]) seem to apply only to the first row; I just don't know how to tell the code to move on to each new row.

import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'}

url = 'https://www.fdic.gov/bank/individual/failed/banklist.html'
r = requests.get(url, headers=headers)

soup = BeautifulSoup(r.text, 'html.parser')

mylist5 = []
for tr in soup.find_all('table'):
    tds = tr.findAll('td')  # every <td> in the table, flattened into one list
    for x in tds:
        # x is never used below, so the fixed indices tds[0]-tds[5]
        # always point at the first row's cells
        output5 = ("Bank: %s, City: %s, State: %s, Closing Date: %s, Cert #: %s, Acquiring Inst: %s \r\n" % (tds[0].text, tds[1].text, tds[2].text, tds[5].text, tds[3].text, tds[4].text))
        mylist5.append(output5)
        print(output5)
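To see why every line repeats: the inner loop runs once per <td>, but the loop variable x is never used, so the fixed indices keep reading the same cells. The same pattern in miniature (a toy list standing in for tds):

cells = ['first', 'second', 'third']
for x in cells:
    print(cells[0])  # prints 'first' three times; the index never advances with x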

3 Answers

1 vote
/ June 11, 2019

I changed your code a little: I skip the first row (the header) and then iterate over the rows (tr) rather than just the td elements:

import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'}

url = 'https://www.fdic.gov/bank/individual/failed/banklist.html'
r = requests.get(url, headers=headers)

soup = BeautifulSoup(r.text, 'html.parser')

mylist5 = []
for table in soup.find_all('table'):
    rows = table.find_all('tr')[1:]  # skip the header row
    for row in rows:
        cells = row.find_all('td')  # the <td> cells of this row only
        output5 = ("Bank: %s, City: %s, State: %s, Closing Date: %s, Cert #: %s, Acquiring Inst: %s \r\n" % (cells[0].text, cells[1].text, cells[2].text, cells[5].text, cells[3].text, cells[4].text))
        mylist5.append(output5)
        print(output5)

Prints:

Bank: The Enloe State Bank, City: Cooper, State: TX, Closing Date: May 31, 2019, Cert #: 10716, Acquiring Inst: Legend Bank, N. A. 

Bank: Washington Federal Bank for Savings, City: Chicago, State: IL, Closing Date: December 15, 2017, Cert #: 30570, Acquiring Inst: Royal Savings Bank 

Bank: The Farmers and Merchants State Bank of Argonia, City: Argonia, State: KS, Closing Date: October 13, 2017, Cert #: 17719, Acquiring Inst: Conway Bank 

Bank: Fayette County Bank, City: Saint Elmo, State: IL, Closing Date: May 26, 2017, Cert #: 1802, Acquiring Inst: United Fidelity Bank, fsb 

Bank: Guaranty Bank, (d/b/a BestBank in Georgia & Michigan) , City: Milwaukee, State: WI, Closing Date: May 5, 2017, Cert #: 30003, Acquiring Inst: First-Citizens Bank & Trust Company 

Bank: First NBC Bank, City: New Orleans, State: LA, Closing Date: April 28, 2017, Cert #: 58302, Acquiring Inst: Whitney Bank 

Bank: Proficio Bank, City: Cottonwood Heights, State: UT, Closing Date: March 3, 2017, Cert #: 35495, Acquiring Inst: Cache Valley Bank 

...etc.
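As a follow-up, a hedged sketch of writing the scraped rows to a CSV file instead of formatted strings (the csv module usage and the filename failed_banks.csv are my additions, not part of the original answer):

import csv
import requests
from bs4 import BeautifulSoup

r = requests.get('https://www.fdic.gov/bank/individual/failed/banklist.html',
                 headers={'User-Agent': 'Mozilla/5.0'})
soup = BeautifulSoup(r.text, 'html.parser')

with open('failed_banks.csv', 'w', newline='') as f:  # hypothetical output file
    writer = csv.writer(f)
    for table in soup.find_all('table'):
        writer.writerow([th.text for th in table.find_all('th')])  # header row from the <th> cells
        for row in table.find_all('tr')[1:]:  # data rows, header skipped
            writer.writerow([td.text for td in row.find_all('td')])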

0 votes
/ June 11, 2019

Personally, I'd use pandas here:

import pandas as pd

# read_html returns a list of DataFrames, one per <table> on the page; take the first
table = pd.read_html('https://www.fdic.gov/bank/individual/failed/banklist.html')[0]
print(table)
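If you then want to persist or slice the result, a minimal follow-up sketch (the filename is hypothetical, and the column names come from the table's own header, visible in the answer below; note that read_html needs lxml or html5lib installed):

# save the scraped table to disk (hypothetical filename)
table.to_csv('failed_banks.csv', index=False)

# select columns by their header names
print(table[['Bank Name', 'Closing Date']].head())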
0 votes
/ June 11, 2019

You can use find_all with a list comprehension:

import requests
from bs4 import BeautifulSoup as soup

d = soup(requests.get('https://www.fdic.gov/bank/individual/failed/banklist.html').text, 'html.parser')
# header texts from the <th> cells, then one list of cell texts per data row (header <tr> skipped)
h, data = [i.text for i in d.find_all('th')], [[i.text for i in b.find_all('td')] for b in d.find_all('tr')[1:]]
print(h)
print(data)

Output (truncated due to SO's character limit):

['Bank Name', 'City', 'ST', 'CERT', 'Acquiring Institution', 'Closing Date', 'Updated Date']
[['The Enloe State Bank', 'Cooper', 'TX', '10716', 'Legend Bank, N. A.', 'May 31, 2019', 'June 5, 2019'], ['Washington Federal Bank for Savings', 'Chicago', 'IL', '30570', 'Royal Savings Bank', 'December 15, 2017', 'February 1, 2019'], ['The Farmers and Merchants State Bank of Argonia', 'Argonia', 'KS', '17719', 'Conway Bank', 'October 13, 2017', 'February 21, 2018'], ['Fayette County Bank', 'Saint Elmo', 'IL', '1802', 'United Fidelity Bank, fsb', 'May 26, 2017', 'January 29, 2019'], ['Guaranty Bank, (d/b/a BestBank in Georgia & Michigan) ', 'Milwaukee', 'WI', '30003', 'First-Citizens Bank & Trust Company', 'May 5, 2017', 'March 22, 2018'], ['First NBC Bank', 'New Orleans', 'LA', '58302', 'Whitney Bank', 'April 28, 2017', 'January 29, 2019'], ['Proficio Bank', 'Cottonwood Heights', 'UT', '35495', 'Cache Valley Bank', 'March 3, 2017', 'January 29, 2019'], ]
...
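To make the rows easier to work with, you can zip each row against the header list h to get dictionaries keyed by column name (a small sketch building on the code above):

records = [dict(zip(h, row)) for row in data]
print(records[0]['Bank Name'])  # -> 'The Enloe State Bank'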