Extracting file names from an HTML string with Python 2.7 - PullRequest
0 votes
/ 04 February 2020

I am parsing an HTML document with BeautifulSoup.

from bs4 import BeautifulSoup
import requests
import re
page = requests.get("http://www.crmpicco.co.uk/?page_id=82&lottoId=27")

soup = BeautifulSoup(page.content, 'html.parser')
entry_content = soup.find_all('div', class_='entry-content')

print(entry_content[1])

which gives me this string:

<div class="entry-content"><span class="red">Week 27: </span><br/><br/>Saturday 1st February 2020<br/>(in red)<br/><br/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/lotto_balls/17.gif" vspace="12" width="70"/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/21.gif" vspace="12" width="70"/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/31.gif" vspace="12" width="70"/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/47.gif" vspace="12" width="70"/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/lotto2010/images/balls/bonus43.gif" vspace="12" width="70"/><br/><br/>Wednesday 5th February 2020<br/><br/><strong><span class="red">RESULTS NOT AVAILABLE</span></strong><br/><br/><br/><br/><a href="?page_id=82">Click here</a> to see other results.<br/> </div>

I would like to get the file name of each of the gif paths in the string, and I think `findall` in the `re` module is the way to do it, but I'm not having much success.

What is the best way to do this? Can it be done in a single call with BeautifulSoup?

Answers [ 3 ]

0 votes
/ 04 February 2020

I can't find a div with entry-content on your page, but this should work. Change col-md-4 to entry-content.

# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import requests
import re


page = requests.get("http://www.crmpicco.co.uk/?page_id=82&lottoId=27")

soup = BeautifulSoup(page.content, 'html.parser')

for entry_content in soup.find_all('div', class_='col-md-4'):
    # Note: .img returns only the first <img> in each div
    print(entry_content.img['src'].rsplit('/', 1)[-1].split('.')[0])
zce
691505
gaiq
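To answer the original question more directly: `entry_content.img` only returns the *first* `<img>` in each div, while `find_all('img')` collects them all. A minimal sketch (using the markup from the question, trimmed to the relevant tags):

```python
from bs4 import BeautifulSoup
import posixpath

# The entry-content markup from the question, reduced to its <img> tags.
html = '''<div class="entry-content">
<img src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/lotto_balls/17.gif"/>
<img src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/21.gif"/>
<img src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/31.gif"/>
<img src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/47.gif"/>
<img src="http://www.crmpicco.co.uk/wp-content/themes/lotto2010/images/balls/bonus43.gif"/>
</div>'''

soup = BeautifulSoup(html, 'html.parser')
div = soup.find('div', class_='entry-content')

# find_all('img') collects every image in the div, not just the first;
# posixpath handles the forward-slash URL paths regardless of the local OS.
names = [posixpath.splitext(posixpath.basename(img['src']))[0]
         for img in div.find_all('img')]
print(names)  # ['17', '21', '31', '47', 'bonus43']
```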
0 votes
/ 04 February 2020

I recommend a different solution, which is compatible with both Python 2 and Python 3 and is well suited to extracting data from XML.

from simplified_scrapy.simplified_doc import SimplifiedDoc
html = '''
<div class="entry-content"><span class="red">Week 27: </span><br/><br/>Saturday 1st February 2020<br/>(in red)<br/><br/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/lotto_balls/17.gif" vspace="12" width="70"/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/21.gif" vspace="12" width="70"/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/31.gif" vspace="12" width="70"/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/47.gif" vspace="12" width="70"/> <img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/lotto2010/images/balls/bonus43.gif" vspace="12" width="70"/><br/><br/>Wednesday 5th February 2020<br/><br/><strong><span class="red">RESULTS NOT AVAILABLE</span></strong><br/><br/><br/><br/><a href="?page_id=82">Click here</a> to see other results.<br/> </div>
'''
doc = SimplifiedDoc(html)
div = doc.select('div.entry-content')
srcs = div.selects('img>src()')
print (srcs)
print ([src.rsplit('/', 1)[-1].split('.')[0] for src in srcs])

Result:

['http://www.crmpicco.co.uk/wp-content/themes/2010/images/lotto_balls/17.gif', 'http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/21.gif', 'http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/31.gif', 'http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/47.gif', 'http://www.crmpicco.co.uk/wp-content/themes/lotto2010/images/balls/bonus43.gif']
['17', '21', '31', '47', 'bonus43']

Here are more examples: https://github.com/yiyedata/simplified-scrapy-demo/blob/master/doc_examples/
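The `rsplit('/', 1)` / `split('.')` chain works for these URLs, but it breaks if a `src` ever carries a query string. A stdlib-only helper (a sketch, not part of this answer) is more robust:

```python
import posixpath
from urllib.parse import urlparse  # on Python 2: from urlparse import urlparse

def gif_name(url):
    # urlparse strips any ?query or #fragment before we take the basename,
    # and posixpath matches the forward slashes used in URLs on any OS.
    path = urlparse(url).path
    return posixpath.splitext(posixpath.basename(path))[0]

print(gif_name('http://www.crmpicco.co.uk/wp-content/themes/lotto2010/images/balls/bonus43.gif?v=2'))
# bonus43
```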

0 votes
/ 04 February 2020

Instead of regular expressions, I would recommend the HTMLParser class (python2 / python3) from the standard library. It has a handle_starttag method that is called for each opening tag.

>>> source = "\n".join(entry_content) # I assume "entry_content" is a list of div elements.
>>>
>>> try:
...     from HTMLParser import HTMLParser # python 2
... except ImportError:
...     from html.parser import HTMLParser
...
>>> class SrcParser(HTMLParser):
...     def __init__(self, *args, **kwargs):
...         self.links = []
...         self._basename = kwargs.pop('only_basename', False)
...         # HTMLParser is an old-style class on Python 2, so call __init__ directly
...         HTMLParser.__init__(self, *args, **kwargs)
...
...     def handle_starttag(self, tag, attrs):
...         for attr, val in attrs:
...             if attr == 'src' and val.endswith("gif"):
...                 if self._basename:
...                     import os.path
...                     val = os.path.splitext(os.path.basename(val))[0]
...                 self.links.append(val)
...
>>> source_parser = SrcParser()
>>> source_parser.feed(source)
>>> print(*source_parser.links, sep='\n')
http://www.crmpicco.co.uk/wp-content/themes/2010/images/lotto_balls/17.gif
http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/21.gif
http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/31.gif
http://www.crmpicco.co.uk/wp-content/themes/2010/images/balls/47.gif
http://www.crmpicco.co.uk/wp-content/themes/lotto2010/images/balls/bonus43.gif
>>>
>>> source_parser = SrcParser(only_basename=True)
>>> source_parser.feed(source)
>>> print(*source_parser.links, sep='\n')
17
21
31
47
bonus43
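Since the question asks specifically about `re.findall`: it can work on this particular string, with the usual caveat that regexes are brittle against general HTML (attribute order, quoting, and whitespace all vary). A sketch against two of the `<img>` tags from the question:

```python
import re

html = ('<img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/2010/images/lotto_balls/17.gif" vspace="12" width="70"/> '
        '<img height="50" src="http://www.crmpicco.co.uk/wp-content/themes/lotto2010/images/balls/bonus43.gif" vspace="12" width="70"/>')

# Capture the final path segment (without extension) of each double-quoted .gif src.
names = re.findall(r'src="[^"]*/([^"/]+)\.gif"', html)
print(names)  # ['17', 'bonus43']
```

This only handles `src` values in double quotes, which is why the HTMLParser approach above is the safer choice for anything beyond a one-off string.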