I'm analyzing GitHub READMEs and find that the following works really well:
import re
import lxml.html

def strip_markdown(x):
    # Replace [text](url) links with just the link text; [^\]]+ instead of
    # a greedy .+ keeps multiple links on one line from being merged
    links_sub = re.sub(r'\[([^\]]+)\]\([^)]+\)', r'\1', x)
    # Strip **bold** markers, keeping the inner text
    bold_sub = re.sub(r'\*\*([^*]+)\*\*', r'\1', links_sub)
    # Strip *emphasis* markers (after bold, so ** is already gone)
    emph_sub = re.sub(r'\*([^*]+)\*', r'\1', bold_sub)
    return emph_sub

def strip_html(x):
    # lxml parses the fragment; text_content() returns the text with all tags removed
    return lxml.html.fromstring(x).text_content() if x else ''
And then:
readme = """<img src="https://raw.githubusercontent.com/kootenpv/sky/master/resources/skylogo.png" />
sky is a web scraping framework, implemented with the latest python versions in mind (3.4+).
It uses the asynchronous `asyncio` framework, as well as many popular modules
and extensions.
Most importantly, it aims for **next generation** web crawling where machine intelligence
is used to speed up the development/maintainance/reliability of crawling.
It mainly does this by considering the user to be interested in content
from *domains*, not just a collection of *single pages*
([templating approach](#templating-approach))."""
strip_markdown(strip_html(readme))
This strips all the Markdown and HTML correctly. Note the order: strip_html runs first, so text_content() drops the <img> tag while leaving the Markdown syntax intact as plain text, which the regexes then clean up.
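One thing the regexes above don't touch is inline code spans, so the backticks around `asyncio` survive. A minimal sketch extending the same regex approach, assuming only single-backtick spans (strip_inline_code is my own name, not part of the original snippet):

def strip_inline_code(x):
    # Drop the backticks around `code` spans, keeping the contents;
    # assumes no multi-backtick or nested spans
    return re.sub(r'`([^`]+)`', r'\1', x)

strip_inline_code(strip_markdown(strip_html(readme)))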