
Beautiful Soup (HTML parser)

From Wikipedia, the free encyclopedia
Beautiful Soup
Original author(s): Leonard Richardson
Initial release: 2004
Stable release: 4.12.3[1] / 17 January 2024
Repository:
Written in: Python
Platform: Python
Type: HTML parser library, web scraping
License:
Website: www.crummy.com/software/BeautifulSoup/

Beautiful Soup is a Python package for parsing HTML and XML documents, including those with malformed markup. It creates a parse tree for documents that can be used to extract data from HTML,[3] which is useful for web scraping.[2][4]
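As a minimal illustrative sketch (the HTML fragment below is made up for demonstration), markup with unclosed tags is still turned into a navigable parse tree:

#!/usr/bin/env python3
from bs4 import BeautifulSoup

# Malformed HTML: the <p> and <b> tags are never closed
broken_html = "<html><body><p>An unclosed paragraph with a <b>bold word"
soup = BeautifulSoup(broken_html, "html.parser")

# The open tags are closed when the document ends, so prettify()
# emits well-formed markup and the tree can be navigated as usual
print(soup.prettify())
print(soup.p.b.string)   # prints "bold word"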

History

Beautiful Soup was started in 2004 by Leonard Richardson.[citation needed] It takes its name from the poem "Beautiful Soup" from Alice's Adventures in Wonderland[5] and is a reference to the term "tag soup", meaning poorly structured HTML code.[6] Richardson continues to contribute to the project,[7] which is additionally supported by paid open-source maintainers from the company Tidelift.[8]

Versions

Beautiful Soup 3 was the official release line of Beautiful Soup from May 2006 to March 2012. The current release line is Beautiful Soup 4.x, which can be installed with pip install beautifulsoup4.
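Note that the PyPI package name differs from the import name: the library is installed as beautifulsoup4 but imported as bs4. A minimal check (the sample HTML here is only illustrative):

#!/usr/bin/env python3
# Installed as beautifulsoup4, imported as bs4
from bs4 import BeautifulSoup
print(BeautifulSoup("<p>hello</p>", "html.parser").p.string)   # prints "hello"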

Python 2.7 support was retired in 2021; release 4.9.3 was the last version to support Python 2.7.[9]

Usage

Beautiful Soup represents parsed data as a tree that can be searched and iterated over with ordinary Python loops.[10]
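A minimal sketch of searching and iterating over the parse tree (the HTML string, tag names, and class name below are illustrative, not taken from a real page):

#!/usr/bin/env python3
from bs4 import BeautifulSoup

html = "<html><body><p class='lead'>First</p><p>Second</p></body></html>"
soup = BeautifulSoup(html, "html.parser")

# Search the tree for every <p> tag and loop over the matches
for paragraph in soup.find_all("p"):
    print(paragraph.get_text())

# Tags can also be looked up by name and attribute
lead = soup.find("p", class_="lead")
print(lead["class"])   # ['lead']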

Code example

The example below uses the Python standard library's urllib[11] to load Wikipedia's main page, then uses Beautiful Soup to parse the document and search for all links within.

#!/usr/bin/env python3
# Anchor extraction from HTML document
from bs4 import BeautifulSoup
from urllib.request import urlopen
with urlopen('https://en.wikipedia.org/wiki/Main_Page') as response:
    soup = BeautifulSoup(response, 'html.parser')
    for anchor in soup.find_all('a'):
        print(anchor.get('href', '/'))

See also

References

  1. ^ "Changelog". Retrieved 18 January 2024.
  2. ^ a b "Beautiful Soup website". Retrieved 18 April 2012. Beautiful Soup is licensed under the same terms as Python itself
  3. ^ Hajba, Gábor László (2018), Hajba, Gábor László (ed.), "Using Beautiful Soup", Website Scraping with Python: Using BeautifulSoup and Scrapy, Apress, pp. 41–96, doi:10.1007/978-1-4842-3925-4_3, ISBN 978-1-4842-3925-4
  4. ^ Python, Real. "Beautiful Soup: Build a Web Scraper With Python – Real Python". realpython.com. Retrieved 2023-06-01.
  5. ^ makcorps (2022-12-13). "BeautifulSoup tutorial: Let's Scrape Web Pages with Python". Retrieved 2024-01-24.
  6. ^ "Python Web Scraping". Udacity. 2021-02-11. Retrieved 2024-01-24.
  7. ^ "Code : Leonard Richardson". Launchpad. Retrieved 2020-09-19.
  8. ^ Tidelift. "beautifulsoup4 | pypi via the Tidelift Subscription". tidelift.com. Retrieved 2020-09-19.
  9. ^ Richardson, Leonard (7 Sep 2021). "Beautiful Soup 4.10.0". beautifulsoup. Google Groups. Retrieved 27 September 2022.
  10. ^ "How To Scrape Web Pages with Beautiful Soup and Python 3 | DigitalOcean". www.digitalocean.com. Retrieved 2023-06-01.
  11. ^ Python, Real. "Python's urllib.request for HTTP Requests – Real Python". realpython.com. Retrieved 2023-06-01.