pattern.web
The pattern.web module has tools for online data mining: asynchronous requests, a uniform API for web services (Google, Bing, Twitter, Facebook, Wikipedia, Wiktionary, Flickr, RSS), a HTML DOM parser, HTML tag stripping functions, a web crawler, webmail, caching, Unicode support.
It can be used by itself or with other pattern modules: web | db | en | search | vector | graph.
- URLs
- Asynchronous requests
- Search engine + web services (google, bing, twitter, facebook, wikipedia, flickr)
- Web sort
- HTML to plaintext
- HTML DOM parser
- PDF parser
- Crawler
- Locale
- Cache
The `URL` object is a subclass of Python's `urllib2.Request` that can be used to connect to a web address. The `URL.download()` method can be used to retrieve the content (e.g., HTML source code). The constructor's `method` parameter defines how query data is encoded:
- `GET`: query data is encoded in the URL string (usually for retrieving data).
- `POST`: query data is encoded in the message body (for posting data).
url = URL(string='', method=GET, query={})
url.string      # u'http://user:pw@domain.com:30/path/page?p=1#anchor'
url.parts       # Dictionary of attributes:
url.protocol    # u'http'
url.username    # u'user'
url.password    # u'pw'
url.domain      # u'domain.com'
url.port        # 30
url.path        # [u'path']
url.page        # u'page'
url.query       # {u'p': 1}
url.querystring # u'p=1'
url.anchor      # u'anchor'
url.exists      # False if URL.open() raises a HTTP404NotFound.
url.redirect    # Actual URL after redirection, or None.
url.headers     # Dictionary of HTTP response headers.
url.mimetype    # Document MIME-type.
url.open(timeout=10, proxy=None)
url.download(timeout=10, cached=True, throttle=0, proxy=None, unicode=False)
url.copy()
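For example, a quick sketch that parses the illustrative URL from the overview above and inspects its parts:
>>> from pattern.web import URL
>>>
>>> url = URL('http://user:pw@domain.com:30/path/page?p=1#anchor')
>>> print url.domain       # domain.com
>>> print url.port         # 30
>>> print url.querystring  # p=1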
- `URL()` expects a string that starts with a valid protocol (e.g., `http://`).
- `URL.open()` returns a connection from which data can be retrieved with `connection.read()`.
- `URL.download()` caches and returns the retrieved data. It raises a `URLTimeout` if the download time exceeds the given `timeout`. It sleeps for `throttle` seconds after the download is complete. A proxy server can be given as a `(host, protocol)`-tuple, e.g., `('proxy.com', 'https')`. With `unicode=True`, it returns the data as a Unicode string. By default it is `False` because the data can be binary (e.g., JPEG, ZIP), but `unicode=True` is advised for HTML.
The example below downloads an image. The `extension()` helper function parses the file extension from a file name:
>>> from pattern.web import URL, extension
>>>
>>> url = URL('http://www.clips.ua.ac.be/media/pattern_schema.gif')
>>> f = open('test' + extension(url.page), 'wb') # save as test.gif
>>> f.write(url.download())
>>> f.close()
The `download()` function takes a URL string, calls `URL.download()` and returns the retrieved data. It takes the same optional parameters as `URL.download()`. This saves you a line of code.
>>> from pattern.web import download
>>> html = download('http://www.clips.ua.ac.be/', unicode=True)
The `URL.mimetype` property can be used to check the type of document at the given URL. This is more reliable than sniffing the filename extension (which may be omitted).
>>> from pattern.web import URL, MIMETYPE_IMAGE
>>>
>>> url = URL('http://www.clips.ua.ac.be/media/pattern_schema.gif')
>>> print url.mimetype in MIMETYPE_IMAGE
True
| *Global* | *Value* |
| --- | --- |
| `MIMETYPE_WEBPAGE` | `['text/html']` |
| `MIMETYPE_STYLESHEET` | `['text/css']` |
| `MIMETYPE_PLAINTEXT` | `['text/plain']` |
| `MIMETYPE_PDF` | `['application/pdf']` |
| `MIMETYPE_NEWSFEED` | `['application/rss+xml', 'application/atom+xml']` |
| `MIMETYPE_IMAGE` | `['image/gif', 'image/jpeg', 'image/png']` |
| `MIMETYPE_AUDIO` | `['audio/mpeg', 'audio/mp4', 'audio/x-wav']` |
| `MIMETYPE_VIDEO` | `['video/mpeg', 'video/mp4', 'video/avi', 'video/quicktime']` |
| `MIMETYPE_ARCHIVE` | `['application/x-tar', 'application/zip']` |
| `MIMETYPE_SCRIPT` | `['application/javascript']` |
The `URL.open()` and `URL.download()` methods raise a `URLError` if an error occurs (e.g., no internet connection, server is down). `URLError` has a number of subclasses:
| *Exception* | *Description* |
| --- | --- |
| `URLError` | URL has errors (e.g., a missing `t` in `htp://`). |
| `URLTimeout` | URL takes too long to load. |
| `HTTPError` | URL causes an error on the contacted server. |
| `HTTP301Redirect` | URL causes too many redirects. |
| `HTTP400BadRequest` | URL contains an invalid request. |
| `HTTP401Authentication` | URL requires a login and a password. |
| `HTTP403Forbidden` | URL is not accessible (check user-agent). |
| `HTTP404NotFound` | URL doesn't exist. |
| `HTTP500InternalServerError` | URL causes an error (bug?) on the server. |
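A minimal sketch of handling these exceptions around a download; the URL is made up, and it assumes the exception classes can be imported directly from pattern.web:
>>> from pattern.web import URL, URLError, URLTimeout, HTTP404NotFound
>>>
>>> url = URL('http://www.clips.ua.ac.be/this-page-does-not-exist')
>>> try:
>>>     html = url.download(timeout=10, unicode=True)
>>> except HTTP404NotFound:
>>>     print 'page not found'
>>> except URLTimeout:
>>>     print 'server took too long'
>>> except URLError as e:
>>>     print 'download failed:', e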
The `URL.open()` and `URL.download()` methods have two optional parameters, `user_agent` and `referrer`, which can be used to identify the application accessing the web. Some websites include code to block out any application except browsers. By setting a `user_agent` you can make the application appear as a browser. This is called spoofing and it is not encouraged, but sometimes necessary.
For example, to pose as a Firefox browser:
>>> URL('http://www.clips.ua.ac.be').download(user_agent='Mozilla/5.0')
The `find_urls()` function can be used to parse URLs from a text string. It retrieves a list of links starting with `http://`, `https://`, `www.` and domain names ending with `.com`, `.org`, `.net`. It will detect and strip leading punctuation (open parens) and trailing punctuation (period, comma, close parens). Similarly, the `find_email()` function can be used to parse e-mail addresses from a string.
>>> from pattern.web import find_urls
>>> print find_urls('Visit our website (www.clips.ua.ac.be)', unique=True)
['www.clips.ua.ac.be']
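Similarly, a minimal sketch for `find_email()`; the address is made up:
>>> from pattern.web import find_email
>>> print find_email('Questions? Mail info@example.com and we will reply.')  # e.g., ['info@example.com']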
The `asynchronous()` function can be used to execute a function "in the background" (i.e., threaded). It takes the function, its arguments and optional keyword arguments. It returns an `AsynchronousRequest` object that contains the function's return value (when done). The main program does not halt in the meantime.
request = asynchronous(function, *args, **kwargs)
request.done # True when the function is done.
request.elapsed # Running time, in seconds.
request.value # Function return value when done (or None).
request.error # Function Exception (or None).
request.now() # Waits for function and returns its value.
The example below executes a Google query without halting the main program. Instead, it displays a "busy" message (e.g., a progress bar updated in the application's event loop) until `request.done`.
>>> from pattern.web import asynchronous, time, Google
>>>
>>> request = asynchronous(Google().search, 'holy grail', timeout=4)
>>> while not request.done:
>>>     time.sleep(0.1)
>>>     print 'busy...'
>>> print request.value
There is no way to stop a thread. You are responsible for ensuring that the given function doesn't hang.
The `SearchEngine` object has a number of subclasses that can be used to query different web services (e.g., Google, Wikipedia). `SearchEngine.search()` returns a list of `Result` objects for a given query string – similar to a search field and a results page in a browser.
engine = SearchEngine(license=None, throttle=1.0, language=None)
engine.license # Service license key.
engine.throttle # Time between requests (being nice to server).
engine.language # Restriction for Result.language (e.g., 'en').
engine.search(query,
    type = SEARCH,   # SEARCH | IMAGE | NEWS
    start = 1,       # Starting page.
    count = 10,      # Results per page.
    size = None,     # Image size: TINY | SMALL | MEDIUM | LARGE
    cached = True)   # Cache locally?
Note: `SearchEngine.search()` takes the same optional parameters as `URL.download()`.
`SearchEngine` is subclassed by `Google`, `Yahoo`, `Bing`, `DuckDuckGo`, `Twitter`, `Facebook`, `Wikipedia`, `Wiktionary`, `Wikia`, `DBPedia`, `Flickr` and `Newsfeed`. The constructors take the same parameters:
engine = Google(license=None, throttle=0.5, language=None)
engine = Bing(license=None, throttle=0.5, language=None)
engine = Twitter(license=None, throttle=0.5, language=None)
engine = Facebook(license=None, throttle=1.0, language='en')
engine = Wikipedia(license=None, throttle=5.0, language=None)
engine = Flickr(license=None, throttle=5.0, language=None)
Each search engine has different settings for the `search()` method. For example, `Twitter.search()` returns up to 3,000 results for a given query (30 queries with 100 results each, or 300 queries with 10 results each). It has a limit of 150 queries per 15 minutes. Each call to `search()` counts as one query.
| *Engine* | *type* | *start* | *count* | *limit* | *throttle* |
| --- | --- | --- | --- | --- | --- |
| `Google` | `SEARCH`¹ | 1-100/`count` | 1-10 | paid | 0.5 |
| `Bing` | `SEARCH` \| `NEWS` \| `IMAGE`¹ ² | 1-1000/`count` | 1-50 | paid | 0.5 |
| `Yahoo` | `SEARCH` \| `NEWS` \| `IMAGE`¹ ³ | 1-1000/`count` | 1-50 | paid | 0.5 |
| `DuckDuckGo` | `SEARCH` | 1 | - | - | 0.5 |
| `Twitter` | `SEARCH` | 1-3000/`count` | 1-100 | 600/hour | 0.5 |
| `Facebook` | `SEARCH` \| `NEWS` | 1 | 1-100 | 500/hour | 1.0 |
| `Wikipedia` | `SEARCH` | 1 | 1 | - | 5.0 |
| `Wiktionary` | `SEARCH` | 1 | 1 | - | 5.0 |
| `Wikia` | `SEARCH` | 1 | 1 | - | 5.0 |
| `DBPedia` | `SPARQL` | 1+ | 1-1000 | 10/sec | 1.0 |
| `Flickr` | `IMAGE` | 1+ | 1-500 | - | 5.0 |
| `Newsfeed` | `NEWS` | 1 | 1+ | ? | 1.0 |
¹ `Google`, `Bing` and `Yahoo` are paid services – see further how to obtain a license key.
² `Bing.search(type=NEWS)` has a `count` of 1-15.
³ `Yahoo.search(type=IMAGE)` has a `count` of 1-35.
**Web service license key**

Some services require a license key. They may work without one, but this implies that you share a public license key (and query limit) with other users of the pattern.web module. If the query limit is exceeded, `SearchEngine.search()` raises a `SearchEngineLimitError`.
- `Google` is a paid service ($1 for 200 queries), with 100 free queries per day. When you obtain a license key (follow the link below), activate "Custom Search API" and "Translate API" under "Services" and look up the key under "API Access".
- `Bing` is a paid service ($1 for 500 queries), with 5,000 free queries per month.
- `Yahoo` is a paid service ($1 for 1,250 queries) that requires an OAuth key + secret, which can be passed as a tuple: `Yahoo(license=(key, secret))`.
Obtain a license key: Google, Bing, Yahoo, Twitter, Facebook, Flickr.
**Web service request throttle**

A `SearchEngine.search()` request takes a minimum amount of time to complete, as outlined in the table above. This is intended as etiquette towards the server providing the service. Raise the `throttle` value if you plan to run multiple queries in batch. Wikipedia requests are especially intensive. If you plan to mine a lot of data from Wikipedia, download the Wikipedia database instead.
`SearchEngine.search()` returns a list of `Result` objects. It has an additional `total` property, which is the total number of results available for the given query. Each `Result` is a dictionary with extra properties:
result = Result(url)
result.url # URL of content associated with the given query.
result.title # Content title.
result.text # Content summary.
result.language # Content language.
result.author # For news items and images.
result.date # For news items.
result.download(timeout=10, cached=True, proxy=None)
- `Result.download()` takes the same optional parameters as `URL.download()`.
- The attributes (e.g., `result.text`) are Unicode strings.
For example:
>>> from pattern.web import Bing, SEARCH, plaintext
>>>
>>> engine = Bing(license=None) # Enter your license key.
>>> for i in range(1,5):
>>>     for result in engine.search('holy handgrenade', type=SEARCH, start=i):
>>>         print repr(plaintext(result.text))
>>>     print
u"The Holy Hand Grenade of Antioch is a fictional weapon from ..."
u'Once the number three, being the third number, be reached, then ...'
Since `SearchEngine.search()` takes the same optional parameters as `URL.download()`, it is easy to disable local caching, set a proxy server, a throttle (minimum time) or a timeout (maximum time).
>>> from pattern.web import Google
>>>
>>> engine = Google(license=None) # Enter your license key.
>>> for result in engine.search('tim', cached=False, proxy=('proxy.com', 'https')):
>>>     print result.url
>>>     print result.text
**Image search**
For `Flickr`, `Bing` and `Yahoo`, image URLs retrieved with `search(type=IMAGE)` can be filtered by setting the `size` to `TINY`, `SMALL`, `MEDIUM`, `LARGE` or `None` (any size). Images may be subject to copyright.

For `Flickr`, use `search(copyright=False)` to retrieve results with no copyright restrictions (either public domain or Creative Commons by-sa).
For `Twitter`, each result has a `Result.picture` property with the URL to the user's profile picture.
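A minimal sketch of an image search with `Flickr`; the query is arbitrary and you need to supply your own license key:
>>> from pattern.web import Flickr, IMAGE, MEDIUM
>>>
>>> engine = Flickr(license=None) # Enter your license key.
>>> for result in engine.search('kittens', type=IMAGE, size=MEDIUM, copyright=False, count=5):
>>>     print result.url
Each `Result.url` can then be saved locally with `Result.download()`, as in the image download example earlier.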
`Google.translate()` returns the translated string in the given language. `Google.identify()` returns a `(language code, confidence)`-tuple for a given string.
>>> from pattern.web import Google
>>>
>>> s = "C'est un lapin, lapin de bois. Quoi? Un cadeau."
>>> g = Google()
>>> print g.translate(s, input='fr', output='en', cached=False)
>>> print g.identify(s)
u"It's a rabbit, wood. What? A gift."
(u'fr', 0.76)
Remember to activate the Translate API in the Google API Console. Max. 1,000 characters per request.
The `start` parameter of `Twitter.search()` takes an `int` (= the starting page, cfr. other search engines) or a `tweet.id`. If you create two `Twitter` objects, their result pages for a given query may not correspond, since new tweets become available more quickly than we can query pages. The best way is to pass the last seen tweet id:
>>> from pattern.web import Twitter
>>>
>>> t = Twitter()
>>> i = None
>>> for j in range(3):
>>>     for tweet in t.search('win', start=i, count=10):
>>>         print tweet.text
>>>         print
>>>         i = tweet.id
`Twitter.stream()` returns an endless, live stream of `Result` objects. A `Stream` is a Python list that accumulates each time `Stream.update()` is called:
>>> import time
>>> from pattern.web import Twitter
>>>
>>> s = Twitter().stream('#fail')
>>> for i in range(10):
>>>     time.sleep(1)
>>>     s.update(bytes=1024)
>>>     print s[-1].text if s else ''
To clear the accumulated list, call `Stream.clear()`.
`Twitter.trends()` returns a list of 10 "trending topics":
>>> from pattern.web import Twitter
>>> print Twitter().trends(cached=False)
[u'#neverunderstood', u'Not Top 10', ...]
`Wikipedia.search()` returns a single `WikipediaArticle` for the given (case-sensitive) query, which is the title of an article. `Wikipedia.index()` returns an iterator over all article titles on Wikipedia. The `language` parameter of the `Wikipedia()` constructor defines the language of the returned articles (by default it is `"en"`, which corresponds to en.wikipedia.org).
article = WikipediaArticle(title='', source='', links=[])
article.source # Article HTML source.
article.string # Article plaintext unicode string.
article.title # Article title.
article.sections # Article sections.
article.links # List of titles of linked articles.
article.external # List of external links.
article.categories # List of categories.
article.media # List of linked media (images, sounds, ...)
article.languages # Dictionary of (language, article)-items.
article.language # Article language (i.e., 'en').
article.disambiguation # True if it is a disambiguation page
article.plaintext(**kwargs) # See plaintext() for parameters overview.
article.download(media, **kwargs)
`WikipediaArticle.plaintext()` is similar to `plaintext()`, with special attention for MediaWiki markup. It strips metadata, infoboxes, table of contents, annotations, thumbnails and disambiguation links.
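A minimal sketch that retrieves an article and inspects its links and categories; the query is arbitrary and the output depends on the live article:
>>> from pattern.web import Wikipedia
>>>
>>> article = Wikipedia(language='en').search('Python (programming language)')
>>> print repr(article.title)
>>> print article.links[:5]
>>> print article.categories[:5]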
`WikipediaArticle.sections` is a list of `WikipediaSection` objects. Each section has a title and a number of paragraphs that belong together.
section = WikipediaSection(article, title='', start=0, stop=0, level=1)
section.article # WikipediaArticle parent.
section.parent # WikipediaSection this section is part of.
section.children # WikipediaSections belonging to this section.
section.title # Section title.
section.source # Section HTML source.
section.string # Section plaintext unicode string.
section.content # Section string minus title.
section.level # Section nested depth (from 0).
section.links # List of titles of linked articles.
section.tables # List of WikipediaTable objects.
The following example downloads a Wikipedia article and prints the title of each section, indented according to the section level:
>>> from pattern.web import Wikipedia
>>>
>>> article = Wikipedia().search('cat')
>>> for section in article.sections:
>>>     print repr(' ' * section.level + section.title)
u'Cat'
u' Nomenclature and etymology'
u' Taxonomy and evolution'
u' Genetics'
u' Anatomy'
u' Behavior'
u' Sociability'
u' Grooming'
u' Fighting'
...
`WikipediaSection.tables` is a list of `WikipediaTable` objects. Each table has a title, headers and rows.
table = WikipediaTable(section, title='', headers=[], rows=[], source='')
table.section # WikipediaSection parent.
table.source # Table HTML source.
table.title # Table title.
table.headers # List of table column headers.
table.rows # List of table rows, each a list of column values.
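A minimal sketch that prints the column headers of each table in an article (reusing the 'cat' article from the example above; output depends on the live page):
>>> from pattern.web import Wikipedia
>>>
>>> article = Wikipedia().search('cat')
>>> for section in article.sections:
>>>     for table in section.tables:
>>>         print repr(section.title), table.headers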
Wikia is a free hosting service for thousands of wikis. `Wikipedia`, `Wiktionary` and `Wikia` all inherit the `MediaWiki` base class, so `Wikia` has the same methods and properties as `Wikipedia`. Its constructor takes the name of a domain on Wikia. Note the use of `Wikia.index()`, which returns an iterator over all available article titles:
>>> from pattern.web import Wikia
>>>
>>> w = Wikia(domain='montypython')
>>> for i, title in enumerate(w.index(start='a', throttle=1.0, cached=True)):
>>>     if i >= 3:
>>>         break
>>>     article = w.search(title)
>>>     print repr(article.title)
u'Albatross'
u'Always Look on the Bright Side of Life'
u'And Now for Something Completely Different'
DBPedia is a database of structured information mined from Wikipedia and stored as (subject, predicate, object)-triples (e.g., cat is-a animal). DBPedia can be queried with SPARQL, where subject, predicate and/or object can be given as `?variables`. The `Result` objects in the list returned from `DBPedia.search()` have the variables as additional properties:
>>> from pattern.web import DBPedia
>>>
>>> sparql = '\n'.join((
>>> 'prefix dbo: <http://dbpedia.org/ontology/>',
>>> 'select ?person ?place where {',
>>> ' ?person a dbo:President.',
>>> ' ?person dbo:birthPlace ?place.',
>>> '}'
>>> ))
>>> for r in DBPedia().search(sparql, start=1, count=10):
>>>     print '%s (%s)' % (r.person.name, r.place.name)
Álvaro Arzú (Guatemala City)
Árpád Göncz (Budapest)
...
`Facebook.search(query, type=SEARCH)` returns a list of `Result` objects, where each result is a (publicly available) post that contains (or whose comments contain) the given query.

`Facebook.search(id, type=NEWS)` returns posts from a given user profile. You need to supply a personal license key. You can get a key when you authorize Pattern to search Facebook in your name.

`Facebook.search(id, type=COMMENTS)` retrieves comments for a given post's `Result.id`. You can also pass the id of a post or a comment to `Facebook.search(id, type=LIKES)` to retrieve users that liked it.
>>> from pattern.web import Facebook, NEWS, COMMENTS, LIKES
>>>
>>> fb = Facebook(license='your key')
>>> me = fb.profile(id=None) # user info dict
>>>
>>> for post in fb.search(me['id'], type=NEWS, count=100):
>>>     print repr(post.id)
>>>     print repr(post.text)
>>>     print repr(post.url)
>>>     if post.comments > 0:
>>>         print '%i comments' % post.comments
>>>         print [(r.text, r.author) for r in fb.search(post.id, type=COMMENTS)]
>>>     if post.likes > 0:
>>>         print '%i likes' % post.likes
>>>         print [r.author for r in fb.search(post.id, type=LIKES)]
u'530415277_10151455896030278'
u'Tom De Smedt likes CLiPS Research Center'
u'http://www.facebook.com/CLiPS.UA'
1 likes
[(u'485942414773810', u'CLiPS Research Center')]
....
The maximum `count` for `COMMENTS` and `LIKES` is 1,000 (by default, 10).
The `Newsfeed` object is a wrapper for Mark Pilgrim's Universal Feed Parser. `Newsfeed.search()` takes the URL of an RSS or Atom news feed and returns a list of `Result` objects.
>>> from pattern.web import Newsfeed
>>>
>>> NATURE = 'http://www.nature.com/nature/current_issue/rss/index.html'
>>> for result in Newsfeed().search(NATURE)[:5]:
>>>     print repr(result.title)
u'Biopiracy rules should not block biological control'
u'Animal behaviour: Same-shaped shoals'
u'Genetics: Fast disease factor'
u'Biomimetics: Material monitors mugginess'
u'Cell biology: Lung lipid hurts breathing'
`Newsfeed.search()` has an optional parameter `tags`, which is a list of custom tags to parse:
>>> for result in Newsfeed().search(NATURE, tags=['dc:identifier']):
>>>     print result.dc_identifier
The return value of `SearchEngine.search()` has a `total` property which can be used to sort queries by "crowdvoting". The `sort()` function sorts a given list of terms according to their total result count, and returns a list of `(percentage, term)`-tuples.
sort(
terms = [], # List of search terms.
context = '', # Term used for sorting.
service = GOOGLE, # GOOGLE | BING | YAHOO | FLICKR
license = None, # Service license key.
strict = True, # Wrap query in quotes?
prefix = False, # context + term or term + context?
cached = True)
When a `context` is defined, the function sorts by relevance to the context, e.g., `sort(["black", "white"], context="Darth Vader")` yields black as the best candidate, because "black Darth Vader" is more common in search results.
Now let's see who is more dangerous:
>>> from pattern.web import sort
>>>
>>> results = sort(terms=[
>>> 'arnold schwarzenegger',
>>> 'chuck norris',
>>> 'dolph lundgren',
>>> 'steven seagal',
>>> 'sylvester stallone',
>>> 'mickey mouse'], context='dangerous', prefix=True)
>>>
>>> for weight, term in results:
>>> print "%.2f" % (weight * 100) + '%', term
84.34% 'dangerous mickey mouse'
9.24% 'dangerous chuck norris'
2.41% 'dangerous sylvester stallone'
2.01% 'dangerous arnold schwarzenegger'
1.61% 'dangerous steven seagal'
0.40% 'dangerous dolph lundgren'
The HTML source code of a web page can be retrieved with `URL.download()`. HTML is a markup language that uses tags to define text formatting. For example, `<b>hello</b>` displays hello in bold. For many tasks we may want to strip the formatting so we can analyze (e.g., parse or count) the plain text.
The `plaintext()` function removes HTML formatting from a string.
plaintext(html, keep=[], replace=blocks, linebreaks=2, indentation=False)
It performs the following steps to clean up the given string:
- Strip javascript: remove all `<script>` elements.
- Strip CSS: remove all `<style>` elements.
- Strip comments: remove all `<!-- -->` elements.
- Strip forms: remove all `<form>` elements.
- Strip tags: remove all HTML tags.
- Decode entities: replace `&lt;` with `<` (for example).
- Collapse spaces: replace consecutive spaces with a single space.
- Collapse linebreaks: replace consecutive linebreaks with a single linebreak.
- Collapse tabs: replace consecutive tabs with a single space; optionally, indentation (i.e., tabs at the start of a line) can be preserved.
**plaintext parameters**
The `keep` parameter is a list of tags to retain. By default, attributes are stripped, e.g., `<table border="0">` becomes `<table>`. To preserve specific attributes, a dictionary can be given: `{"a": ["href"]}`.

The `replace` parameter defines how HTML elements are replaced with other characters to improve plain text layout. It is a dictionary of `tag → (before, after)` items. By default, it replaces block elements (i.e., `<h1>`, `<h2>`, `<p>`, `<div>`, `<table>`, ...) with two linebreaks, `<th>` and `<tr>` with one linebreak, `<td>` with one tab, and `<li>` with an asterisk (`*`) before and a linebreak after.

The `linebreaks` parameter defines the maximum number of consecutive linebreaks to retain.

The `indentation` parameter defines whether or not to retain tab indentation.
The following example downloads a HTML document and keeps a minimal amount of formatting (headings, bold, links).
>>> from pattern.web import URL, plaintext
>>>
>>> s = URL('http://www.clips.ua.ac.be').download()
>>> s = plaintext(s, keep={'h1':[], 'h2':[], 'strong':[], 'a':['href']})
>>> print s
**plaintext = strip + decode + collapse**
The different steps in `plaintext()` are available as separate functions:
decode_utf8(string) # Byte string to Unicode string.
encode_utf8(string) # Unicode string to byte string.
strip_tags(html, keep=[], replace=blocks) # Non-trivial, using SGML parser.
strip_between(a, b, string) # Remove anything between (and including) a and b.
strip_javascript(html) # Strips between '<script*>' and '</script>'.
strip_inline_css(html) # Strips between '<style*>' and '</style>'.
strip_comments(html) # Strips between '<!--' and '-->'.
strip_forms(html) # Strips between '<form*>' and '</form>'.
decode_entities(string) # '&lt;' => '<'
encode_entities(string) # '<' => '&lt;'
decode_url(string) # 'and%2For' => 'and/or'
encode_url(string) # 'and/or' => 'and%2For'
collapse_spaces(string, indentation=False, replace=' ')
collapse_tabs(string, indentation=False, replace=' ')
collapse_linebreaks(string, threshold=1)
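A minimal sketch exercising a few of these helpers on made-up strings:
>>> from pattern.web import strip_between, decode_entities, collapse_spaces
>>>
>>> print strip_between('<!--', '-->', 'hello <!-- a comment --> world')  # hello  world
>>> print decode_entities('5 &lt; 10')                                    # 5 < 10
>>> print collapse_spaces('too   many    spaces')                         # too many spaces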
The Document Object Model (DOM) is a language-independent convention for
representing HTML, XHTML and XML documents. The pattern.web module
includes a HTML DOM parser (based on Leonard Richardson's
BeautifulSoup) that can
be used to traverse a HTML document as a tree of linked Python objects.
This is useful to extract specific portions from a HTML string retrieved
with `URL.download()`.
The DOM consists of a `DOM` object that contains `Text`, `Comment` and `Element` objects. All of these are subclasses of `Node`.
node = Node(html, type=NODE)
node.type # NODE | TEXT | COMMENT | ELEMENT | DOCUMENT
node.source # HTML source.
node.parent # Parent node.
node.children # List of child nodes.
node.next # Next child in node.parent (or None).
node.previous # Previous child in node.parent (or None).
node.traverse(visit=lambda node: None)
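A minimal sketch of `Node.traverse()`, assuming the node-type constants (e.g., `ELEMENT`) can be imported from pattern.web; it prints the tag of each element in a small made-up fragment:
>>> from pattern.web import DOM, ELEMENT
>>>
>>> def visit(node):
>>>     if node.type == ELEMENT:
>>>         print node.tag
>>>
>>> DOM('<div><p>Hello <b>world</b></p></div>').traverse(visit)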
`Text`, `Comment` and `Element` are subclasses of `Node`. For example, `'the <b>cat</b>'` is parsed to `Text('the')` + `Element('cat', tag='b')`. The `Element` object has a number of additional properties:
element = Element(html)
element.tag # Tag name.
element.attrs # Dictionary of attributes, e.g. {'class':'comment'}.
element.id # Value for id attribute (or None).
element.source # HTML source.
element.content # HTML source minus open and close tag.
element.by_id(str) # First nested Element with given id.
element.by_tag(str) # List of nested Elements with given tag name.
element.by_class(str) # List of nested Elements with given class.
element.by_attr(**kwargs) # List of nested Elements with given attribute.
element(selector) # List of nested Elements matching a CSS selector.
- `Element.by_tag()` can include a class (e.g., `"div.header"`) or an id (e.g., `"div#content"`). A wildcard can be used to match any tag (e.g., `"*.even"`). The element is searched recursively (children in children, etc.).
- `Element.by_attr()` takes one or more keyword arguments (e.g., `name="keywords"`).
- `Element(selector)` returns a list of nested elements that match the given CSS selector.
Overview of CSS selectors:
| *CSS Selector* | *Description* |
| --- | --- |
| `element('*')` | all nested elements |
| `element('*#x')` | all nested elements with `id="x"` |
| `element('div#x')` | all nested `<div>` elements with `id="x"` |
| `element('div.x')` | all nested `<div>` elements with `class="x"` |
| `element('div[class="x"]')` | all nested `<div>` elements with attribute `"class"` = `"x"` |
| `element('div:first-child')` | the first child in a `<div>` |
| `element('div a')` | all nested `<a>`'s inside a nested `<div>` |
| `element('div, a')` | all nested `<a>`'s and `<div>` elements |
| `element('div + a')` | all nested `<a>`'s directly preceded by a `<div>` |
| `element('div > a')` | all nested `<a>`'s directly inside a nested `<div>` |
| `element('div < a')` | all nested `<div>`'s directly containing an `<a>` |
>>> from pattern.web import Element
>>>
>>> div = Element('<div> <a>1st</a> <a>2nd</a> </div>')
>>> print div('a:first-child')
>>> print div('a:first-child')[0].source
[Element(tag='a')]
<a>1st</a>
The `DOM` object is the top-level element in the Document Object Model:
dom = DOM(html)
dom.declaration # <!doctype> TEXT Node.
dom.head # <head> Element.
dom.body # <body> Element.
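For instance, a minimal sketch that grabs the page title from the `<head>` element (URL as used elsewhere on this page):
>>> from pattern.web import URL, DOM, plaintext
>>>
>>> dom = DOM(URL('http://www.clips.ua.ac.be').download())
>>> print plaintext(dom.head.by_tag('title')[0].content)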
The following example retrieves the most recent reddit entries. The pattern.web module does not include a reddit search engine, but we can parse entries directly from the HTML source. This is called screen scraping, and many websites will strongly dislike it.
>>> from pattern.web import URL, DOM, plaintext
>>>
>>> url = URL('http://www.reddit.com/top/')
>>> dom = DOM(url.download(cached=True))
>>> for e in dom('div.entry')[:3]: # Top 3 reddit entries.
>>>     for a in e('a.title')[:1]: # First <a class="title">.
>>>         print repr(plaintext(a.content))
u'Invisible Kitty'
u'Naturally, he said yes.'
u"I'd just like to remind everyone that /r/minecraft exists and not everyone wants"
"to have 10 Minecraft posts a day on their front page."
**Absolute URLs**
Links parsed from the `DOM` can be relative (e.g., starting with `"../"` instead of `"http://"`). To get the absolute URL, you can use the `abs()` function in combination with `URL.redirect`:
>>> from pattern.web import URL, DOM, abs
>>>
>>> url = URL('http://www.clips.ua.ac.be')
>>> dom = DOM(url.download())
>>> for link in dom('a'):
>>>     print abs(link.attributes.get('href',''), base=url.redirect or url.string)
Portable Document Format (PDF) is a popular open standard, where text, fonts, images and layout are contained in a single document that displays the same across systems. However, extracting the source text from a PDF can be difficult.
The `PDF` object (based on PDFMiner) parses the source text from a PDF file.
>>> from pattern.web import URL, PDF
>>>
>>> url = URL('http://www.clips.ua.ac.be/sites/default/files/ctrs-002_0.pdf')
>>> pdf = PDF(url.download())
>>> print pdf.string
CLiPS Technical Report series 002 September 7, 2010
Tom De Smedt, Vincent Van Asch, Walter Daelemans
Computational Linguistics & Psycholinguistics Research Center
...
URLs linking to a PDF document can be identified with `URL.mimetype in MIMETYPE_PDF`.
A web crawler or web spider can be used to traverse the web automatically. The `Crawler` object takes a list of URLs. These are then visited by the crawler. If they lead to a web page, the HTML content is parsed for new links. These are added to the list of links scheduled for a visit.
The given `domains` is a list of allowed domain names. An empty list means the crawler can visit the entire web. The given `delay` defines the number of seconds to wait before revisiting the same (sub)domain – continually hammering one server with a robot disrupts requests from the website's regular visitors (this is called a denial-of-service attack).
crawler = Crawler(links=[], domains=[], delay=20.0, sort=FIFO)
crawler.domains # Domains allowed to visit (e.g., ['clips.ua.ac.be']).
crawler.delay # Delay between visits to the same (sub)domain.
crawler.history # Dictionary of (domain, time last visited)-items.
crawler.visited # Dictionary of URLs visited.
crawler.sort # FIFO | LIFO (how new links are queued).
crawler.done # True when all links have been visited.
crawler.push(link, priority=1.0, sort=LIFO)
crawler.pop(remove=True)
crawler.next # Yields the next scheduled link = Crawler.pop(False)
crawler.crawl(method=DEPTH) # DEPTH | BREADTH | None.
crawler.priority(link, method=DEPTH)
crawler.follow(link)
crawler.visit(link, source=None)
crawler.fail(link)
- `Crawler.crawl()` is meant to be called continuously in a loop. It selects a link to visit and parses the HTML content for new links. The `method` parameter defines whether the crawler prefers internal links (`DEPTH`) or external links to other domains (`BREADTH`). If the link leads to a recently visited domain (i.e., elapsed time < `Crawler.delay`) it is temporarily skipped. To disable this behaviour, use an optional `throttle` parameter >= `Crawler.delay`.
- `Crawler.priority()` is called from `Crawler.crawl()` to determine the priority (`0.0`-`1.0`) of a new `Link`, where links with the highest priority are visited first. It can be overridden in a subclass.
- `Crawler.follow()` is called from `Crawler.crawl()` to determine if it should schedule the given `Link` for a visit. By default it yields `True`. It can be overridden to disallow selected links.
- `Crawler.visit()` is called from `Crawler.crawl()` when a `Link` is visited. The given `source` is a HTML string with the page content. By default, this method does nothing (it should be overridden).
- `Crawler.fail()` is called from `Crawler.crawl()` for links whose MIME-type could not be determined, or which raise a `URLError` while downloading.
The crawler uses `Link` objects internally, which contain additional information besides the URL string:
link = Link(url, text='', relation='')
link.url # Parsed from <a href=''> attribute.
link.text # Parsed from <a title=''> attribute.
link.relation # Parsed from <a rel=''> attribute.
link.referrer # Parent web page URL.
The following example shows a subclass of `Crawler` that prints each link it visits. Since it uses `DEPTH` for crawling, it will prefer internal links.
>>> from pattern.web import Crawler
>>>
>>> class Polly(Crawler):
>>>     def visit(self, link, source=None):
>>>         print 'visited:', repr(link.url), 'from:', link.referrer
>>>     def fail(self, link):
>>>         print 'failed:', repr(link.url)
>>>
>>> p = Polly(links=['http://www.clips.ua.ac.be/'], delay=3)
>>> while not p.done:
>>>     p.crawl(method=DEPTH, cached=False, throttle=3)
visited: u'http://www.clips.ua.ac.be/'
visited: u'http://www.clips.ua.ac.be/#navigation'
visited: u'http://www.clips.ua.ac.be/colloquia'
visited: u'http://www.clips.ua.ac.be/computational-linguistics'
visited: u'http://www.clips.ua.ac.be/contact'
Note: `Crawler.crawl()` takes the same parameters as `URL.download()`, e.g., `cached=False` or `throttle=10`.
The `crawl()` function returns an iterator that yields `(Link, source)`-tuples. When it is idle (e.g., waiting for the `delay` on a domain) it yields `(None, None)`.
crawl(
links = [],
domains = [],
delay = 20.0,
sort = FIFO,
method = DEPTH, **kwargs)
>>> from pattern.web import crawl
>>>
>>> for link, source in crawl('http://www.clips.ua.ac.be/', delay=3, throttle=3):
>>>     print link
Link(url=u'http://www.clips.ua.ac.be/')
Link(url=u'http://www.clips.ua.ac.be/#navigation')
Link(url=u'http://www.clips.ua.ac.be/computational-linguistics')
...
The `Mail` object can be used to retrieve e-mail messages from Gmail, provided that IMAP is enabled. It may also work with other services, by passing the server address to the `service` parameter (e.g., `service="imap.gmail.com"`). With `secure=False` (no SSL) the default port is 143.
mail = Mail(username, password, service=GMAIL, port=993, secure=True)
mail.folders # Dictionary of (name, MailFolder)-items.
mail.[folder] # E.g., Mail.inbox.read(id)
mail.[folder].count # Number of messages in folder.
mail.[folder].search(query, field=FROM) # FROM | SUBJECT | DATE
mail.[folder].read(id, attachments=False, cached=True)
- `Mail.folders` is a `name` → `MailFolder` dictionary. Common names include `inbox`, `spam` and `trash`.
- `MailFolder.search()` returns a list of e-mail id's, most recent first.
- `MailFolder.read()` retrieves the e-mail with the given id as a `Message`.
A `Message` has the following properties:
message = Mail.[folder].read(i)
message.author # Unicode string, sender name + e-mail address.
message.email_address # Unicode string, sender e-mail address.
message.date # Unicode string, date received.
message.subject # Unicode string, message subject.
message.body # Unicode string, message body.
message.attachments # List of (MIME-type, str)-tuples.
The following example retrieves spam e-mails containing the word "wish":
>>> from pattern.web import Mail, GMAIL, SUBJECT
>>>
>>> gmail = Mail(username='...', password='...', service=GMAIL)
>>> print gmail.folders.keys()
['drafts', 'spam', 'personal', 'work', 'inbox', 'mail', 'starred', 'trash']
>>> i = gmail.spam.search('wish', field=SUBJECT)[0] # What riches await...
>>> m = gmail.spam.read(i)
>>> print ' From:', m.author
>>> print 'Subject:', m.subject
>>> print 'Message:'
>>> print m.body
From: u'Vegas VIP Clib <[email protected]>'
Subject: u'Your wish has been granted'
Message: u'No one has claimed our jackpot! This is your chance to try!'
The pattern.web.locale module contains functions for region and language codes, based on the ISO-639 language code (e.g., `en`), the ISO-3166 region code (e.g., `US`) and the IETF BCP 47 language-region specification (`en-US`):
encode_language(name) # 'English' => 'en'
decode_language(code) # 'en' => 'English'
encode_region(name) # 'United States' => 'US'
decode_region(code) # 'US' => 'United States'
languages(region) # 'US' => ['en']
regions(language) # 'en' => ['AU', 'BZ', 'CA', ...]
regionalize(language) # 'en' => ['en-US', 'en-AU', ...]
market(language) # 'en' => 'en-US'
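A minimal sketch of a few lookups (the region lists shown are indicative):
>>> from pattern.web.locale import decode_language, market, regions
>>>
>>> print decode_language('en')  # English
>>> print market('en')           # en-US
>>> print regions('en')[:3]      # e.g., ['AU', 'BZ', 'CA']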
The `geocode()` function recognizes a number of world capital cities and returns a `(latitude, longitude, ISO-639, region)` tuple.
geocode(location) # 'Brussels' => (50.83, 4.33, u'nl', u'Belgium')
This is useful in combination with the `geo` parameter for `Twitter.search()` to obtain regional tweets:
>>> from pattern.web import Twitter
>>> from pattern.web.locale import geocode
>>>
>>> twitter = Twitter(language='en')
>>> for tweet in twitter.search('restaurant', geo=geocode('Brussels')[:2]):
>>>     print tweet.text
u'Did you know: every McDonalds restaurant has free internet in Belgium...'
By default, `URL.download()` and `SearchEngine.search()` will cache results locally. Once the results of a query have been cached, there is no need to connect to the internet (i.e., the query runs faster). Over time the cache can grow quite large, filling up with whatever was downloaded – from tweets to zip archives.
To empty the cache:
>>> from pattern.web import cache
>>> cache.clear()
- BeautifulSoup (BSD): robust HTML parser for Python.
- Scrapy (BSD): screen scraping and web crawling with Python.