mwparserfromhell
================

.. image:: https://img.shields.io/travis/earwig/mwparserfromhell/develop.svg
   :alt: Build Status
   :target: http://travis-ci.org/earwig/mwparserfromhell

.. image:: https://img.shields.io/coveralls/earwig/mwparserfromhell/develop.svg
   :alt: Coverage Status
   :target: https://coveralls.io/r/earwig/mwparserfromhell

**mwparserfromhell** (the *MediaWiki Parser from Hell*) is a Python package
that provides an easy-to-use and outrageously powerful parser for MediaWiki_
wikicode. It supports Python 2 and Python 3.

Developed by Earwig_ with contributions from `Σ`_, Legoktm_, and others.
Full documentation is available on ReadTheDocs_. Development occurs on GitHub_.

Installation
------------

The easiest way to install the parser is through the `Python Package Index`_;
you can install the latest release with ``pip install mwparserfromhell``
(`get pip`_). Make sure your pip is up-to-date first, especially on Windows.

Alternatively, get the latest development version::

    git clone https://github.com/earwig/mwparserfromhell.git
    cd mwparserfromhell
    python setup.py install

You can run the comprehensive unit testing suite with
``python setup.py test -q``.
Usage
-----

Normal usage is rather straightforward (where ``text`` is page text)::

    >>> import mwparserfromhell
    >>> wikicode = mwparserfromhell.parse(text)

``wikicode`` is a ``mwparserfromhell.Wikicode`` object, which acts like an
ordinary ``str`` object (or ``unicode`` in Python 2) with some extra methods.
For example::

    >>> text = "I has a template! {{foo|bar|baz|eggs=spam}} See it?"
    >>> wikicode = mwparserfromhell.parse(text)
    >>> print(wikicode)
    I has a template! {{foo|bar|baz|eggs=spam}} See it?
    >>> templates = wikicode.filter_templates()
    >>> print(templates)
    ['{{foo|bar|baz|eggs=spam}}']
    >>> template = templates[0]
    >>> print(template.name)
    foo
    >>> print(template.params)
    ['bar', 'baz', 'eggs=spam']
    >>> print(template.get(1).value)
    bar
    >>> print(template.get("eggs").value)
    spam

Since nodes can contain other nodes, getting nested templates is trivial::

    >>> text = "{{foo|{{bar}}={{baz|{{spam}}}}}}"
    >>> mwparserfromhell.parse(text).filter_templates()
    ['{{foo|{{bar}}={{baz|{{spam}}}}}}', '{{bar}}', '{{baz|{{spam}}}}', '{{spam}}']

You can also pass ``recursive=False`` to ``filter_templates()`` and explore
templates manually. This is possible because nodes can contain additional
``Wikicode`` objects::

    >>> code = mwparserfromhell.parse("{{foo|this {{includes a|template}}}}")
    >>> print(code.filter_templates(recursive=False))
    ['{{foo|this {{includes a|template}}}}']
    >>> foo = code.filter_templates(recursive=False)[0]
    >>> print(foo.get(1).value)
    this {{includes a|template}}
    >>> print(foo.get(1).value.filter_templates()[0])
    {{includes a|template}}
    >>> print(foo.get(1).value.filter_templates()[0].get(1).value)
    template
Templates can be easily modified to add, remove, or alter params. ``Wikicode``
objects can be treated like lists, with ``append()``, ``insert()``,
``remove()``, ``replace()``, and more. They also have a ``matches()`` method
for comparing page or template names, which takes care of capitalization and
whitespace::

    >>> text = "{{cleanup}} '''Foo''' is a [[bar]]. {{uncategorized}}"
    >>> code = mwparserfromhell.parse(text)
    >>> for template in code.filter_templates():
    ...     if template.name.matches("Cleanup") and not template.has("date"):
    ...         template.add("date", "July 2012")
    ...
    >>> print(code)
    {{cleanup|date=July 2012}} '''Foo''' is a [[bar]]. {{uncategorized}}
    >>> code.replace("{{uncategorized}}", "{{bar-stub}}")
    >>> print(code)
    {{cleanup|date=July 2012}} '''Foo''' is a [[bar]]. {{bar-stub}}
    >>> print(code.filter_templates())
    ['{{cleanup|date=July 2012}}', '{{bar-stub}}']

You can then convert ``code`` back into a regular ``str`` object (for
saving the page!) by calling ``str()`` on it::

    >>> text = str(code)
    >>> print(text)
    {{cleanup|date=July 2012}} '''Foo''' is a [[bar]]. {{bar-stub}}
    >>> text == code
    True

Likewise, use ``unicode(code)`` in Python 2.
Limitations
-----------

While the MediaWiki parser generates HTML, mwparserfromhell acts as an
interface to the source code. It is therefore unaware of template definitions:
if it substituted templates with their output, you would no longer be working
with the source code. This has several implications:

* Start and end tags generated by templates aren't recognized,
  e.g. ``<b>foobar{{bold-end}}``.
* Templates adjacent to external links, e.g. ``http://example.com{{foo}}``,
  are considered part of the link.
* Crossed constructs like ``{{echo|''Hello}}, world!''`` are not supported;
  the first node is treated as plain text.

The current workaround for cases where you are not interested in text
formatting is to pass ``skip_style_tags=True`` to ``mwparserfromhell.parse()``.
This treats ``''`` and ``'''`` as plain text.
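For instance, here is a minimal sketch of this workaround applied to the
crossed construct above (the exact output may vary slightly between
releases)::

    >>> import mwparserfromhell
    >>> text = "{{echo|''Hello}}, world!''"
    >>> code = mwparserfromhell.parse(text, skip_style_tags=True)
    >>> print(code.filter_templates())
    ["{{echo|''Hello}}"]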
A future version of mwparserfromhell will include multiple parsing modes to get
around this restriction.
Configuration unawareness
-------------------------

* `word-ending links`_ are not supported, since the linktrail rules are
  language-specific.
* Localized namespace names aren't recognized, e.g. ``[[File:...]]`` links
  are treated as regular wikilinks.
* Anything that looks like an XML tag is parsed as a tag, since the available
  tags are extension-dependent (see the sketch below).
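As a rough illustration of the last point, a made-up tag name such as
``mytag`` still yields a tag node (output may differ slightly between
releases)::

    >>> import mwparserfromhell
    >>> code = mwparserfromhell.parse("foo <mytag>bar</mytag> baz")
    >>> tag = code.filter_tags()[0]
    >>> print(tag.tag)
    mytag
    >>> print(tag.contents)
    bar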
Integration
-----------

``mwparserfromhell`` is used by and originally developed for EarwigBot_;
``Page`` objects have a ``parse`` method that essentially calls
``mwparserfromhell.parse()`` on ``page.get()``.

If you're using Pywikibot_, your code might look like this::

    import mwparserfromhell
    import pywikibot

    def parse(title):
        site = pywikibot.Site()
        page = pywikibot.Page(site, title)
        text = page.get()
        return mwparserfromhell.parse(text)
If you're not using a library, you can parse any page using the following
Python 3 code (via the API_)::

    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    import mwparserfromhell

    API_URL = "https://en.wikipedia.org/w/api.php"

    def parse(title):
        data = {"action": "query", "prop": "revisions", "rvlimit": 1,
                "rvprop": "content", "format": "json", "titles": title}
        raw = urlopen(API_URL, urlencode(data).encode()).read()
        res = json.loads(raw)
        text = list(res["query"]["pages"].values())[0]["revisions"][0]["*"]
        return mwparserfromhell.parse(text)
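You can then call ``parse()`` with a page title, e.g. ``wikicode =
parse("Test")``, and work with the resulting ``Wikicode`` object exactly as in
the examples above.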
.. _MediaWiki: http://mediawiki.org
.. _ReadTheDocs: http://mwparserfromhell.readthedocs.org
.. _Earwig: http://en.wikipedia.org/wiki/User:The_Earwig
.. _Σ: http://en.wikipedia.org/wiki/User:%CE%A3
.. _Legoktm: http://en.wikipedia.org/wiki/User:Legoktm
.. _GitHub: https://github.com/earwig/mwparserfromhell
.. _Python Package Index: http://pypi.python.org
.. _get pip: http://pypi.python.org/pypi/pip
.. _word-ending links: https://www.mediawiki.org/wiki/Help:Links#linktrail
.. _EarwigBot: https://github.com/earwig/earwigbot
.. _Pywikibot: https://www.mediawiki.org/wiki/Manual:Pywikibot
.. _API: http://mediawiki.org/wiki/API