
More specific docs for contexts and tokenizer.

tags/v0.1
Ben Kurtovic, 11 years ago
Commit b2b49ebd80
2 changed files with 48 additions and 2 deletions
1. mwparserfromhell/parser/contexts.py (+22, -2)
2. mwparserfromhell/parser/tokens.py (+26, -0)

mwparserfromhell/parser/contexts.py (+22, -2)

@@ -29,10 +29,30 @@ heading of level two. This is used to determine what tokens are valid at the
current point and also if the current parsing route is invalid.

The tokenizer stores context as an integer, with these definitions bitwise OR'd
-to add them, AND'd to check if they're set, and XOR'd to remove them.
+to add them, AND'd to check if they're set, and XOR'd to remove them. The
+advantage of this is that contexts can have sub-contexts (as FOO == 0b11 will
+cover BAR == 0b10 and BAZ == 0b01).

Local (stack-specific) contexts:

* TEMPLATE
** TEMPLATE_NAME
** TEMPLATE_PARAM_KEY
** TEMPLATE_PARAM_VALUE
* HEADING
** HEADING_LEVEL_1
** HEADING_LEVEL_2
** HEADING_LEVEL_3
** HEADING_LEVEL_4
** HEADING_LEVEL_5
** HEADING_LEVEL_6

Global contexts:

* GL_HEADING
"""

-# Local (stack-specific) contexts:
+# Local contexts:

TEMPLATE = 0b000000111
TEMPLATE_NAME = 0b000000001

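The bitwise scheme this docstring describes can be sketched in plain Python. The `TEMPLATE` and `TEMPLATE_NAME` values are taken from the diff above; the `TEMPLATE_PARAM_KEY` and `TEMPLATE_PARAM_VALUE` values are assumptions that merely follow the stated pattern:

```python
# Context flags, as in contexts.py. Only TEMPLATE and TEMPLATE_NAME appear
# in the diff above; the other two values are assumed from the pattern.
TEMPLATE_NAME        = 0b000000001
TEMPLATE_PARAM_KEY   = 0b000000010  # assumed value
TEMPLATE_PARAM_VALUE = 0b000000100  # assumed value
TEMPLATE             = 0b000000111  # super-context covering the three above

context = 0
context |= TEMPLATE_NAME   # OR'd to add a context
assert context & TEMPLATE  # AND'd to check; the super-context matches too
context ^= TEMPLATE_NAME   # XOR'd to remove it again
assert not context & TEMPLATE
```

This shows the sub-context trick the docstring mentions: testing `context & TEMPLATE` succeeds while any one of the three template sub-contexts is set.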

mwparserfromhell/parser/tokens.py (+26, -0)

@@ -28,6 +28,32 @@ a syntactically valid form by the
:py:class:`~mwparserfromhell.parser.tokenizer.Tokenizer`, and then converted
into the :py:class:`~mwparserfromhell.wikicode.Wikicode` tree by the
:py:class:`~mwparserfromhell.parser.builder.Builder`.

Tokens:

* Text
* *Templates*
** TemplateOpen
** TemplateParamSeparator
** TemplateParamEquals
** TemplateClose
* *HTML entities*
** HTMLEntityStart
** HTMLEntityNumeric
** HTMLEntityHex
** HTMLEntityEnd
* *Headings*
** HeadingStart
** HeadingEnd
* *Tags*
** TagOpenOpen
** TagAttrStart
** TagAttrEquals
** TagAttrQuote
** TagCloseOpen
** TagCloseSelfclose
** TagOpenClose
** TagCloseClose

"""

from __future__ import unicode_literals


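As an illustration of the token groups listed in the docstring above, a template like `{{foo|bar}}` would decompose into a flat stream of template tokens. This is a sketch only: the `(name, text)` tuple shape is an assumption for demonstration, not the library's actual token API.

```python
# Illustrative sketch: a plausible token stream for "{{foo|bar}}", using the
# token names from the docstring above. The (name, text) tuples are an
# assumed representation, not mwparserfromhell's real Token objects.
tokens = [
    ("TemplateOpen", "{{"),
    ("Text", "foo"),
    ("TemplateParamSeparator", "|"),
    ("Text", "bar"),
    ("TemplateClose", "}}"),
]

# Joining the raw text of every token recovers the original wikitext,
# which is what makes the stream a "syntactically valid form".
assert "".join(text for _, text in tokens) == "{{foo|bar}}"
```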