
More specific docs for contexts and tokenizer.

tags/v0.1
Ben Kurtovic 12 years ago
parent
commit
b2b49ebd80
2 changed files with 48 additions and 2 deletions
1. +22 -2 mwparserfromhell/parser/contexts.py
2. +26 -0 mwparserfromhell/parser/tokens.py

+22 -2 mwparserfromhell/parser/contexts.py View file

@@ -29,10 +29,30 @@ heading of level two. This is used to determine what tokens are valid at the
 current point and also if the current parsing route is invalid.
 
 The tokenizer stores context as an integer, with these definitions bitwise OR'd
-to add them, AND'd to check if they're set, and XOR'd to remove them.
+to add them, AND'd to check if they're set, and XOR'd to remove them. The
+advantage of this is that contexts can have sub-contexts (as FOO == 0b11 will
+cover BAR == 0b10 and BAZ == 0b01).
+
+Local (stack-specific) contexts:
+
+* TEMPLATE
+** TEMPLATE_NAME
+** TEMPLATE_PARAM_KEY
+** TEMPLATE_PARAM_VALUE
+* HEADING
+** HEADING_LEVEL_1
+** HEADING_LEVEL_2
+** HEADING_LEVEL_3
+** HEADING_LEVEL_4
+** HEADING_LEVEL_5
+** HEADING_LEVEL_6
+
+Global contexts:
+
+* GL_HEADING
 """
 
-# Local (stack-specific) contexts:
+# Local contexts:
 
 TEMPLATE = 0b000000111
 TEMPLATE_NAME = 0b000000001
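The docstring added above describes how the tokenizer combines context flags with bitwise OR, AND, and XOR, and how a broad context such as TEMPLATE covers its sub-contexts. A minimal sketch of that scheme in Python follows; TEMPLATE and TEMPLATE_NAME use the values shown in the hunk above, while the values for TEMPLATE_PARAM_KEY and TEMPLATE_PARAM_VALUE are assumed here for illustration only.

    # TEMPLATE and TEMPLATE_NAME are taken from the diff above; the other two
    # flag values are assumptions made for this sketch.
    TEMPLATE = 0b000000111
    TEMPLATE_NAME = 0b000000001
    TEMPLATE_PARAM_KEY = 0b000000010    # assumed value
    TEMPLATE_PARAM_VALUE = 0b000000100  # assumed value

    context = 0
    context |= TEMPLATE_NAME         # OR adds a context
    assert context & TEMPLATE_NAME   # AND checks whether it is set
    assert context & TEMPLATE        # the TEMPLATE super-context covers TEMPLATE_NAME
    context ^= TEMPLATE_NAME         # XOR removes it again
    assert not context & TEMPLATE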


+26 -0 mwparserfromhell/parser/tokens.py View file

@@ -28,6 +28,32 @@ a syntactically valid form by the
 :py:class:`~mwparserfromhell.parser.tokenizer.Tokenizer`, and then converted
 into the :py:class`~mwparserfromhell.wikicode.Wikicode` tree by the
 :py:class:`~mwparserfromhell.parser.builder.Builder`.
+
+Tokens:
+
+* Text = make("Text")
+* *Templates*
+** TemplateOpen
+** TemplateParamSeparator
+** TemplateParamEquals
+** TemplateClose
+** HTMLEntityStart
+** HTMLEntityNumeric
+** HTMLEntityHex
+** HTMLEntityEnd
+* *Headings*
+** HeadingStart
+** HeadingEnd
+* *Tags*
+** TagOpenOpen
+** TagAttrStart
+** TagAttrEquals
+** TagAttrQuote
+** TagCloseOpen
+** TagCloseSelfclose
+** TagOpenClose
+** TagCloseClose
+
 """
 
 from __future__ import unicode_literals
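The token classes listed in the docstring above are the flat intermediate data emitted by the Tokenizer and reassembled into a Wikicode tree by the Builder. As a hypothetical illustration (not taken from the project's own docs or tests), and assuming each token class accepts its attributes as keyword arguments, the wikitext "{{foo|bar}}" would tokenize to a stream along these lines:

    from mwparserfromhell.parser import tokens

    # Hypothetical token stream for the wikitext "{{foo|bar}}"; the exact
    # attributes each token carries are an assumption made for illustration.
    stream = [
        tokens.TemplateOpen(),            # "{{"
        tokens.Text(text="foo"),          # template name
        tokens.TemplateParamSeparator(),  # "|"
        tokens.Text(text="bar"),          # unnamed parameter value
        tokens.TemplateClose(),           # "}}"
    ]

The Builder then walks such a flat list and folds it back into the nested Wikicode tree.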

