path: root/compiler/ast.go
Each entry lists the commit message, author, date, files changed, and lines removed/added.
* Use new parser and DFA compiler (Ryo Nihei, 2021-12-10; 1 file changed, -469/+0)
* Change APIs (Ryo Nihei, 2021-08-01; 1 file changed, -2/+4)
  Change fields of tokens, results of lexical analysis, as follows:

  - Rename: mode -> mode_id
  - Rename: kind_id -> mode_kind_id
  - Add: kind_id

  The kind ID is unique across all modes, but the mode kind ID is unique only within a mode.

  Change fields of a transition table as follows:

  - Rename: initial_mode -> initial_mode_id
  - Rename: modes -> mode_names
  - Rename: kinds -> kind_names
  - Rename: specs[].kinds -> specs[].kind_names
  - Rename: specs[].dfa.initial_state -> specs[].dfa.initial_state_id

  Change public types defined in the spec package as follows:

  - Rename: LexModeNum -> LexModeID
  - Rename: LexKind -> LexKindName
  - Add: LexKindID
  - Add: StateID
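To make the two IDs concrete, here is a minimal, hypothetical sketch of the token fields after the rename; the struct and package name below are illustrative assumptions, not maleeni's actual types:

```go
// Package spec is a stand-in name for illustration only.
package spec

// Token sketches the renamed fields described above; not maleeni's actual type.
type Token struct {
	ModeID     int    // formerly "mode"
	ModeKindID int    // formerly "kind_id"; unique only within its mode
	KindID     int    // newly added; unique across all modes
	Lexeme     []byte // the matched text
}
```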
* Add fragment expression (Ryo Nihei, 2021-05-25; 1 file changed, -0/+38)
  A fragment entry is defined by an entry whose `fragment` field is `true`, and is referenced by a fragment expression (`\f{...}`).
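As a rough sketch of the mechanism, a fragment entry defines a named sub-pattern that other patterns splice in via `\f{name}`. The entry type and the expansion step below are illustrative assumptions, not maleeni's actual spec format or code:

```go
package main

import (
	"fmt"
	"strings"
)

// LexEntry is a hypothetical spec entry: when Fragment is true, the entry only
// defines a named sub-pattern and produces no token kind of its own.
type LexEntry struct {
	Kind     string
	Pattern  string
	Fragment bool
}

// expandFragments inlines every \f{name} reference into the referencing
// patterns. A real implementation would resolve references recursively and
// report unknown fragment names.
func expandFragments(entries []LexEntry) []LexEntry {
	frags := map[string]string{}
	for _, e := range entries {
		if e.Fragment {
			frags[e.Kind] = e.Pattern
		}
	}
	var out []LexEntry
	for _, e := range entries {
		if e.Fragment {
			continue
		}
		for name, pat := range frags {
			e.Pattern = strings.ReplaceAll(e.Pattern, `\f{`+name+`}`, "("+pat+")")
		}
		out = append(out, e)
	}
	return out
}

func main() {
	entries := []LexEntry{
		{Kind: "digit", Pattern: `[0-9]`, Fragment: true},
		{Kind: "int", Pattern: `\f{digit}+`},
	}
	fmt.Println(expandFragments(entries)) // [{int ([0-9])+ false}]
}
```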
* Improve performance of the symbolPositionSet (Ryo Nihei, 2021-05-04; 1 file changed, -29/+37)
  When using a map to represent a set, performance degrades due to the increased number of calls to runtime.mapassign. Especially when the number of symbols is large, as in compiling a pattern that contains character properties like \p{Letter}, adding elements to the set alone may take several tens of seconds of CPU time. Therefore, this commit solves this problem by changing the representation of the set from a map to an array.
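A minimal sketch of the array-based idea, with assumed names (this is not maleeni's actual code): keeping the positions in a sorted slice avoids per-insert map machinery such as runtime.mapassign.

```go
package main

import (
	"fmt"
	"sort"
)

type symbolPosition uint16

// symbolPositionSet keeps its elements in a sorted, deduplicated slice
// instead of a map.
type symbolPositionSet struct {
	positions []symbolPosition
}

// add inserts p while keeping the slice sorted; it reports whether p was new.
func (s *symbolPositionSet) add(p symbolPosition) bool {
	i := sort.Search(len(s.positions), func(i int) bool { return s.positions[i] >= p })
	if i < len(s.positions) && s.positions[i] == p {
		return false
	}
	s.positions = append(s.positions, 0)
	copy(s.positions[i+1:], s.positions[i:])
	s.positions[i] = p
	return true
}

func main() {
	var s symbolPositionSet
	for _, p := range []symbolPosition{3, 1, 3, 2} {
		s.add(p)
	}
	fmt.Println(s.positions) // [1 2 3]
}
```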
* Improve compilation time a little (Ryo Nihei, 2021-05-02; 1 file changed, -172/+97)
  A pattern like \p{Letter} generates an AST with many symbols concatenated by alt operators, which results in a large number of symbol positions in one state of the DFA. Such a pattern increases the compilation time. This commit improves the compilation time a little:

  - To avoid calling astNode#first and astNode#last recursively, memoize their results.
  - Use the byte sequence that symbol positions are encoded into as a hash value, to avoid using the fmt.Fprintf function.
  - Implement a sort function for symbol positions instead of using the sort.Slice function.
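A hedged sketch of the memoization point, with illustrative names (not maleeni's actual types): the result of first() is cached on the node so recursive traversals compute it at most once per node. The same idea applies to last().

```go
package main

import "fmt"

type nodeKind int

const (
	kindSymbol nodeKind = iota
	kindConcat
	kindAlt
	kindRepeat // Kleene star
)

// astNode is an illustrative AST node for the firstpos computation.
type astNode struct {
	kind        nodeKind
	left, right *astNode
	nullable    bool  // assumed to be precomputed when the node is built
	pos         int   // symbol position (leaves only)
	firstMemo   []int // cached result of first()
}

// first returns the classic firstpos set, computing it once per node.
func (n *astNode) first() []int {
	if n.firstMemo != nil {
		return n.firstMemo
	}
	switch n.kind {
	case kindSymbol:
		n.firstMemo = []int{n.pos}
	case kindConcat:
		n.firstMemo = append([]int{}, n.left.first()...)
		if n.left.nullable {
			n.firstMemo = append(n.firstMemo, n.right.first()...)
		}
	case kindAlt:
		n.firstMemo = append(append([]int{}, n.left.first()...), n.right.first()...)
	default: // kindRepeat
		n.firstMemo = append([]int{}, n.left.first()...)
	}
	return n.firstMemo
}

func main() {
	// (a|b)c with symbol positions 1, 2, 3.
	a := &astNode{kind: kindSymbol, pos: 1}
	b := &astNode{kind: kindSymbol, pos: 2}
	c := &astNode{kind: kindSymbol, pos: 3}
	root := &astNode{kind: kindConcat,
		left:  &astNode{kind: kindAlt, left: a, right: b},
		right: c,
	}
	fmt.Println(root.first()) // [1 2]; a second call returns the memoized slice
}
```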
* Increase the maximum number of symbol positions per pattern (Ryo Nihei, 2021-04-12; 1 file changed, -20/+38)
  This commit increases the maximum number of symbol positions per pattern to 2^15 (= 32,768). When the limit is exceeded, the parse method returns an error.
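A small sketch of how such a bound might be enforced; the parser type, field, and error message here are assumptions for illustration, not maleeni's actual code:

```go
package main

import "fmt"

// symbolPositionMax is the per-pattern budget of symbol positions: 2^15 = 32,768.
const symbolPositionMax = 1 << 15

// parser is an illustrative stand-in that hands out symbol positions to leaves.
type parser struct {
	nextPos int
}

// newPosition allocates the next symbol position or fails once the budget is spent.
func (p *parser) newPosition() (int, error) {
	if p.nextPos >= symbolPositionMax {
		return 0, fmt.Errorf("number of symbol positions exceeds the limit: %v", symbolPositionMax)
	}
	pos := p.nextPos
	p.nextPos++
	return pos, nil
}

func main() {
	p := &parser{}
	pos, err := p.newPosition()
	fmt.Println(pos, err) // 0 <nil>
}
```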
* Fix grammar the parser accepts (Ryo Nihei, 2021-04-11; 1 file changed, -1/+19)
  * Add cases that test the parse method.
  * Fix the parser to pass the cases.
* Add logging to compile command (Ryo Nihei, 2021-04-08; 1 file changed, -0/+37)
  The compile command writes logs out to the maleeni-compile.log file. When you use compiler.Compile(), you can choose whether the lexer writes logs or not.
* Add logical inverse expression (Ryo Nihei, 2021-04-01; 1 file changed, -9/+0)
  [^a-z] matches any character that is not in the range a-z.
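Conceptually, a negated class can be compiled as the complement of its ranges over the code point space. The sketch below illustrates only that idea, with assumed names; it ignores surrogate exclusion and the byte-level UTF-8 handling a real DFA compiler needs.

```go
package main

import (
	"fmt"
	"sort"
)

// runeRange is an inclusive range of code points.
type runeRange struct{ from, to rune }

// complement returns the ranges covering every code point not in the input,
// relative to U+0000..U+10FFFF.
func complement(ranges []runeRange) []runeRange {
	sort.Slice(ranges, func(i, j int) bool { return ranges[i].from < ranges[j].from })
	var out []runeRange
	next := rune(0)
	for _, r := range ranges {
		if r.from > next {
			out = append(out, runeRange{next, r.from - 1})
		}
		if r.to+1 > next {
			next = r.to + 1
		}
	}
	if next <= 0x10FFFF {
		out = append(out, runeRange{next, 0x10FFFF})
	}
	return out
}

func main() {
	// [^a-z]: everything below 'a' and everything above 'z'.
	fmt.Println(complement([]runeRange{{'a', 'z'}})) // [{0 96} {123 1114111}]
}
```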
* Refactoring (Ryo Nihei, 2021-02-25; 1 file changed, -11/+8)
  * Remove token field from symbolNode
  * Simplify notation of nested nodes
  * Simplify arguments of newSymbolNode()
* Add + and ? operators (Ryo Nihei, 2021-02-20; 1 file changed, -0/+42)
  * a+ matches 'a' one or more times. This is equivalent to aa*.
  * a? matches 'a' zero or one time.
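A minimal sketch of that desugaring with an illustrative AST type (not maleeni's actual constructors): a+ is built as concat(a, a*), and a? as the alternation a|ε.

```go
package main

import "fmt"

// ast is an illustrative regular-expression AST node.
type ast struct {
	op          string // "symbol", "concat", "alt", "star", "epsilon"
	sym         rune
	left, right *ast
}

// oneOrMore builds a+ as concat(a, star(a)). A real compiler would give the
// duplicated subtree its own symbol positions rather than sharing the node.
func oneOrMore(a *ast) *ast {
	return &ast{op: "concat", left: a, right: &ast{op: "star", left: a}}
}

// zeroOrOne builds a? as alt(a, epsilon).
func zeroOrOne(a *ast) *ast {
	return &ast{op: "alt", left: a, right: &ast{op: "epsilon"}}
}

func main() {
	a := &ast{op: "symbol", sym: 'a'}
	fmt.Println(oneOrMore(a).op, zeroOrOne(a).op) // concat alt
}
```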
* Fix computation of last positions (Ryo Nihei, 2021-02-17; 1 file changed, -0/+3)
* Add dot symbol matching any single character (Ryo Nihei, 2021-02-14; 1 file changed, -3/+26)
  The dot symbol matches any single character. When the dot symbol appears, the parser generates an AST matching all of the well-formed UTF-8 byte sequences.

  References:
  * https://www.unicode.org/versions/Unicode13.0.0/ch03.pdf#G7404
    * Table 3-6. UTF-8 Bit Distribution
    * Table 3-7. Well-Formed UTF-8 Byte Sequences
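For reference, Table 3-7 enumerates the byte ranges below; a dot can be compiled as an alternation over these sequences of ranges. The Go table is a transcription for illustration, not maleeni's representation.

```go
package main

// byteRange is an inclusive range of byte values.
type byteRange struct{ from, to byte }

// wellFormedUTF8 transcribes Table 3-7 (Well-Formed UTF-8 Byte Sequences):
// each row is one alternative, a sequence of byte ranges the dot must accept.
var wellFormedUTF8 = [][]byteRange{
	{{0x00, 0x7F}},                                           // U+0000..U+007F
	{{0xC2, 0xDF}, {0x80, 0xBF}},                             // U+0080..U+07FF
	{{0xE0, 0xE0}, {0xA0, 0xBF}, {0x80, 0xBF}},               // U+0800..U+0FFF
	{{0xE1, 0xEC}, {0x80, 0xBF}, {0x80, 0xBF}},               // U+1000..U+CFFF
	{{0xED, 0xED}, {0x80, 0x9F}, {0x80, 0xBF}},               // U+D000..U+D7FF
	{{0xEE, 0xEF}, {0x80, 0xBF}, {0x80, 0xBF}},               // U+E000..U+FFFF
	{{0xF0, 0xF0}, {0x90, 0xBF}, {0x80, 0xBF}, {0x80, 0xBF}}, // U+10000..U+3FFFF
	{{0xF1, 0xF3}, {0x80, 0xBF}, {0x80, 0xBF}, {0x80, 0xBF}}, // U+40000..U+FFFFF
	{{0xF4, 0xF4}, {0x80, 0x8F}, {0x80, 0xBF}, {0x80, 0xBF}}, // U+100000..U+10FFFF
}

func main() {} // table-only sketch; nothing to run
```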
* Add compiler (Ryo Nihei, 2021-02-14; 1 file changed, -0/+367)
  The compiler takes a lexical specification expressed by regular expressions and generates a DFA accepting the tokens. Operators that you can use in the regular expressions are concatenation, alternation, repeat, and grouping.
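To see those four operators in a parsed AST without relying on maleeni's own parser, here is a standalone illustration using Go's standard regexp/syntax package; it only shows the shape of a parsed pattern and is unrelated to this file's DFA construction.

```go
package main

import (
	"fmt"
	"regexp/syntax"
	"strings"
)

// opNames labels the operators relevant to the example pattern.
var opNames = map[syntax.Op]string{
	syntax.OpLiteral:   "literal",
	syntax.OpConcat:    "concat",
	syntax.OpAlternate: "alternate",
	syntax.OpStar:      "star",
	syntax.OpCapture:   "group",
}

// dump prints the operator tree of a parsed regular expression.
func dump(re *syntax.Regexp, depth int) {
	name := opNames[re.Op]
	if name == "" {
		name = fmt.Sprintf("op(%d)", re.Op)
	}
	fmt.Printf("%s%s %q\n", strings.Repeat("  ", depth), name, re.String())
	for _, sub := range re.Sub {
		dump(sub, depth+1)
	}
}

func main() {
	// Grouping, alternation, repeat (star), and concatenation in one pattern.
	re, err := syntax.Parse(`(ab|cd)*ef`, syntax.Perl)
	if err != nil {
		panic(err)
	}
	dump(re, 0)
}
```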