From 42ec7286b2d36a9ba22925f816a17cb1cc2aa5ce Mon Sep 17 00:00:00 2001
From: chai
Date: Sat, 30 Oct 2021 11:32:16 +0800
Subject: + Penlight

---
 .../Penlight/docs/libraries/pl.lexer.html | 524 +++++++++++++++++++++
 1 file changed, 524 insertions(+)
 create mode 100644 Data/Libraries/Penlight/docs/libraries/pl.lexer.html

diff --git a/Data/Libraries/Penlight/docs/libraries/pl.lexer.html b/Data/Libraries/Penlight/docs/libraries/pl.lexer.html
new file mode 100644
index 0000000..7b3d6fd
--- /dev/null
+++ b/Data/Libraries/Penlight/docs/libraries/pl.lexer.html
@@ -0,0 +1,524 @@

Penlight Documentation
Module pl.lexer
Lexical scanner for creating a sequence of tokens from text.

lexer.scan(s) returns an iterator over all tokens found in the string s. This iterator returns two values: a token type string (such as 'string' for a quoted string, or 'iden' for an identifier) and the value of the token.

Versions specialized for Lua and C are available; these also handle block comments and classify keywords as 'keyword' tokens. For example:
> s = 'for i=1,n do'
> for t,v in lexer.lua(s) do print(t,v) end
keyword for
iden    i
=       =
number  1
,       ,
iden    n
keyword do
See the Guide for further discussion.

Functions
scan (s, matches[, filter[, options]])                  create a plain token iterator from a string or file-like object.
insert (tok, a1, a2)                                    insert tokens into a stream.
getline (tok)                                           get everything in a stream up to a newline.
lineno (tok)                                            get current line number.
getrest (tok)                                           get the rest of the stream.
get_keywords ()                                         get the Lua keywords as a set-like table.
lua (s[, filter[, options]])                            create a Lua token iterator from a string or file-like object.
cpp (s[, filter[, options]])                            create a C/C++ token iterator from a string or file-like object.
get_separated_list (tok[, endtoken=')'[, delim=',']])   get a list of parameters separated by a delimiter from a stream.
skipws (tok)                                            get the next non-space token from the stream.
expecting (tok, expected_type, no_skip_ws)              get the next token, which must be of the expected type.
Functions
scan (s, matches[, filter[, options]])

create a plain token iterator from a string or file-like object.

Parameters:

  • s (string or file): a string, or a file-like object with a :read() method returning lines.
  • matches (tab): an optional match table, an array of token descriptions. A token is described by a {pattern, action} pair, where pattern should match the token body and action is a function called when a token of the described type is found.
  • filter (tab): a table of token types to exclude, by default {space=true}. (optional)
  • options (tab): a table of options; by default {number=true,string=true}, which means convert numbers and strip string quotes. (optional)
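A short sketch of the plain scanner in use. Token names follow the overview above, but the exact stream for this input is assumed rather than documented here:

local lexer = require 'pl.lexer'

-- default matches: numbers are converted, space tokens filtered out
for t, v in lexer.scan('alpha = 2*beta') do
  print(t, v)    -- expect e.g.: iden alpha, = =, number 2, * *, iden beta
end

-- passing an empty filter table (instead of the {space=true} default)
-- is assumed to keep the space tokens in the stream
for t, v in lexer.scan('a b', nil, {}) do
  print(t, v)
end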
insert (tok, a1, a2)

insert tokens into a stream.

Parameters:

  • tok: a token stream.
  • a1: a string is the type, a table is a token list, and a function is assumed to be a token-like iterator (returns type & value).
  • a2 (string): a string is the value.
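For instance, pushing a single token back so that it is returned ahead of the rest of the stream (a sketch; the outputs shown in the comments are assumed):

local lexer = require 'pl.lexer'

local tok = lexer.lua('x = 1')
lexer.insert(tok, 'keyword', 'local')   -- a1 is the type, a2 the value
print(tok())   -- keyword local   (the inserted token comes out first)
print(tok())   -- iden x          (then the original stream resumes)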
getline (tok)

get everything in a stream up to a newline.

Parameters:

  • tok: a token stream.

Returns:

  a string
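A sketch of grabbing the raw remainder of the current line instead of tokenizing it (the exact string returned, including any leading whitespace, is assumed):

local lexer = require 'pl.lexer'

local tok = lexer.lua('print "hi"\nsecond = 2\n')
print(tok())                  -- iden print
print(lexer.getline(tok))     -- the raw text after 'print' up to the newline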
lineno (tok)

get current line number.

Parameters:

  • tok: a token stream.

Returns:

  the line number. If the input source is a file-like object, also returns the column.
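For example (a sketch, assuming three tokens make up the first line of this input):

local lexer = require 'pl.lexer'

local tok = lexer.lua('a = 1\nb = 2\n')
tok(); tok(); tok()           -- consume 'a', '=' and 1: the whole first line
tok()                         -- first token of the second line
print(lexer.lineno(tok))      -- 2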
getrest (tok)

get the rest of the stream.

Parameters:

  • tok: a token stream.

Returns:

  a string
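A sketch of abandoning tokenization and taking the unconsumed input as one string (the exact whitespace in the result is assumed):

local lexer = require 'pl.lexer'

local tok = lexer.lua('local x = 42')
print(tok())                  -- keyword local
print(lexer.getrest(tok))     -- the unread text, roughly ' x = 42'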
get_keywords ()

get the Lua keywords as a set-like table, so that res["and"] etc. would be true.

Returns:

  a table
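For example:

local lexer = require 'pl.lexer'

local kw = lexer.get_keywords()
print(kw['and'], kw['while'])   -- true    true
print(kw['foo'])                -- nil (not a Lua keyword)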
lua (s[, filter[, options]])

create a Lua token iterator from a string or file-like object. Will return the token type and value.

Parameters:

  • s (string): the string.
  • filter (tab): a table of token types to exclude, by default {space=true,comments=true}. (optional)
  • options (tab): a table of options; by default {number=true,string=true}, which means convert numbers and strip string quotes. (optional)
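Besides the overview example above, the default filter can be overridden. A sketch that keeps comment tokens in the stream (the 'comment' type name is an assumption):

local lexer = require 'pl.lexer'

-- exclude only spaces, so comments come through as tokens
for t, v in lexer.lua('x = 1 -- set x', {space=true}) do
  print(t, v)
end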
cpp (s[, filter[, options]])

create a C/C++ token iterator from a string or file-like object. Will return the token type and value.

Parameters:

  • s (string): the string.
  • filter (tab): a table of token types to exclude, by default {space=true,comments=true}. (optional)
  • options (tab): a table of options; by default {number=true,string=true}, which means convert numbers and strip string quotes. (optional)
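Usage mirrors lexer.lua; a sketch (the exact classification of each token is assumed):

local lexer = require 'pl.lexer'

for t, v in lexer.cpp('for (int i = 0; i < n; ++i)') do
  print(t, v)    -- expect e.g.: keyword for, ( (, keyword int, iden i, ...
end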
get_separated_list (tok[, endtoken=')'[, delim=',']])

get a list of parameters separated by a delimiter from a stream.

Parameters:

  • tok: the token stream.
  • endtoken (string): end of list. Can be '\n'. (default ')')
  • delim (string): separator. (default ',')

Returns:

  a list of token lists.
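A sketch of collecting an argument list, assuming the stream is positioned just after an opening '(':

local lexer = require 'pl.lexer'

local tok = lexer.lua('a, b + 1, c) rest')
local args = lexer.get_separated_list(tok)   -- default endtoken ')', delim ','
print(#args)       -- 3: one token list each for 'a', 'b + 1' and 'c'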
skipws (tok)

get the next non-space token from the stream.

Parameters:

  • tok: the token stream.
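A sketch; this only matters when space tokens have not been filtered out (here an empty filter table is assumed to keep them in the stream):

local lexer = require 'pl.lexer'

local tok = lexer.lua('   x = 1', {})   -- keep space tokens
print(lexer.skipws(tok))                -- iden x, skipping the leading spaces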
expecting (tok, expected_type, no_skip_ws)

get the next token, which must be of the expected type. Throws an error if this type does not match!

Parameters:

  • tok: the token stream.
  • expected_type (string): the token type.
  • no_skip_ws (bool): whether we should skip whitespace.
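For example (a sketch; it is assumed that expecting returns the token's value on success):

local lexer = require 'pl.lexer'

local tok = lexer.lua('( x )')
local v = lexer.expecting(tok, '(')            -- succeeds: next token is '('
local ok, err = pcall(lexer.expecting, tok, 'number')
print(ok, err)                                 -- false, plus an error: the next token is an iden, not a number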
generated by LDoc 1.4.6