Introduction
A parser has many applications in everyday life, and you probably use one daily: Babel, webpack, ESLint, Prettier, and jscodeshift all run a parser behind the scenes that manipulates an Abstract Syntax Tree (AST) to do what you need. We'll talk about that later, don't worry.
The idea of this text is to introduce the concepts of lexing and parsing by implementing them in JavaScript to analyze JSON expressions. The goal is to split this process into functions, explain each of them, and, by the end, leave you able to implement a JSON parser that generates an AST.
It’s worth noting that my repository is open, and you can access it here.
Lexing
A lexer is responsible for converting an expression, whatever it may be, into tokens: identifiable elements that have an assigned meaning.
You can divide these tokens in several ways, such as identifier, keyword, delimiter, operator, literal, and comment.
{ "type": "LEFT_BRACE", "value": undefined }
is an example of a delimiter. { "type": "STRING", "value": "name" }
is an example of a literal.
Example: {"name":"Vitor","age":18}
Note that in lexical analysis, you separate your expression into tokens, and each token has its identification.
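To make that concrete, running the JSON above through the lexer we are about to build could produce a token list along these lines (a sketch based on the token names used throughout this article):

```json
[
  { "type": "LEFT_BRACE" },
  { "type": "STRING", "value": "name" },
  { "type": "COLON" },
  { "type": "STRING", "value": "Vitor" },
  { "type": "COMMA" },
  { "type": "STRING", "value": "age" },
  { "type": "COLON" },
  { "type": "NUMBER", "value": "18" },
  { "type": "RIGHT_BRACE" }
]
```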
To code this, let's first understand what exactly we will be doing. The idea of the `lexer` function is to receive an argument of type String and return an Array of tokens, which will be our JSON divided into specific types of information, as we have already seen and discussed.
To achieve this, we will create a variable called `current`, which will store the position of the character in the `input` currently being analyzed by the `lexer`. In other words, it represents the position we are at in our JSON. Additionally, we will have a constant called `tokens`, an array that will hold all of our tokens in the end.
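As a minimal sketch of that setup (the `Tokens` enum below is an assumption, modeled on the token names used in the examples above):

```javascript
// A simple enum of token type names. This exact shape is an
// assumption based on the token examples in this article.
const Tokens = {
  LEFT_BRACE: "LEFT_BRACE",
  RIGHT_BRACE: "RIGHT_BRACE",
  COLON: "COLON",
  COMMA: "COMMA",
  STRING: "STRING",
  NUMBER: "NUMBER",
  TRUE: "TRUE",
  FALSE: "FALSE",
};

function lexer(input) {
  let current = 0;   // position of the character being analyzed
  const tokens = []; // every token found so far

  // ...the main loop goes here (shown next)...

  return tokens;
}
```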
Now, we need to run a loop that will iterate until all the characters of the input have been processed.
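Here is one possible version of that loop, living inside `lexer`. The string and boolean branches deviate from the pattern and are sketched separately further down:

```javascript
while (current < input.length) {
  const char = input[current];

  if (char === "{") {
    tokens.push(createToken(Tokens.LEFT_BRACE));
    current++;
    continue;
  }

  if (char === "}") {
    tokens.push(createToken(Tokens.RIGHT_BRACE));
    current++;
    continue;
  }

  if (char === ":") {
    tokens.push(createToken(Tokens.COLON));
    current++;
    continue;
  }

  if (char === ",") {
    tokens.push(createToken(Tokens.COMMA));
    current++;
    continue;
  }

  // Numbers: concatenate consecutive digits into one NUMBER token.
  if (/[0-9]/.test(char)) {
    let value = "";
    while (current < input.length && /[0-9]/.test(input[current])) {
      value += input[current];
      current++;
    }
    tokens.push(createToken(Tokens.NUMBER, value));
    continue;
  }

  // The branches for strings ("...") and booleans (true/false)
  // go here; they are shown later in this section.

  // Skip whitespace between tokens.
  if (/\s/.test(char)) {
    current++;
    continue;
  }

  throw new TypeError(`Unexpected character: ${char}`);
}
```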
Note that it's possible to refactor all these `if` blocks into a `switch` statement. However, I followed an imperative style.
This code may seem complicated, but it's actually quite simple. Inside my loop, I start by defining the variable `char`, which will store the character currently being analyzed in that iteration of the loop. Then, for each type of character, we want a specific action.
If the `char` is equal to `{`, we push into the `tokens` array, passing it the corresponding entry of the `Tokens` enum, which gives each token its name.
The `createToken` function simply returns an object with the `type`, including a `value` only if one exists.
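A minimal version of `createToken` matching that description could be:

```javascript
// Builds a token object; "value" is only included when provided.
function createToken(type, value) {
  return value !== undefined ? { type, value } : { type };
}
```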
After that, we increment the `current` variable by 1 to move to the next `char` in our string. This process is quite repetitive and straightforward, so I won't explain each branch, but it's worth paying attention to the ones that deviate a bit from the pattern.
If the `char` is equal to `"`, we enter a loop to continue reading the subsequent characters until we find another quotation mark, as this indicates the end of the string. All the characters read during this loop are concatenated into the `value` variable. Then, a new token is added to the `tokens` array.
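That branch, placed inside the main loop, might look something like this:

```javascript
if (char === '"') {
  let value = "";
  current++; // skip the opening quote

  // Concatenate characters until the closing quote.
  while (input[current] !== '"') {
    value += input[current];
    current++;
  }

  current++; // skip the closing quote
  tokens.push(createToken(Tokens.STRING, value));
  continue;
}
```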
The other branches are used to handle boolean values. If the current character is `f` and the subsequent characters form the word `false`, we add a `false` token to the `tokens` array. The same process is repeated for `true`.
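A sketch of those branches, again as part of the main loop:

```javascript
// Booleans: check whether the next characters spell the keyword.
if (char === "f" && input.slice(current, current + 5) === "false") {
  tokens.push(createToken(Tokens.FALSE, "false"));
  current += 5;
  continue;
}

if (char === "t" && input.slice(current, current + 4) === "true") {
  tokens.push(createToken(Tokens.TRUE, "true"));
  current += 4;
  continue;
}
```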
In all cases, the `current` variable is incremented to point to the next character to be processed, just like we did throughout the rest of our code.
Parsing
A parser is responsible for transforming a sequence of tokens into a data structure, in this case, an AST.
Illustration of an AST taken from the book “Modern Compiler Implementation in ML.”
An Abstract Syntax Tree (AST) is a data structure that represents the syntactic structure of a program. Within the AST, there are several nodes, and each node represents a valid syntactic construct of the program. For example:
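For the JSON from the beginning of the article, `{"name":"Vitor","age":18}`, the AST might look something like this (a sketch using the node names discussed below):

```json
{
  "type": "Program",
  "body": [
    {
      "type": "ObjectExpression",
      "properties": [
        {
          "type": "Property",
          "key": { "type": "STRING", "value": "name" },
          "value": { "type": "StringLiteral", "value": "Vitor" }
        },
        {
          "type": "Property",
          "key": { "type": "STRING", "value": "age" },
          "value": { "type": "NumberLiteral", "value": 18 }
        }
      ]
    }
  ]
}
```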
This is the AST of the JSON I exemplified at the beginning. In this example, we have a total of 8 nodes. The `Program` node represents the main program. `ObjectExpression` represents an object, and `Property` represents a property within an object, consisting of a key and a value. `STRING` represents a string used as a key, `StringLiteral` represents a string used as a value within a property, and `NumberLiteral` represents a numeric value within a property.
Through an AST, it's possible to optimize code, transform one program into another, perform static analysis, generate code, and more. For example, you could invent a new syntax, write a parser for it, and generate JavaScript code that would execute normally.
To generate our AST, we will need a function that receives our array of tokens, iterates through it, and builds the AST according to the tokens it encounters. For this purpose, we will create a function called `walk`, which will traverse the tokens and return the nodes of the AST.
The first check we perform is whether the current token is `{`. If it is, we create a new node of type `ObjectExpression` and iterate through the following tokens, adding them as properties of the object until we find the `}` that closes it. Each property is represented by a node of type `Property`, and its value is generated by the `walk()` function, which is called recursively.
Note that I use `tokens[++current]`. This is to advance the cursor to the next token. And if a token of type `,` (comma) is found, we advance the cursor again to skip the comma.
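Putting that together, a sketch of `walk()` might look like this. It assumes `tokens` and `current` live in an enclosing scope, which is how the `parser` function shown further down wires things up:

```javascript
function walk() {
  let token = tokens[current];

  if (token.type === Tokens.LEFT_BRACE) {
    const node = { type: "ObjectExpression", properties: [] };
    token = tokens[++current]; // move past the "{"

    while (token.type !== Tokens.RIGHT_BRACE) {
      // A property is a STRING key, a COLON, and then a value.
      const property = {
        type: "Property",
        key: { type: "STRING", value: token.value },
      };
      current += 2;            // skip the key and the ":"
      property.value = walk(); // the value may itself be an object
      node.properties.push(property);

      token = tokens[current];
      if (token && token.type === Tokens.COMMA) {
        token = tokens[++current]; // skip the ","
      }
    }

    current++; // move past the "}"
    return node;
  }

  if (token.type === Tokens.STRING) {
    current++;
    return { type: "StringLiteral", value: token.value };
  }

  if (token.type === Tokens.NUMBER) {
    current++;
    return { type: "NumberLiteral", value: Number(token.value) };
  }

  if (token.type === Tokens.TRUE || token.type === Tokens.FALSE) {
    current++;
    return { type: "BooleanLiteral", value: token.type === Tokens.TRUE };
  }

  throw new TypeError(`Unexpected token: ${token.type}`);
}
```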
The rest of the code is quite similar to what I've just explained, so it's worth taking the time to read it and try to understand or implement it on your own. It's not very complex.
Finally, I create the constant `ast`, which will contain the type `Program` and the body of the AST, generated by the `walk()` function. The while loop ensures that `current` does not exceed the size of the tokens array.
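A skeleton of that `parser` function might be:

```javascript
function parser(tokens) {
  let current = 0;

  function walk() {
    // ...the walk() implementation sketched above goes here...
  }

  const ast = {
    type: "Program",
    body: [],
  };

  // walk() advances "current", so this loop runs until
  // every token has been consumed.
  while (current < tokens.length) {
    ast.body.push(walk());
  }

  return ast;
}
```

Calling `parser(lexer('{"name":"Vitor","age":18}'))` would then produce the AST shown earlier.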
After that, we simply return the AST.
Parsing can be fun, but in reality, parsers are not written by hand very often, and you probably don't need to write your own. If you are implementing a programming language, for example, there are already various tools that can do this job for you, such as OCamllex, Menhir, or Nearley.
It's also essential to note that, as this is an introductory article, I didn't cover the different parsing techniques, such as LR(0), SLR(1), LR(1), etc. However, be aware that these techniques exist, and you can research more about them. There are also many books that cover these topics.
If you want to see how the AST of popular languages looks, I recommend the AST Explorer. It supports various languages, and you can view the complete AST and navigate through the nodes. If you want to go further, you can try to copy some logic from an existing parser and implement it in your own, such as evaluating an expression according to precedence order, for example: `1 + 2 * 3` (which is 7, not 9).
If you're interested in learning more, I recommend the book "Modern Compiler Implementation in ML." Despite the ML in the title, you can study from it without necessarily writing ML code, as there are also versions written in C and Java.