Our paper presents STScript, a toolchain that generates TypeScript APIs for communication-safe web development over WebSockets, and RouST, a new session type theory that supports multiparty communications with routing mechanisms.
This overview describes the steps to assess the practical claims of the paper using the artifact.
In this section, we outline how to access and run the artifact. We also introduce the layout of this repository, which is used to build the artifact Docker image.
We provide a Docker image with the necessary dependencies. The following steps assume a Unix environment with Docker properly installed. On other platforms supported by Docker, the image can be imported in a similar way.
Make sure that the Docker daemon is running.
Load the Docker image (use sudo if necessary):
$ docker load < stscript-cc21-artifact.tar.gz
You should see the following as output after the last operation:
Loaded image: stscript-cc21-artifact
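To double-check that the image was imported, you can list your local Docker images and confirm that stscript-cc21-artifact appears (the tag may show as latest):
$ docker images stscript-cc21-artifact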
Alternatively, you can build the Docker image from source:
$ git clone --recursive \
https://github.com/STScript-2020/cc21-artifact
$ cd cc21-artifact
$ docker build . -t "stscript-cc21-artifact"
To run the image, run the following command (use sudo if necessary):
$ docker run -it -p 127.0.0.1:5000:5000 \
-p 127.0.0.1:8080:8080 -p 127.0.0.1:8888:8888 \
stscript-cc21-artifact
This command publishes ports 5000, 8080 and 8888 (used by the performance benchmarks, the case-study web applications and the Jupyter notebook, respectively) and opens a terminal inside the container. To run the STScript toolchain (e.g. to show the help text):
stscript@stscript:~$ codegen --help
For example, consider the following command:
$ codegen ~/protocols/TravelAgency.scr TravelAgency A \
browser -s S -o ~/case-studies/TravelAgency/client/src
This command will:
- Generate APIs for role A of the TravelAgency protocol specified in ~/protocols/TravelAgency.scr;
- Implement role A as a browser endpoint, assuming role S to be the server;
- Output the generated APIs under the path ~/case-studies/TravelAgency/client/src.
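The build scripts under ~/scripts (e.g. build_travel-agency) generate the APIs for the remaining endpoints in the same way. As a sketch only, the server-side APIs for role S could be generated as below; the node target keyword and the output path are assumptions, so consult codegen --help and the build scripts for the exact invocation:
# Sketch: the 'node' target and output path are illustrative, not verbatim from the artifact.
$ codegen ~/protocols/TravelAgency.scr TravelAgency S \
    node -o ~/case-studies/TravelAgency/node/src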
The repository is organised as follows:
- scribble-java contains the Scribble toolchain for handling multiparty protocol descriptions, a dependency of our toolchain.
- codegen contains the source code of our code generator, written in Python, which generates TypeScript code for implementing the provided multiparty protocol.
- protocols contains various Scribble protocol descriptions, including those used in the case studies.
- case-studies contains 3 case studies of implementing interactive web applications with our toolchain, namely Noughts and Crosses, Travel Agency, and Battleships.
- perf-benchmarks contains the code to generate performance benchmarks, including an iPython notebook to visualise the benchmarks collected from an experiment run.
- scripts contains various convenience scripts to run the toolchain and build the case studies.
- web-sandbox contains configuration files for web development, e.g. TypeScript configurations and NPM package.json files.
In this section, we explain the workflow for carrying out the experiments to verify the claims made in our paper.
To run the end-to-end tests:
# Run from any directory
$ run_tests
The end-to-end tests verify that:
- STScript correctly parses the Scribble protocol specification files;
- STScript correctly generates TypeScript APIs; and
- the generated APIs can be successfully type-checked by the TypeScript Compiler.
The protocol specification files, describing the multiparty communication, are located in ~/codegen/tests/system/examples. The generated APIs are saved under ~/web-sandbox (a sandbox environment set up for the TypeScript Compiler) and are deleted when the test run finishes.
Verify that all tests pass. You should see the following output (the exact test execution time may vary):
-------------------------------------------------------
Ran 14 tests in 171.137s
OK
Passing the end-to-end tests means that our STScript toolchain correctly generates type-correct TypeScript code.
We include three case studies of realistic web applications, namely Noughts and Crosses, Travel Agency and Battleships, implemented using the generated APIs, to show their expressiveness and compatibility with modern web programming practices.
This is the classic turn-based 2-player game as introduced in §5. To generate the APIs for both players and the game server:
# Run from any directory
$ build_noughts-and-crosses
To run the case study:
$ cd ~/case-studies/NoughtsAndCrosses
$ npm start
Visit http://localhost:8080 in two web browser windows side by side, one for each player. Play the game; you may refer to https://youtu.be/SBANcdwpYPw for an example game execution as a starting point.
You may also verify the following:
- Open 4 web browsers to play 2 games simultaneously. Observe that the state of each game board is consistent with its game, i.e. moves do not get propagated to the wrong game.
- Open 2 web browsers to play a game, and close one of them mid-game. Observe that the remaining player is notified that their opponent has forfeited the match.
Additional Notes:
- Refresh both web browsers to start a new game.
- Stop the web application by pressing Ctrl+C on the terminal.
This is the running example of our paper, as introduced in §1. To generate the APIs for both travellers and the agency:
# Run from any directory
$ build_travel-agency
To run the case study:
$ cd ~/case-studies/TravelAgency
$ npm start
Visit http://localhost:8080 in two web browser windows side by side, one for each traveller. Execute the Travel Agency service; you may refer to https://youtu.be/mZzIBYP_Xac for an example execution as a starting point. You may verify the following:
- Log in as Friend and Customer on separate windows.
- As Friend, suggest 'Tokyo'. As Customer, query for 'Tokyo'. Expect to see that there is no availability.
- As Friend, suggest 'Edinburgh'. As Customer, query for 'Edinburgh'. Expect to see that there is availability, then ask Friend. As Friend, enter a valid numeric split and press OK. As Customer, enter any string for your name and any numeric value for credit card, and press OK. Expect to see that both roles show success messages.
- Refresh both web browsers and log in as Friend and Customer on separate windows again. As Friend, suggest 'Edinburgh' again. As Customer, query for 'Edinburgh'. Expect to see that there is no availability, as the last seat has been taken.
Stop the web application by pressing Ctrl+C on the terminal.
This is a turn-based 2-player board game with more complex application logic compared with Noughts and Crosses. To generate the APIs for both players and the game server:
# Run from any directory
$ build_battleships
To run the case study:
$ cd ~/case-studies/Battleships
$ npm start
Visit http://localhost:8080 in two web browser windows side by side, one for each player. Play the game; you may refer to https://youtu.be/cGrKIZHgAtE for an example game execution as a starting point.
Additional Notes:
- Refresh both web browsers to start a new game.
- Stop the web application by pressing Ctrl+C on the terminal.
We include a script to run the performance benchmarks introduced in Appendix C.1. By default, the script executes the same experiment configurations as in the paper: the Ping Pong protocol, with and without additional UI requirements, is run with 100 and 1000 messages, and each experiment is repeated 20 times. Refer to §3.2 on how to customise these parameters.
To run the performance benchmarks:
$ cd ~/perf-benchmarks
$ ./run_benchmark.sh
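For reference, the default configuration described above should be equivalent to passing the message counts and repetition count explicitly via the -m and -r flags documented in §3.2; the sketch below simply restates the defaults and is optional:
# Equivalent to the defaults: 100 and 1000 messages, 20 runs per experiment.
$ ./run_benchmark.sh -m 100 1000 -r 20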
Note: If the terminal log gets stuck at Loaded client page, open a web browser and access http://localhost:5000.
Observe the following discrepancies between the artifact and the paper:
- The simple_pingpong example in the artifact refers to the Ping Pong protocol without UI requirements in the paper.
- The complex_pingpong example in the artifact refers to the Ping Pong protocol with UI requirements in the paper.
To visualise the performance benchmarks, run:
$ cd ~/perf-benchmarks
$ jupyter notebook --ip=0.0.0.0
/* ...snip... */
To access the notebook, open this file in a browser:
/* ...snip... */
Or copy and paste one of these URLs:
http://stscript:8888/?token=<token>
or http://127.0.0.1:8888/?token=<token>
Use a web browser to open the URL in the terminal output beginning with http://127.0.0.1:8888. Open the STScript Benchmark Visualisation.ipynb notebook. Click on Kernel -> Restart & Run All from the top menu bar. Tables 1 and 2 from the paper can be located by scrolling to the bottom of the notebook. Verify the following claims made in the paper against those tables:
- Simple Ping Pong ("w/o req"):
  - Time taken by node is less than time taken by react, which entails that "the round trip time is dominated by the browser-side message processing time".
  - The delta (of mpst relative to bare) for the React endpoints is greater than the delta for the Node endpoints, which entails that "mpst introduces overhead dominated by the React.js session runtime".
- Complex Ping Pong ("w/ req"):
  - Inspect the difference in message processing time between Simple Ping Pong and Complex Ping Pong. This difference is greater for bare implementations compared with mpst implementations, which entails that "the UI requirements require bare to perform additional state updates and rendering, reducing the overhead relative to mpst".
Stop the notebook server by pressing Ctrl+C on the terminal, and confirm the shutdown command by entering y.
In this section, we show how to customise the experiment workflow to implement your own use case.
We provide a step-by-step guide on implementing your own web applications using STScript in the wiki. We use the Adder protocol as an example, but you are free to use your own protocol. Other example protocols (including Adder) can be found under ~/protocols.
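For instance, generating browser-side APIs for Adder follows the same invocation pattern as the TravelAgency example earlier. The snippet below is a sketch only: the role names and output path are hypothetical placeholders, so check ~/protocols/Adder.scr for the actual role declarations before running it:
# Sketch: role names (C, S) and the output path are hypothetical placeholders.
$ codegen ~/protocols/Adder.scr Adder C \
    browser -s S -o ~/my-adder-app/client/src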
You can customise the number of messages exchanged during the Ping Pong protocol and the number of runs for each experiment. These parameters are represented in the run_benchmark.sh script by the -m and -r flags respectively.
For example, to set up two configurations -- running Ping Pong with 100 round trips and 1000 round trips -- and run each configuration 100 times:
$ cd ~/perf-benchmarks
$ ./run_benchmark.sh -m 100 1000 -r 100
Note: running ./run_benchmark.sh will clear any existing logs.
Refer to §2.3 for instructions on visualising the logs from the performance benchmarks.
Note: If you change the message configuration (i.e. the -m flag), update the NUM_MSGS tuple located in the first cell of the notebook, as shown below:
# Update these variables if you wish to
# visualise other benchmarks.
VARIANTS = ('bare', 'mpst')
NUM_MSGS = (100, 1000)
This work is licensed under the Apache 2.0 Licence.