This project was initially written by chimmykk and bengtlofgren. It has since been optimized, bug-fixed, and enhanced with additional logic to meet the requirements of the donor drop campaign and the official frontend. This project is licensed under the MIT license (see LICENSE).
- Contains an SQL database, which can be set up via Docker.
- Contains a scraper, which depends on the existence of this SQL database.
```shell
npm install
```

or

```shell
yarn install
```
Create an `.env` file in the root; see `.env.example` for an example.

Before continuing, read the comments in the `docker-compose.yml` file and configure it properly. Then run:

```shell
docker-compose up -d
```
This will set up the correct Postgres database running on `POSTGRES_PORT` (default port: 5434). The tables and views that will be created are specified in the `init-scripts/init.sql` file.
For easy access to the database, you can use a tool like pgAdmin or DBeaver with the credentials specified in your `.env` file.
> **Note:** For a quick reset, use the `clean-start.sh` script. **IMPORTANT:** this will wipe the ENTIRE database.
> **Tip:** Use a separate systemctl service to run the scraper. See issue #22 for a template.

Run the scraper with:

```shell
node scraper.mjs
```
There are currently two flags the scraper can be run with:

- `--once`

  ```shell
  node scraper.mjs --once
  ```

  This will only let the scraper do a single run. Useful if you just want to fetch data once, without letting it check Etherscan/Infura every n seconds.

- `--all-etherscan-txs`

  ```shell
  node scraper.mjs --all-etherscan-txs
  ```

  This flag acts as if `--once` were set as well. It will get all transactions that meet the conditions described in A.1 Donation finality, without doing any memo or tnam validation. This is useful if we want to give the people that made a mistake during the donor drop the opportunity to link their tnams again. See A.2 Rescue plan for a detailed description of how to approach that.
The following commands can be used to export the end results in `.csv` format:

- List of perfect users (the TOTAL SUM of these will equal the target ETH amount, or less if the target was not reached):

  ```sql
  copy(SELECT from_address, tnam, eligible_amount as eth, suggested_nam FROM private_result_eligible_addresses_finalized_in_db) To '/var/lib/postgresql/private_result_eligible_addresses_finalized_in_db.csv' With CSV DELIMITER ',' HEADER;
  ```

- List of users who donated after the cap got reached:

  ```sql
  copy(SELECT from_address, tnam, eligible_above_cap as eth, suggested_nam FROM private_result_above_cap_addresses_in_db) To '/var/lib/postgresql/private_result_above_cap_addresses_in_db.csv' With CSV DELIMITER ',' HEADER;
  ```

- List of users who were initially not included due to mistakes, but got corrected using A.2 Rescue plan:

  ```sql
  copy(SELECT from_address, tnam, sig_hash, total_eth as eth, suggested_nam FROM private_result_addresses_not_in_db) To '/var/lib/postgresql/private_result_addresses_not_in_db.csv' With CSV DELIMITER ',' HEADER;
  ```
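Note that the `copy ... To '/var/lib/postgresql/...'` form above is a server-side COPY: the CSV is written to the database server's filesystem, which with the Docker setup above means inside the Postgres container. As a sketch (the connection parameters below are assumptions; match them to your `.env`), psql's client-side `\copy` can write the file to your local machine instead:

```shell
# Client-side variant: \copy streams the rows over the connection and
# writes the CSV locally. Host/port/user/database are assumptions here;
# use the values from your .env (default port: 5434).
psql -h localhost -p 5434 -U postgres -d postgres \
  -c "\copy (SELECT from_address, tnam, eligible_amount as eth, suggested_nam FROM private_result_eligible_addresses_finalized_in_db) to 'eligible.csv' with csv delimiter ',' header"
```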
By default the scraper will periodically check for transactions made to the address defined in your `.env` file. It uses a combination of information gathered from the Etherscan and Infura APIs, and only picks up the transactions that meet the following conditions:
- donation x comes from block n, where n >= `SCRAPER_START_BLOCK`.
- donation x has transaction date d, where d >= `SCRAPER_START_DATE` and d <= `SCRAPER_END_DATE`.
- donation x has hex h in the transaction's memo field, where decode(h) is a valid tnam address. The decode method is quite robust and auto-corrects most of the common mistakes people make (e.g. a multi-encoded hex string, forgetting the '0x' part, adding more characters than necessary).
- donation x is not a failed transaction.
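As an illustration of the kind of auto-correction described above, a tolerant decoder could look like the minimal sketch below. This is not the actual implementation from `scraper.mjs`; the function name and the bounded decode loop are assumptions for illustration only:

```javascript
// Illustrative sketch of a tolerant memo decoder (the real logic lives in
// scraper.mjs and may differ). It repeatedly hex-decodes the input,
// tolerating a missing "0x" prefix, multi-encoded hex strings, and extra
// trailing characters, until a tnam-looking address appears.
function decodeTnamMemo(input) {
  let s = String(input).trim();
  for (let i = 0; i < 5; i++) { // bound the number of decode passes
    if (s.startsWith("0x") || s.startsWith("0X")) s = s.slice(2);
    // bech32-style charset (no b, i, o, 1) after the "tnam1" prefix
    const m = s.match(/tnam1[02-9ac-hj-np-z]{10,}/i);
    if (m) return m[0].toLowerCase();
    if (s.length % 2 === 1) s = s.slice(0, -1); // drop a dangling nibble
    if (!/^[0-9a-fA-F]+$/.test(s)) return null; // not hex: give up
    s = Buffer.from(s, "hex").toString("utf8"); // peel one hex layer
  }
  return null;
}
```

The loop bound keeps a pathological input from decoding forever, while still handling the common cases (plain tnam, single- and double-encoded hex, with or without `0x`).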
The scraper starts two schedulers: one that registers any transaction passing the requirements above, and another that also considers block finality.
> **Note: Why two schedulers?**
>
> A transaction is only certain once its block is completely finalized on-chain, which takes 15 to 20 minutes on average. That is problematic if we want to show a tally in real time. To solve this, we temporarily use the data from the scheduler that ignores finalization as an indication, while the actual results are calculated from the data of the finalized scheduler. The frontend makes sure to take both the real-time and finalized data into account and visualize them accordingly.
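The split between the two data sets can be sketched as follows (function and field names are hypothetical; the real scheduler logic lives in `scraper.mjs`):

```javascript
// Illustrative sketch of the two-scheduler idea. Given the latest
// finalized block height, transactions split into a "finalized" set
// (authoritative, used for the real results) and a "pending" set
// (shown only as a real-time indication, since those blocks could
// in principle still be reorganized).
function splitByFinality(txs, finalizedHeight) {
  const finalized = txs.filter((tx) => tx.blockNumber <= finalizedHeight);
  const pending = txs.filter((tx) => tx.blockNumber > finalizedHeight);
  return { finalized, pending };
}

// In the scraper, two timers then poll independently, e.g.:
//   setInterval(pollLatestTxs, n * 1000);    // real-time tally
//   setInterval(pollFinalizedTxs, n * 1000); // authoritative results
```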
If there are people who messed up their donation, the following can be done:

- Wait for approx. 30 minutes after the donor drop ended (to make sure all ETH blocks are in a `finalized` state).
- (Optional) Adjust `SCRAPER_START_DATE`, `SCRAPER_END_DATE` and `SCRAPER_START_BLOCK` in your `.env` file.
- Run `node scraper.mjs --all-etherscan-txs`.
- Double-check the data this command gathered in the `etherscan_transactions_all` table. It should contain every transaction made between `SCRAPER_START_DATE` and `SCRAPER_END_DATE`.
- Switch the frontend to the `with-link` branch and re-deploy it.
- Let people link their tnam addresses using the frontend (this form will only allow wallet addresses that failed to register a tnam address).
- Keep track of the results by checking `unaccounted_addresses` and `private_result_addresses_not_in_db`.
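For that last step, queries along these lines can be run against the live database (the column list mirrors the export query shown earlier; the count query assumes one row per unlinked address):

```sql
-- Wallet addresses that still have no valid tnam linked:
SELECT COUNT(*) FROM unaccounted_addresses;

-- Entries rescued so far:
SELECT from_address, tnam, sig_hash, total_eth AS eth, suggested_nam
FROM private_result_addresses_not_in_db;
```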
The testing suite works as follows (if you have not already done so, please run `npm install` and set up docker-compose as described above):

The tests will be run against the database specified in the `.env` file. Ideally this would be done against a `.env.test` file, but for the purposes of this project, the `.env` file is used.

To run the tests, use the following command:

```shell
npm test -- --detectOpenHandles --verbose
```

All tests should pass.