GDA Support (#239)
---------

Co-authored-by: Axe <[email protected]>
Co-authored-by: didi <[email protected]>
Co-authored-by: Philip Andersson <[email protected]>
4 people authored Oct 23, 2023
1 parent 519d0b8 commit d8698c1
Showing 67 changed files with 12,683 additions and 46,149 deletions.
9 changes: 7 additions & 2 deletions .github/workflows/ci.yml
@@ -13,7 +13,7 @@ jobs:

strategy:
matrix:
node-version: [16.x]
node-version: [18.x]

steps:
- uses: actions/checkout@v3
@@ -30,10 +30,15 @@ jobs:
uses: actions/setup-node@v3
with:
node-version: ${{ matrix.node-version }}
registry-url: "https://npm.pkg.github.com"
env:
NODE_AUTH_TOKEN: ${{secrets.GITHUB_TOKEN}}

- name: Install, lint & build
run: |
npm ci
yarn install
env:
NODE_AUTH_TOKEN: ${{secrets.GITHUB_TOKEN}}

- name: run node unit tests
run: |
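The added `registry-url` and `NODE_AUTH_TOKEN` lines above make `setup-node` authenticate package installs against the GitHub Packages registry. Outside CI, the equivalent is a registry entry in an `.npmrc` that reads the token from the environment. A minimal sketch (the `@superfluid-finance` scope is an assumption for illustration, not taken from this diff):

```shell
# Sketch: replicate the CI registry auth locally.
# The scope name is hypothetical; the token stays in the environment, never hard-coded.
NPMRC="$(mktemp)"
cat > "$NPMRC" <<'EOF'
@superfluid-finance:registry=https://npm.pkg.github.com
//npm.pkg.github.com/:_authToken=${NODE_AUTH_TOKEN}
EOF
# Confirm both registry and auth lines are present.
grep -c 'npm.pkg.github.com' "$NPMRC"
```

With such a file in place (as `~/.npmrc` or the project `.npmrc`), `yarn install` resolves scoped packages through GitHub Packages using `NODE_AUTH_TOKEN`, mirroring what the workflow step does in CI.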
24 changes: 13 additions & 11 deletions .gitignore
@@ -1,19 +1,21 @@
node_modules
.env
database.sqlite
datadir/
.idea/
.vscode
snapshots/
coverage
typechain

# Hardhat files
cache
artifacts

database.sqlite
.env
.env*
!.env-example

.DS_Store

datadir/

*.sqlite

.idea/

snapshots/

.npmrc
coverage.json
typechain-types
10 changes: 6 additions & 4 deletions Dockerfile
@@ -1,16 +1,18 @@
# syntax = docker/dockerfile:1.3

# Always add commit hash for reproducibility
FROM node:16-alpine@sha256:43b162893518666b4a08d95dae49153f22a5dba85c229f8b0b8113b609000bc2
FROM node:18-alpine@sha256:3482a20c97e401b56ac50ba8920cc7b5b2022bfc6aa7d4e4c231755770cf892f

# Enable prod optimizations
ENV NODE_ENV=production

WORKDIR /app
RUN apk add --update --no-cache g++ make python3 && ln -sf python3 /usr/bin/python
RUN apk add --update --no-cache g++ make python3 && \
ln -sf python3 /usr/bin/python && \
apk add --update --no-cache yarn

COPY ["package.json", "package-lock.json*", "./"]
RUN npm ci --only=production
COPY ["package.json", "yarn.lock", "./"]
RUN yarn install --frozen-lockfile --production
COPY . /app

# make sure we can write the data directory
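The `FROM` line above pins the base image by digest rather than by tag alone, so rebuilds always resolve to the same image. A quick sketch of how such pinning can be checked mechanically (the file path is a temp file for illustration only; the digest is the one from the Dockerfile above):

```shell
# Sketch: verify a Dockerfile pins its base image by sha256 digest.
DF="$(mktemp)"
printf 'FROM node:18-alpine@sha256:3482a20c97e401b56ac50ba8920cc7b5b2022bfc6aa7d4e4c231755770cf892f\n' > "$DF"
# A digest-pinned FROM line has the form image:tag@sha256:<64 hex chars>.
if grep -Eq '^FROM [^ ]+@sha256:[0-9a-f]{64}' "$DF"; then
  echo "base image is digest-pinned"
fi
```

A tag like `node:18-alpine` can move to a new image over time; the `@sha256:…` digest cannot, which is what the Dockerfile comment means by reproducibility.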
14 changes: 7 additions & 7 deletions README.md
@@ -31,7 +31,7 @@ Check `.env.example` for additional configuration items and their documentation.

### Native Setup

Requires Node.js v16+ and npm already installed.
Requires Node.js v18+ and yarn already installed.

First, check out this repository and cd into it:

@@ -43,7 +43,7 @@ cd superfluid-sentinel
Then install dependencies with:

```
NODE_ENV=production npm ci
NODE_ENV=production yarn install
```

Before starting the instance, make sure it's configured according to your needs. The configuration can be provided
@@ -56,7 +56,7 @@ cp superfluid-sentinel.service.template superfluid-sentinel.service
```

Then edit `superfluid-sentinel.service` to match your setup. You need to set the working directory to the root directory
of the sentinel, the username to execute with and the path to npm on your system. Then you can install and start the
of the sentinel, the username to execute with and the path to yarn on your system. Then you can install and start the
service:

```
@@ -123,13 +123,13 @@ use the wrong RPC node or write to a sqlite file created for a different network
With the env files in place, you can start instances like this:

```
npm start <network-name>
yarn start <network-name>
```

For example: `npm start xdai` will start an instance configured according to the settings in `.env-xdai`.
For example: `yarn start xdai` will start an instance configured according to the settings in `.env-xdai`.

If you use systemd, create instance specific copies of the service file, e.g. `superfluid-sentinel-xdai.service`, and
add the network name to the start command, e.g. `ExecStart=/usr/bin/npm start xdai`.
add the network name to the start command, e.g. `ExecStart=/home/ubuntu/.nvm/nvm-exec yarn start xdai`.
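The service template itself is not shown in this diff. As a rough illustration only (unit name, user, and paths are hypothetical, not taken from the repository), an instance-specific unit edited as described above might look like:

```
[Unit]
Description=Superfluid Sentinel (xdai)
After=network-online.target

[Service]
User=sentinel
WorkingDirectory=/home/sentinel/superfluid-sentinel
Environment=NODE_ENV=production
ExecStart=/home/ubuntu/.nvm/nvm-exec yarn start xdai
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After copying it to the systemd unit directory, `systemctl enable --now superfluid-sentinel-xdai.service` would install and start that instance.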

#### Update

@@ -142,7 +142,7 @@ git pull
```
in order to get the latest version of the code. Then do
```
NODE_ENV=production npm ci
NODE_ENV=production yarn install
```
in order to update dependencies if needed.
Then restart the service(s). E.g. for a single instance running with systemd, do
230 changes: 230 additions & 0 deletions _old_tests/integration/batch.integration.test.js
@@ -0,0 +1,230 @@
const BatchLiquidator = require("../../src/abis/BatchLiquidator.json");

const protocolHelper = require("../../test/utils/protocolHelper");
const expect = require("chai").expect;
const ganache = require("../../test/utils/ganache");
const App = require("../../src/app");

const AGENT_ACCOUNT = "0x868D9F52f84d33261c03C8B77999f83501cF5A99";

let app, accounts, snapId, protocolVars, web3, batchContract;

// eslint-disable-next-line promise/param-names
const delay = ms => new Promise(res => setTimeout(res, ms));
const exitWithError = (error) => {
console.error(error);
process.exit(1);
};

const deployBatchContract = async () => {
if (batchContract === undefined) {
const contract = new web3.eth.Contract(BatchLiquidator.abi);
const res = await contract.deploy({
data: BatchLiquidator.bytecode,
arguments: [protocolVars.host._address, protocolVars.cfa._address]
}).send({
from: accounts[0],
gas: 1500000,
gasPrice: "1000"
});
batchContract = res;
console.log(`BatchLiquidator address: ${res._address}`);
}
};

const bootNode = async (config) => {
const sentinelConfig = protocolHelper.getSentinelConfig(config);
app = new App(sentinelConfig);
app.start();
while (!app.isInitialized()) {
await protocolHelper.timeout(3000);
}
};

const closeNode = async (force = false) => {
if (app !== undefined) {
return app.shutdown(force);
}
};

describe("Integration scripts tests", () => {
before(async function () {
protocolVars = await protocolHelper.setup(ganache.provider, AGENT_ACCOUNT);
web3 = protocolVars.web3;
accounts = protocolVars.accounts;
await deployBatchContract();
snapId = await ganache.helper.takeEvmSnapshot();
});

beforeEach(async () => {
});

afterEach(async () => {
try {
snapId = await ganache.helper.revertToSnapShot(snapId.result);
} catch (err) {
exitWithError(err);
}
});

after(async () => {
if (!app._isShutdown) {
await closeNode(true);
}
await ganache.close();
});

it("Send a batch Liquidation to close multi streams", async () => {
try {
const flowData1 = protocolVars.cfa.methods.createFlow(protocolVars.superToken._address, accounts[0], "1000000000000000", "0x").encodeABI();
await protocolVars.host.methods.callAgreement(protocolVars.cfa._address, flowData1, "0x").send({
from: accounts[1],
gas: 1000000
});
await protocolVars.host.methods.callAgreement(protocolVars.cfa._address, flowData1, "0x").send({
from: accounts[2],
gas: 1000000
});
await protocolVars.host.methods.callAgreement(protocolVars.cfa._address, flowData1, "0x").send({
from: accounts[3],
gas: 1000000
});
await protocolVars.host.methods.callAgreement(protocolVars.cfa._address, flowData1, "0x").send({
from: accounts[4],
gas: 1000000
});
await protocolVars.host.methods.callAgreement(protocolVars.cfa._address, flowData1, "0x").send({
from: accounts[5],
gas: 1000000
});
const tx = await protocolVars.superToken.methods.transferAll(accounts[9]).send({
from: accounts[1],
gas: 1000000
});
await protocolVars.superToken.methods.transferAll(accounts[9]).send({
from: accounts[2],
gas: 1000000
});
await protocolVars.superToken.methods.transferAll(accounts[9]).send({
from: accounts[3],
gas: 1000000
});
await protocolVars.superToken.methods.transferAll(accounts[9]).send({
from: accounts[4],
gas: 1000000
});
await protocolVars.superToken.methods.transferAll(accounts[9]).send({
from: accounts[5],
gas: 1000000
});
await bootNode({batch_contract: batchContract._address, polling_interval: 1, max_tx_number: 5});
await ganache.helper.timeTravelOnce(1000, app, true);
const result = await protocolHelper.waitForEventAtSameBlock(protocolVars, app, ganache, "AgreementLiquidatedV2", 5, tx.blockNumber);
await app.shutdown();
expect(result).gt(tx.blockNumber);
} catch (err) {
exitWithError(err);
}
});

it("Don't go over limit of tx per liquidation job", async () => {
try {
const flowData1 = protocolVars.cfa.methods.createFlow(protocolVars.superToken._address, accounts[0], "1000000000000000", "0x").encodeABI();
await protocolVars.host.methods.callAgreement(protocolVars.cfa._address, flowData1, "0x").send({
from: accounts[1],
gas: 1000000
});
await protocolVars.host.methods.callAgreement(protocolVars.cfa._address, flowData1, "0x").send({
from: accounts[2],
gas: 1000000
});
await protocolVars.host.methods.callAgreement(protocolVars.cfa._address, flowData1, "0x").send({
from: accounts[3],
gas: 1000000
});
await protocolVars.host.methods.callAgreement(protocolVars.cfa._address, flowData1, "0x").send({
from: accounts[4],
gas: 1000000
});
await protocolVars.host.methods.callAgreement(protocolVars.cfa._address, flowData1, "0x").send({
from: accounts[5],
gas: 1000000
});
const tx = await protocolVars.superToken.methods.transferAll(accounts[9]).send({
from: accounts[1],
gas: 1000000
});
await protocolVars.superToken.methods.transferAll(accounts[9]).send({
from: accounts[2],
gas: 1000000
});
await protocolVars.superToken.methods.transferAll(accounts[9]).send({
from: accounts[3],
gas: 1000000
});
await protocolVars.superToken.methods.transferAll(accounts[9]).send({
from: accounts[4],
gas: 1000000
});
await protocolVars.superToken.methods.transferAll(accounts[9]).send({
from: accounts[5],
gas: 1000000
});
await bootNode({batch_contract: batchContract._address, polling_interval: 1, max_tx_number: 3});
await ganache.helper.timeTravelOnce(1000, app, true);
const result1 = await protocolHelper.waitForEventAtSameBlock(protocolVars, app, ganache, "AgreementLiquidatedV2", 3, tx.blockNumber);
const result2 = await protocolHelper.waitForEventAtSameBlock(protocolVars, app, ganache, "AgreementLiquidatedV2", 2, result1);
await app.shutdown();
expect(result1).gt(tx.blockNumber);
expect(result2).gt(result1);
} catch (err) {
exitWithError(err);
}
});

it("Go over the gasLimit, reduce batch size", async () => {
try {
for (let i = 1; i <= 5; i++) {
for (let j = 6; j <= 7; j++) {
console.log(`Sending from i=${i} , j=${j} , ${accounts[i]} -> ${accounts[j]}`);
const flow = protocolVars.cfa.methods.createFlow(protocolVars.superToken._address, accounts[j], "1000000000000000", "0x").encodeABI();
await protocolVars.host.methods.callAgreement(protocolVars.cfa._address, flow, "0x").send({
from: accounts[i],
gas: 1000000
});
}
}
const tx = await protocolVars.superToken.methods.transferAll(accounts[9]).send({
from: accounts[1],
gas: 1000000
});
await protocolVars.superToken.methods.transferAll(accounts[9]).send({
from: accounts[2],
gas: 1000000
});
await protocolVars.superToken.methods.transferAll(accounts[9]).send({
from: accounts[3],
gas: 1000000
});
await protocolVars.superToken.methods.transferAll(accounts[9]).send({
from: accounts[4],
gas: 1000000
});
await protocolVars.superToken.methods.transferAll(accounts[9]).send({
from: accounts[5],
gas: 1000000
});
await bootNode({batch_contract: batchContract._address, polling_interval: 1, max_tx_number: 10});
// blockGasLimit random number picked lower than the gas limit of the tx needed for batch call
app.setTestFlag("REVERT_ON_BLOCK_GAS_LIMIT", { blockGasLimit: 3847206 });
await ganache.helper.timeTravelOnce(1000, app, true);
const result1 = await protocolHelper.waitForEventAtSameBlock(protocolVars, app, ganache, "AgreementLiquidatedV2", 5, tx.blockNumber);
const result2 = await protocolHelper.waitForEventAtSameBlock(protocolVars, app, ganache, "AgreementLiquidatedV2", 5, result1 + 1);
await app.shutdown();
expect(result1).gt(tx.blockNumber);
expect(result2).gt(result1);
} catch (err) {
exitWithError(err);
}
});
});