Merge branch 'main' into aurox-token
just-a-node authored Nov 14, 2023
2 parents 6ece79b + 30e8549 commit b893f6b
Showing 12 changed files with 195 additions and 63 deletions.
22 changes: 11 additions & 11 deletions README.md
Original file line number Diff line number Diff line change
@@ -11,7 +11,7 @@
<br />

<h3 align="center">About Connext</h3>
-<h4 align="center">Connext is public infrastructure powering fast, trust-minimized communication between blockchains.</h4>
+<h4 align="center">Connext is a public infrastructure powering fast, trust-minimized communication between blockchains.</h4>

<p align="center">
Useful Links
@@ -79,7 +79,7 @@ Connext is a modular stack for trust-minimized, generalized communication betwee

- [adapters](https://github.com/connext/monorepo/tree/main/packages/adapters) - Wrappers around external modules. These adapters can be shared between different packages.

-- [Cache](https://github.com/connext/monorepo/tree/main/packages/adapters/cache) is a wrapper around all the redis based caches that are used.
+- [Cache](https://github.com/connext/monorepo/tree/main/packages/adapters/cache) is a wrapper around all the Redis-based caches that are used.
- [Database](https://github.com/connext/monorepo/tree/main/packages/adapters/database) is the implementation of the schema and client for the database.
- [Subgraph](https://github.com/connext/monorepo/tree/main/packages/adapters/subgraph) includes the graphclient implementation and reader functions for the subgraph.
- [TxService](https://github.com/connext/monorepo/tree/main/packages/adapters/txservice) resiliently attempts to send transactions to chain (with retries, etc.) and is used to read and write to RPC providers, with fallbacks if needed. Fallbacks can be defined as arrays, providing resiliency in case of failure.
@@ -97,13 +97,13 @@ Connext is a modular stack for trust-minimized, generalized communication betwee
- [deployments](https://github.com/connext/monorepo/tree/main/packages/deployments)

- [Contracts](https://github.com/connext/monorepo/tree/main/packages/deployments/contracts) - the contracts that we deploy, along with the deployment scripts
-- [Subgraph](https://github.com/connext/monorepo/tree/main/packages/deployments/subgraph) is all the subgraph source code to define all the mappings and contains all the configurations to deploy to different graph hosted services or third party graph providers
+- [Subgraph](https://github.com/connext/monorepo/tree/main/packages/deployments/subgraph) is all the subgraph source code to define all the mappings and contains all the configurations to deploy to different graph hosted services or third-party graph providers

-- [examples](https://github.com/connext/monorepo/tree/main/packages/examples) - these are not used in production, but contains ways to use the SDK that are illustrative of how to integrate Connext
+- [examples](https://github.com/connext/monorepo/tree/main/packages/examples) - these are not used in production, but contain ways to use the SDK that are illustrative of how to integrate Connext
- [integration](https://github.com/connext/monorepo/tree/main/packages/integration) - Utilities for integration tests
- [utils](https://github.com/connext/monorepo/tree/main/packages/utils) - Collection of helper functions that are shared throughout the different packages

-<p align="right">(<a href="#top">back to top</a>)</p>
+<p align="right">(<a href="#top">⬆️ back to top</a>)</p>

## First time setup

@@ -134,7 +134,7 @@ To run Redis, execute the following command:

`docker run -it --rm --name redis -p 6379:6379 redis`

-This command will download the latest Redis image and start a container with the name redis.
+This command will download the latest Redis image and start a container with the name Redis.

Now you are ready to interact with the monorepo.

@@ -181,7 +181,7 @@ Note: We use `node-lib` as the template for all the packages. There are some oth

- Update the [`CHANGELOG.md`](./CHANGELOG.md).
- Run `yarn version:all X.X.X` where `X.X.X` is the full version string of the NPM version to deploy (i.e. `0.0.1`).
-- Use `X.X.X-beta.N` for Amarok releases from `production` branch and `X.X.X-alpha.N` for Amarok releases from `main` branch.
+- Use `X.X.X-beta.N` for Amarok releases from the `production` branch and `X.X.X-alpha.N` for Amarok releases from `main` branch.
- Commit and add a tag matching the version: `git commit -am "<version>" && git tag -am "<version>"`
- Run `git push --follow-tags`.
- The [GitHub action will](./.github/workflows/build-docker-image-and-verify.yml) publish the packages by recognizing the version tag.
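The tagging steps above can be sketched as a dry run in a throwaway repository. The version string `0.2.1-alpha.0` is a hypothetical example, and the final `git push --follow-tags` is deliberately omitted:

```shell
# Dry run of the release tagging flow in a scratch repo.
# The version string is a hypothetical example.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "dev@example.com"
git config user.name "dev"
echo "## 0.2.1-alpha.0" > CHANGELOG.md
git add CHANGELOG.md
# Commit and tag with the same version string, as the steps describe.
git commit -qm "0.2.1-alpha.0"
git tag -am "0.2.1-alpha.0" "0.2.1-alpha.0"
# In the real flow you would now run: git push --follow-tags
git tag --list
```

The annotated tag (`-a`) is what the GitHub action keys off to publish packages.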
@@ -190,7 +190,7 @@ Note: We use `node-lib` as the template for all the packages. There are some oth

## Contributing

-Contributions are what makes the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
+Contributions are what makes the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".
Don't forget to give the project a star! Thanks again!
@@ -201,19 +201,19 @@ Don't forget to give the project a star! Thanks again!
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

-<p align="right">(<a href="#top">back to top</a>)</p>
+<p align="right">(<a href="#top">⬆️ back to top</a>)</p>

<!-- LICENSE -->

## License

Distributed under the MIT License. See `LICENSE.txt` for more information.

-<p align="right">(<a href="#top">back to top</a>)</p>
+<p align="right">(<a href="#top">⬆️ back to top</a>)</p>

Project Link: [https://github.com/connext/monorepo](https://github.com/connext/monorepo)

-<p align="right">(<a href="#top">back to top</a>)</p>
+<p align="right">(<a href="#top">⬆️ back to top</a>)</p>

<!-- MARKDOWN LINKS & IMAGES -->
<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->
7 changes: 5 additions & 2 deletions packages/adapters/cache/src/lib/caches/messages.ts
@@ -85,7 +85,7 @@ export class MessagesCache extends Cache {
for (const value of values) {
const message = await this.getMessage(value.leaf);
if (message) {
-await this.storeMessage(message.data, value.status, message.attempt);
+await this.storeMessage(message.data, value.status, value.status == ExecStatus.None ? 0 : message.attempt);
}
}
}
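The change above can be read as a small rule: when `setStatus` moves a message back to `ExecStatus.None`, its retry counter starts over. A minimal sketch with stand-in types (not the real `MessagesCache` or the `@connext/nxtp-utils` definitions):

```typescript
// Stand-in types; not the real MessagesCache from @connext/nxtp-utils.
enum ExecStatus {
  None = "None",
  Sent = "Sent",
  Completed = "Completed",
}

interface CachedMessage {
  status: ExecStatus;
  attempt: number;
}

// When a message is reset to `None`, its retry counter starts over;
// any other status change keeps the previous attempt count.
const nextAttempt = (message: CachedMessage, status: ExecStatus): number =>
  status === ExecStatus.None ? 0 : message.attempt;

console.log(nextAttempt({ status: ExecStatus.Sent, attempt: 3 }, ExecStatus.None)); // 0
console.log(nextAttempt({ status: ExecStatus.Sent, attempt: 3 }, ExecStatus.Completed)); // 3
```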
@@ -122,7 +122,7 @@ export class MessagesCache extends Cache {
*/
private async addPending(originDomain: string, destinationDomain: string, leaf: string) {
const pendingKey = `${originDomain}-${destinationDomain}`;
-await this.data.rpush(`${this.prefix}:pending:${pendingKey}`, leaf);
+const message = await this.getMessage(leaf);
+if (!message) {
+  await this.data.rpush(`${this.prefix}:pending:${pendingKey}`, leaf);
+}
}
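The guard added to `addPending` can be sketched with an in-memory stand-in for the Redis-backed cache (names here are illustrative, not the real API): a leaf is only pushed onto the pending list if no message is cached for it yet, so re-storing a message does not create duplicate pending entries.

```typescript
// In-memory sketch of the dedup guard added to addPending.
// (Stand-in for the Redis-backed MessagesCache; names are illustrative.)
class PendingSketch {
  private messages = new Map<string, { leaf: string }>();
  private pending: string[] = [];

  addPending(leaf: string): void {
    // Only enqueue leaves we have not cached before.
    if (!this.messages.has(leaf)) {
      this.pending.push(leaf);
    }
  }

  storeMessage(leaf: string): void {
    this.addPending(leaf); // enqueue first if the leaf is new
    this.messages.set(leaf, { leaf }); // then cache the message
  }

  getPending(): string[] {
    return [...this.pending];
  }
}

const cache = new PendingSketch();
cache.storeMessage("0xaaa");
cache.storeMessage("0xaaa"); // re-storing does not duplicate the pending entry
console.log(cache.getPending()); // a single "0xaaa" entry
```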

/**
47 changes: 46 additions & 1 deletion packages/adapters/cache/test/lib/caches/messages.spec.ts
@@ -1,4 +1,13 @@
-import { ExecStatus, Logger, RelayerType, XMessage, expect, mkBytes32, mock } from "@connext/nxtp-utils";
+import {
+  ExecStatus,
+  Logger,
+  RelayerType,
+  XMessage,
+  expect,
+  getRandomBytes32,
+  mkBytes32,
+  mock,
+} from "@connext/nxtp-utils";
import { MessagesCache } from "../../../src/index";

const logger = new Logger({ level: "debug" });
@@ -10,6 +19,23 @@ const mockXMessages: XMessage[] = [
{ ...mock.entity.xMessage(), originDomain, destinationDomain, leaf: mkBytes32("0x222") },
];

+const genMockXMessages = (count: number): XMessage[] => {
+  const xMessages: XMessage[] = [];
+  for (let i = 0; i < count; i++) {
+    const leaf = getRandomBytes32();
+    const root = getRandomBytes32();
+    xMessages.push({
+      ...mock.entity.xMessage(),
+      originDomain,
+      destinationDomain,
+      origin: { index: 100 + i, root, message: leaf },
+      leaf,
+    });
+  }
+
+  return xMessages;
+};

describe("MessagesCache", () => {
beforeEach(async () => {
messagesCache = new MessagesCache({ host: "mock", port: 1234, mock: true, logger });
@@ -140,6 +166,25 @@ describe("MessagesCache", () => {
const message2 = await messagesCache.getMessage(mockXMessages[1].leaf);
expect(message2?.status).to.be.deep.eq(ExecStatus.Completed);
});

+it("shouldn't add the message to the pending list if exists", async () => {
+  const mockXMessages = genMockXMessages(10);
+  await messagesCache.storeMessages(mockXMessages);
+
+  const xMessage1 = mockXMessages[0];
+  await messagesCache.setStatus([{ leaf: xMessage1.leaf, status: ExecStatus.Sent }]);
+  const message = await messagesCache.getMessage(xMessage1.leaf);
+  expect(message?.status).to.be.eq(ExecStatus.Sent);
+
+  let pendings = await messagesCache.getPending(originDomain, destinationDomain, 0, 100);
+  expect(pendings.length).to.be.eq(10);
+  expect(pendings).to.be.deep.eq(mockXMessages.map((it) => it.leaf));
+
+  await messagesCache.removePending(originDomain, destinationDomain, [xMessage1.leaf]);
+  pendings = await messagesCache.getPending(originDomain, destinationDomain, 0, 100);
+  expect(pendings.length).to.be.eq(9);
+  expect(pendings).to.be.deep.eq(mockXMessages.slice(1).map((it) => it.leaf));
+});
});

describe("#nonce", () => {
10 changes: 10 additions & 0 deletions packages/agents/lighthouse/src/errors/prover.ts
@@ -59,3 +59,13 @@ export class NoMessageProof extends NxtpError {
super(`No index ${index} for message hash ${leaf}`, context, NoMessageProof.name);
}
}

+export class EmptyMessageProofs extends NxtpError {
+  constructor(originDomain: string, destinationDomain: string, context: any = {}) {
+    super(
+      `Empty message proofs for origin: ${originDomain} and destination: ${destinationDomain}`,
+      context,
+      EmptyMessageProofs.name,
+    );
+  }
+}
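A sketch of how the new error class behaves, using a stand-in for the `NxtpError` base (the real one lives in `@connext/nxtp-utils`); the domain ids are example strings:

```typescript
// Stand-in base class; the real NxtpError comes from @connext/nxtp-utils.
class NxtpError extends Error {
  constructor(message: string, public readonly context: any = {}, public readonly type?: string) {
    super(message);
  }
}

class EmptyMessageProofs extends NxtpError {
  constructor(originDomain: string, destinationDomain: string, context: any = {}) {
    super(
      `Empty message proofs for origin: ${originDomain} and destination: ${destinationDomain}`,
      context,
      EmptyMessageProofs.name,
    );
  }
}

// Throwing (instead of silently returning) lets the consumer's catch block
// reject the broker message and reset statuses for a retry.
try {
  throw new EmptyMessageProofs("1111", "2222"); // example domain ids
} catch (err) {
  console.log((err as NxtpError).message);
  // Empty message proofs for origin: 1111 and destination: 2222
}
```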
Original file line number Diff line number Diff line change
@@ -41,7 +41,7 @@ export const consume = async () => {
await processMessages(brokerMessage, requestContext);
channel.ack(message);
} catch (err: unknown) {
-logger.error("Processing messaages failed", requestContext, methodContext, undefined, { err });
+logger.error("Processing messages failed", requestContext, methodContext, undefined, { err });
channel.reject(message, false);
const statuses = brokerMessage.messages.map((it) => ({ leaf: it.leaf, status: ExecStatus.None }));
await cache.messages.setStatus(statuses);
37 changes: 21 additions & 16 deletions packages/agents/lighthouse/src/tasks/prover/operations/process.ts
@@ -14,6 +14,8 @@ import {
NoMessageRootProof,
NoMessageProof,
MessageRootVerificationFailed,
+  EmptyMessageProofs,
+  RelayerSendFailed,
} from "../../../errors";
import { sendWithRelayerWithBackup } from "../../../mockable";
import { HubDBHelper, SpokeDBHelper } from "../adapters";
@@ -86,7 +88,8 @@ export const processMessages = async (brokerMessage: BrokerMessage, _requestCont
let failCount = 0;
for (const message of messages) {
// If message has been removed. Skip processing it.
-if (!cache.messages.getMessage(message.leaf)) continue;
+const _message = await cache.messages.getMessage(message.leaf);
+if (!_message) continue;

const messageEncodedData = contracts.spokeConnector.encodeFunctionData("messages", [message.leaf]);
try {
@@ -150,6 +153,9 @@ export const processMessages = async (brokerMessage: BrokerMessage, _requestCont
});
// Do not process message if proof verification fails.
failCount += 1;
+
+// Before skipping this message, reset its status so it can be retried in the next cycle.
+await cache.messages.setStatus([{ leaf: message.leaf, status: ExecStatus.None }]);
continue;
}
}
Expand All @@ -171,7 +177,8 @@ export const processMessages = async (brokerMessage: BrokerMessage, _requestCont
originDomain,
destinationDomain,
});
-return;
+
+throw new EmptyMessageProofs(originDomain, destinationDomain);
}

// Proof path for proving inclusion of messageRoot in aggregateRoot.
@@ -271,20 +278,18 @@ export const processMessages = async (brokerMessage: BrokerMessage, _requestCont
requestContext,
);
logger.info("Proved and processed message sent to relayer", requestContext, methodContext, { taskId });
-if (taskId) {
-await cache.messages.addTaskPending(
-taskId,
-relayerType,
-originDomain,
-destinationDomain,
-provenMessages.map((it) => it.leaf),
-);
-const statuses = messages.map((it) => ({ leaf: it.leaf, status: ExecStatus.Sent }));
-await cache.messages.setStatus(statuses);
-
-return;
-}
+await cache.messages.addTaskPending(
+taskId,
+relayerType,
+originDomain,
+destinationDomain,
+provenMessages.map((it) => it.leaf),
+);
+const statuses = messages.map((it) => ({ leaf: it.leaf, status: ExecStatus.Sent }));
+await cache.messages.setStatus(statuses);
} catch (err: unknown) {
logger.error("Error sending proofs to relayer", requestContext, methodContext, jsonifyError(err as NxtpError));
+throw new RelayerSendFailed({
+  error: jsonifyError(err as Error),
+});
}
};
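The process.ts changes converge on one retry pattern: on failure, a message's status is reset to `ExecStatus.None` so the next prover cycle picks it up again, and the error is surfaced (e.g. as `RelayerSendFailed`) instead of being swallowed. A simplified synchronous sketch with stand-in types (the real code is async and uses the Redis-backed cache):

```typescript
// Stand-in types; the real cache and relayer live elsewhere in the monorepo.
enum ExecStatus {
  None = "None",
  Sent = "Sent",
}

const processLeaves = (
  leaves: string[],
  statuses: Map<string, ExecStatus>,
  sendToRelayer: (leaves: string[]) => string, // returns a task id; may throw
): void => {
  try {
    sendToRelayer(leaves);
    // Success: mark messages as sent so they are not re-enqueued.
    for (const leaf of leaves) statuses.set(leaf, ExecStatus.Sent);
  } catch (err) {
    // Failure: reset so the next cycle retries these messages.
    for (const leaf of leaves) statuses.set(leaf, ExecStatus.None);
    throw err; // surfaced as RelayerSendFailed in the real code
  }
};

const statuses = new Map<string, ExecStatus>([["0xaaa", ExecStatus.None]]);
processLeaves(["0xaaa"], statuses, () => "task-1");
console.log(statuses.get("0xaaa")); // Sent
```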
2 changes: 2 additions & 0 deletions packages/agents/lighthouse/test/mock.ts
@@ -57,7 +57,9 @@ export const mockCache = () => {
storeMessages: stub().resolves(),
getPending: stub().resolves(),
getPendingTasks: stub().resolves(),
+addTaskPending: stub().resolves(),
getMessage: stub().resolves(),
+setStatus: stub().resolves(),
increaseAttempt: stub().resolves(),
removePending: stub().resolves(),
getNode: stub().resolves(),
